Adam Ash

Your daily entertainment scout. Whatever is happening out there, you'll find the best writing about it in here.

Wednesday, May 31, 2006

US Diary: all agog about Haditha, the Iraq War's "My Lai"

Well, the storm about Haditha, the alleged massacre of Iraqi civilians by US marines, is breaking about our ears. I'm sick of it already, but tomorrow I'll provide a number of posts from all over the Net about it -- just so you can get a jump on the discussion, and also because it really drives me crazy to think about it, and I want to get it out of the way.
Hell, the Iraq War is about more than the slaughter of these particular innocent women and children. Since the war started, we've probably bombed more than 10,000 innocent women and children to death. This incident reminds us that war is murder, and that's what we've been doing over there -- murdering Iraqis, who were no threat to us ever. When old men send young men with guns some place to kill people, there are going to be atrocities, for sure. The real atrocity is those old men sending those young men in the first place.

US Diary: the Bush outbreak of humility comes too late, says this guy

From Hubris to Humility -- by Derrick Z. Jackson

President Bush's outbreak of humility was too late. In a joint press conference last week with British Prime Minister Tony Blair, Bush dropped the words "missteps" and "mistakes" into his opening remarks. This was a different Bush from the one who refused in a press conference two years ago to come up with a single mistake he had made after 9/11.

Asked by a reporter what missteps and mistakes he regretted the most, Bush said he wished he had not "sent the wrong signal to people" by goading Iraqi rebels with "bring 'em on" or declaring he wanted Osama bin Laden "dead or alive." Bush said the biggest mistake of the Iraqi occupation was the prisoner abuse scandal at the Abu Ghraib prison.

It was just Bush's plummeting luck that his touchy-feely moment was upstaged by the hardening evidence of an American atrocity in Iraq. Last November, Marines allegedly went house to house in Haditha to kill 24 Iraqi civilians, including a 4-year-old boy, a 66-year-old woman (shot in the back), and a 77-year-old man in a wheelchair (shot in the chest). They were apparently killed execution-style by soldiers enraged when Lance Corporal Miguel Terrazas of El Paso, Texas, was killed by a roadside bomb.

The original story the military gave was that a Marine in a convoy and 15 Iraqi civilians were all killed by the bomb. Gunmen followed up the blast by attacking the convoy. The Marines said they shot back, killing eight "insurgents" and wounding another.

Through a Time report two months ago and a flurry of media disclosures last weekend, it became almost undeniable that the Marines engaged in a cover-up of the killings. Sensing a public relations and morale disaster, the Marine commandant, General Michael Hagee, flew to Iraq to talk to the troops about respecting the Geneva Convention.

In a statement before his departure, Hagee said he would ask soldiers to guard against "becoming indifferent to the loss of a human life as well as bringing dishonor upon ourselves." He said, "We do not employ force just for the sake of employing force. We use lethal force only when justified, proportional, and, most importantly, lawful." Hagee added, "To a Marine, honor is more than just honesty, it means having uncompromising personal integrity and being accountable for all actions."

As in all such matters, one has to stop for a moment and acknowledge the majority of soldiers who are doing their job with courage and honor, those who would stop and pick up a dropped book for a child or a coin for an elder. But there is also no getting around the fact that Bush is being betrayed by his war, one that lacked from its outset the personal integrity, accountability, and honor the commander in chief demands of his soldiers.

Humility looks thin in a war that began without a moral underpinning, on the claim we were ridding the world of Iraqi leader Saddam Hussein's weapons of mass destruction. On the basis of that claim, this nation rained down death on tens of thousands of Iraqi civilians, at least 30,000 by Bush's own admission. It continues in the chaotic deaths of scores of people every few days and at a financial cost that the Congressional Research Service says is now $8 billion a month, way past the monthly pace of spending for Vietnam.

Bush's humility looks thinner still when others in his administration continue to talk with a frightening certitude. On the same day that newspapers reported on events in Haditha, Vice President Dick Cheney told graduates of the US Naval Academy that the enemy in the war on terror ``is as brutal and heartless as any we have ever faced. The enemy wears no uniform, has no regard for the rules of warfare, and is unconstrained by any standard of decency or morality. They plot and plan in secret, target the defenseless, and rejoice at the death of innocent and unsuspecting human beings. . . . they hate us, they hate our country, and they hate the liberties for which we stand."

Haditha and our needless killing of tens of thousands of Iraqis continue to turn Cheney's words on their head. Probably more than a few of our soldiers understand that our invasion was unconstrained by decency. It should not surprise us that a few of them may have turned their hatred of being in Iraq into a door-to-door killing spree of the innocent.

Teenage sex is supposed to be a risk in the US. In Europe, it's just good fun

Teenage Sex in Western Europe – by Elizabeth Agnvall

Pierre-Andre Michaud, chief of the Multidisciplinary Unit for Adolescent Health at the University of Lausanne Hospital in Switzerland and a leading researcher in European teen sexuality, dismisses the idea--widely held in the United States--that sex constitutes risky behavior for teens. In an editorial in May's Journal of Adolescent Health, he wrote:

"In many European countries -- Switzerland in particular -- sexual intercourse, at least from the age of 15 or 16 years, is considered acceptable and even part of normative adolescent behavior." Switzerland, he noted, has one of the world's lowest rates of abortion and teen pregnancy. Teens there, like those in Sweden and the Netherlands, have easy access to contraceptives, confidential health care and comprehensive sex education.

A 2001 Guttmacher Institute report, drawing on data from 30 countries in Western and Eastern Europe, concluded: "Societal acceptance of sexual activity among young people, combined with comprehensive and balanced information about sexuality and clear expectations about commitment and prevention of childbearing and STDs [sexually transmitted diseases] within teenage relationships, are hallmarks of countries with low levels of adolescent pregnancy, childbearing and STDs." The study cited Sweden as the "clearest of the case-study countries in viewing sexuality among young people as natural and good."

Cecilia Ekéus, a nurse midwife with a PhD in public international health who works with the Institute of Women and Child Health at Karolinska Institute in Stockholm, says Swedish society teaches that sex should occur in a committed relationship "and also that teenagers should use contraceptives, be informed and take responsibility. But in general we are open and positive and think that it's okay."

In Sweden, compulsory sex education starts when children are 10 to 12. Without parental consent, teens can get free medical care, free condoms, prescriptions for inexpensive oral contraceptives and general advice at youth clinics. Emergency contraceptives (the so-called morning-after pill) are available without a prescription.

Religion tends to insert itself less in government policy on sex education, contraception and abortion in Western Europe than in the United States, says Michaud. The Catholic Church exerted minimal influence in Switzerland's AIDS prevention campaign, he said. "All in all, the church has been very tolerant and does not really get involved in sexual matters," Michaud wrote in an e-mail.

Straightforward messages on how to prevent STDs and teen pregnancy help offset the impact on teens of sexually explicit ads, movies and other mass media -- as ubiquitous in Western Europe as in the United States, said Robert Blum, chair of the Department of Population and Family Health at the Johns Hopkins Bloomberg School of Public Health.

Western Europe also attaches more social stigma to teen pregnancy and teen motherhood than do some American sub-cultures, says Bill Albert, spokesman for the National Campaign to Prevent Teen Pregnancy, a U.S. group: "The focus [in Western Europe] is much more on preventing pregnancy and less on sex itself," Albert said.

Although some experts argue that economic, educational and racial diversity in the U.S. distorts national figures and invalidates comparisons with more homogenous Western European countries, Michaud said he has studied Swiss teens who have dropped out of high school, used drugs or lived in disadvantaged areas of the country. They tend to use contraception regardless of economic status, he said.

"My feeling is that it is impossible to have a double message toward young people," Michaud said, in a phone interview from his Lausanne office. "You can't say at the same time, 'Be abstinent, it's the only fair, good way, to escape from having HIV . . . and at the same time say, 'Look, if you ever happen to have sex, then please do that and that and that.' You probably have to choose the message."

Abstinence, he said, is not something the Swiss press on teens. "We think it's unfair. It's useless. It's inefficient. We have been advocating the use of the condom . . . and I think that we tend to be successful."

Joan-Carles Surís, head of the research group on adolescent medicine at the University of Lausanne, puts it another way:

"The main difference is that in the States sexual activity is considered a risk. Here we consider it a pleasure."

(Elizabeth Agnvall is a frequent contributor to the Health section.)



The TV morning host smiled broadly at Adam. She was respectful. After all, his audience numbers were bigger than hers. And he had just snared the interviewer’s coup of all coups: Jeremiah Luther had agreed to an hour-long interview, one-on-one, with Adam. Just him and Adam for a whole hour. The network had been hyping it for weeks. Adam was looking forward to the taping. Every day he thought of new questions to ask.

“So your show is about Creationism.”

“Yes,” said Adam, and smoothly took up the bait. “But Creationism is just the door. Through this door I have the freedom to steer my guests into any topic of discussion. For example, our very first program went off into a very different tangent.”

“The controversy seems to be raging more than when it started,” the morning host said.

“I know. It’s tantamount to an inordinate wave in public opinion. Some nerve has been struck. I think this speaks to the hunger of our society for a renovation of spiritual values.”

“Where do you stand in the controversy?”

“How do you mean?”

“Do you think that Jeremiah Luther came from a virgin birth, and if so, was he conceived by an angel father or by an even higher being, the highest father of all? Could he be what I have heard one preacher say, a reincarnation of Christ Himself?”

“At the end of the day I tend towards the angel hypothesis,” said Adam. He felt it right to be seen to take a center position, between the faction who said Jeremiah was out of his mind, and the faction who claimed him as the new Christ of the Second Coming.

“That position appears to be drowned out by the factions on either side of it,” said the morning host.

“Jeremiah himself has said that he’s not sure. The point being he feels called by heaven, but he doesn’t know if he is being called by an angel or by the highest Father Himself. I’ve spoken to Jeremiah about this, and he is adamant that he himself is not sure about it. It is rather amazing that he has followers now who are more sure of who he is than he is himself. How would they know better than he? It’s tantamount to spiritual arrogance.”

“They are hungry for the Savior to appear amongst us.”

“Absolutely, it is a measure of their hunger. But I think it would be wise for Jeremiah to come to the security of his knowledge himself first, before his followers jump to conclusions that he isn’t willing to make yet. If I may be permitted an attempt at a prescription, they should wait for his lead. The followers should beware of getting ahead of their leader. And if Jeremiah has a lead, our program will be there with the news. In the final analysis, we’re the show that brings the cutting edge of faith issues to your screen. Keep watching my show, Keep the Faith, for all new developments pertaining to the faith of the nation.”

Adam felt proud of himself. His lecturing style adapted very well to TV. He was fitting smoothly into the power structure. He was becoming a king of the media. Already he was above this host in the TV pecking order. She was sucking up to him like Proctor was.

“What do you think of the latest news about Pat Robertson’s missing penis?” she asked. She was referring to the report that the dead evangelist’s penis had been found, and that it was being auctioned off on eBay. Family members of Pat Robertson were still resisting calls for his corpse to be exhumed to ascertain if his penis was still attached to his body.

“I don’t think about sensationalist nonsense like that,” replied Adam, and looked at the morning host with withering scorn.

This woman was a cretin. He had lowered himself when he agreed to be interviewed by her. He was way above her. He was becoming one of those people who didn’t report the news, or interpret it, or discuss it, but created it. They were bugs, Proctor and she. Hateful little creatures. Even Eve had become a stubborn little hardhead. Why didn’t she submit to him? He was a famous man. Who did she think she was?

Bookplanet: an Updike novel about suicide bombers -- and a movie, and a report

1. In 'Terrorist,' a Cautious Novelist Takes On a New Fear

John Updike is wary of the Internet, concerned that a worm could migrate into his computer and chew up whatever he is working on. In a much-publicized speech recently at BookExpo America, the annual publishing convention, he also took a dim view of the notion of digitizing all books on an enormous online data bank.

For his new novel, "Terrorist," however, he ventured onto the Web to research bomb detonators. He was fairly certain, he remarked recently during an interview in Boston, that the only detonator he could recall — the one that Gary Cooper plunges in "For Whom the Bell Tolls" — must be out of date, but he was also reassured to discover, as he put it, that "the Internet doesn't like you to learn too much about explosives."

While working on the book, Mr. Updike, now 74, white-haired, bushy-browed and senatorial-looking, also risked suspicion by lingering around the luggage-screening machines at La Guardia Airport, where he learned that the X-rays were not in black and white, as he had imagined, but rather in lurid colors: acid green and red.

And he hired a car and a driver to take him around some of the seedier neighborhoods in Paterson, N.J., and to show him some churches and storefronts that had been converted into mosques. "He did his best, but I think I puzzled him as a tour customer," Mr. Updike said.

"Terrorist," which comes out from Alfred A. Knopf next week, is set in Paterson — or, rather, in a slightly smaller, tidier version of the city, called New Prospect — and is about just what the title says. Its protagonist is an 18-year-old named Ahmad, the son of a hippie-ish American mother and an Egyptian exchange student, now absent, who embraces Islam and is eventually recruited to blow up the Lincoln Tunnel.

The new novel is Mr. Updike's 22nd and in some ways a departure. It loosely follows the conventions of a thriller, for example, one of the few forms that Mr. Updike, a jack of nearly all literary trades, had not tried before. And yet as he spoke about "Terrorist" it became clear that the novel also knits together some themes and preoccupations that have been with him almost from the beginning: sex, death, religion, high school and even Paterson itself, which also figures prominently in his novel "In the Beauty of the Lilies" and which Mr. Updike said he sometimes imagines as another version of Reading, Pa., near his hometown, Shillington.

Mr. Updike, who confessed to a mild phobia about tunnels, said the image of an explosion was actually the inspiration for the book. "That picture was the beginning," he added. "The fear of the tunnel being blown up with me in it — the weight of the water crashing in."

Originally, though, he imagined the protagonist as a young Christian, an extension of the troubled teenage character in his early story "Pigeon Feathers," who comes to feel betrayed by a clergyman. "I imagined a young seminarian who sees everyone around him as a devil trying to take away his faith," he said. "The 21st century does look like that, I think, to a great many people in the Arab world."

When Mr. Updike switched the protagonist's religion to Islam, he explained, it was because he "thought he had something to say from the standpoint of a terrorist."

He went on: "I think I felt I could understand the animosity and hatred which an Islamic believer would have for our system. Nobody's trying to see it from that point of view. I guess I have stuck my neck out here in a number of ways, but that's what writers are for, maybe."

He laughed and added: "I sometimes think, 'Why did I do this?' I'm delving into what can be a very sore subject for some people. But when those shadows would cross my mind, I'd say, 'They can't ask for a more sympathetic and, in a way, more loving portrait of a terrorist.' "

Ahmad is lovable, or at least appealing; he's in many ways the most moral and thoughtful character in the entire book, and he gains in vividness from being pictured in that familiar Updikean setting, the American high school.

"It might be that, having gone to high school and having a father who was a high school teacher, that I'm imbued with the ethos," Mr. Updike said. "It occurred to me, though, that a real omission in terms of plausibility is that I don't do enough with cellphones. My school isn't really electrified."

When he was in high school, Mr. Updike added, his own head was "in The New Yorker instead of the Koran," and so while working on "Terrorist" he again picked up that religious text, a book he first read when learning how to impersonate Colonel Ellelloû, the narrator of Mr. Updike's 1978 novel, "The Coup."

"A lot of the Koran does not speak very eloquently to a Westerner," he said. "Much of it is either legalistic or opaquely poetic. There's a lot of hellfire — descriptions of making unbelievers drink molten metal occur more than once. It's not a fuzzy, lovable book, although in the very next verse there can be something quite generous."

"Terrorist" even includes some Koran passages in Arabic transliteration; Shady Nasser, a graduate student, helped Mr. Updike on those sections. "My conscience was pricked by the notion that I was putting into the book something that I can't pronounce," he said, but he added: "Arabic is very twisting, very beautiful. The call to prayer is quite haunting; it almost makes you a believer on the spot. My feeling was, 'This is God's language, and the fact that you don't understand it means you don't know enough about God.' "

For all its theological concerns, "Terrorist" is also an authentic Updike novel, and, thankfully, includes some sheet-rumpled, love-flushed sex scenes between Ahmad's mother, Teresa, and Jack Levy, a guidance counselor at the high school.

"I was happy — because there was so much shaky ground in the writing of this novel — when Jack began to hit on Terry Mulloy," Mr. Updike said. "I felt I was in a scene I could handle. That little romance was very real — to me, at least. I liked those two because they're normal, godless, cynical but amiable modern people."

While waiting for "Terrorist" to come out, Mr. Updike has been working on one of his omnibus volumes. As for what will come after that, he said he was not sure.

"All my life there has been one more thing I think I can do — but only one," he said. "I feel I'm very near the bottom of my barrel at every moment of my career — not like Dostoevsky, who had a notebook full of ideas when he died. I try to see the next book in my mind, and I see a slightly plump book with a lot of people in it, like 'Gosford Park.' But it's not a murder mystery because I'm not clever enough to write one of those."

2. Movie: “The Cult of the Suicide Bomber”
Review by David Denby

Many of us have spent hours at the movies relishing violence and explosions as entertainment. In the documentary “The Cult of the Suicide Bomber,” we see explosions in which real people die, and the sequence comes as a kick in the gut. In 2005, Robert Baer, the C.I.A. case officer whose adventures and misadventures served as the basis for George Clooney’s role in “Syriana,” went around the Middle East with a camera crew, interviewing Lebanese and Israeli intelligence officers and politicians, and the families of suicide bombers and their victims. In between the interviews, Baer and his collaborators, the producer-directors Kevin Toolis and David Batty, drawing on news-agency footage, lay out the historical development of suicide as a weapon—first as a weapon of war, then of terror. The movie is a pageant of fanaticism, sacrifice, and death, and the most striking passage comes near the end. Some of the Lebanese and West Bank bombers were trailed by cameramen, and the footage they recorded—say, of a car bomber taking out an Israeli military patrol—was later used by terrorist organizations as a recruiting and propaganda tool. Baer shows some of those films again and again, and by the end of the sequence I wasn’t sure what enraged me more—the moviemaking terrorists, the blithe idiocy of routine commercial entertainment, or my own complacency in putting up with so much of it.

Baer, who narrates, begins by saying that he is obsessed with the terrorist bombing of the American Embassy in Beirut, in April, 1983, which killed sixty-three people and wiped out most of the C.I.A. station there. How did suicide become so potent a force? He goes through the stages: Ayatollah Khomeini, in the early nineteen-eighties, sanctified very young soldiers’ dying in the defense of Iran, which encouraged a thirteen-year-old boy to strap explosives to his body and blow up an Iraqi tank; Hezbollah used terror against Israelis occupying southern Lebanon in the eighties and nineties; and so on. The filmmakers place each development in its political context, and they trace the increasingly sinister use of religion to justify self-slaughter and murder. Baer can’t say who bombed the Embassy, but he strongly suggests that Iran was behind it, and that Iran has been waging a secret war against American interests for more than a quarter of a century.

That’s the news that Baer threads through “The Cult of the Suicide Bomber.” But the real center of interest, for me, at least, lies in the families of the young men who died. The act by which these kids have fulfilled themselves has ended any possibility that we might attain further knowledge of their temperaments or their souls. What of those who are left behind? An Iranian mother in a black head scarf, referring to her fifteen-year-old son—a photograph shows a slender boy with dark eyes and the faint beginnings of a mustache—who died in battle, says, simply, “He became a martyr for God.” In a city near Tehran, a male relative of a bomber, pointing to a photograph, says, “There’s the martyr Hossein.” Both speak as if the boys had attained a purely official identity, as if they were not their own dead children. “It was a good path for him to take. So why would we stop him?” the mother asks Baer, and there are more remarks, from brothers, sisters, and friends, in praise of the suicide’s duty and rectitude. Other families of young dead warriors may grieve, but these people do not. Did Baer choose them for their ideological purity, or were they the only ones who would talk to him?

The families must be under enormous pressure from Hezbollah, Hamas, and other such organizations to say only the approved things. Still, knowing this, one looks for a fuller response. Did at least one of the bombers’ brothers or sisters harbor such angry thoughts as “My brother was seduced into giving up his life by a cynical and vulgar fantasy of virgins in Paradise”? Or perhaps, in a more analytic vein, did one of them think, “Young men in this society feel they have no future, so why shouldn’t they give up their lives”? Those words, which would suggest a social, rather than a religious, context for the act, are never spoken, or even hinted at. Any kind of psychological explanation is ignored, too. The families utterly reject the word “suicide.” The appropriate word is “martyr,” a bomber’s sister firmly tells Baer. Suicide, it seems, implies the possibility of unhappiness or compulsion, an emotional need that has not been met, whereas martyrdom, as the families present it, is always rationally chosen, and a gift to everyone. The religious language rules out any reason for doing something other than the single reason that is given (American fundamentalists talk the same way).

Hearing this, Baer doesn’t push very hard. He’s in a precarious situation; he enjoys, we imagine, no more than a limited welcome. But his failure to get anything more out of the families frustrates his viewers, and probably frustrates him, too. Near the end of the journey, chronicling Sunni car bombers in Iraq, he talks sorrowfully of Muslims killing Muslims, and he concludes that suicide bombing has lost any coherent political meaning and has taken on an irresistible life of its own as a glamorous cult. And the word he finishes with, to describe the intentions and results of this cult, is “chaos.” But the movie suggests that some kinds of chaos, however much induced by professional terrorists, don’t come about without the consent and support of deeply religious people.

3. CITIZENS – by Steve Coll

In a world amply populated with angry young Muslims, it is a question of some interest why a small number choose to become suicide bombers. President Bush addresses the matter in starkly religious language, consigning it to an eternal contest between good and evil. American scholars have begun to attack the problem with scientific method; Robert Pape, of the University of Chicago, for example, recently mustered data to argue that suicide attacks are a rational means by which the weak can humble the strong. To this potpourri of hypotheses can now be added a compelling work by anonymous bureaucrats in Great Britain, under the oddly redundant title “Report of the Official Account of the Bombings in London on 7th July 2005.”

On that summer morning, three young Muslim men blew themselves up on Underground cars, and a fourth immolated himself on a double-decker bus; fifty-two people died, and several hundred suffered injuries. The most striking aspect of the inquiry into the attacks, which was published earlier this month, is the extent to which it plumbs the suicide bombers’ motivations.

The four men depicted in the report are in some respects unfathomable. When Shehzad Tanweer, a talented athlete who was twenty-two years old, bought snacks at a highway convenience store four hours before his death, he haggled over the change. Hasib Hussain, who was eighteen, strode into a McDonald’s just half an hour before he killed himself and thirteen others. When the four men took leave of one another at King’s Cross station, they hugged, and they appeared, the report says, “happy, even euphoric.” To the credit of the British investigators, however, they were not satisfied merely to observe that the bombers had enrolled in an appealing death cult. They wanted to know how the men were radicalized on British soil—not to excuse them but to aid a strategy of prevention.

Three of the bombers were British citizens, and the report makes plain that this was not just a technical matter of passports or residence status. They were far from destitute; born to Pakistani immigrants, they attended secular state schools and received government and family support throughout their short lives. (The fourth bomber was a Jamaican immigrant who converted to Islam, and whose life was more troubled and erratic.) The ringleader, Mohammad Sidique Khan, even worked for a time at a British welfare agency; later, he was a well-liked teacher’s aide. And yet citizenship failed to compete with the lure of enlistment in an imagined jihadi militia. Khan lost his job just before he began to plan the London bombings. In a video statement left for posthumous broadcast, he emphasized with every pronoun he chose that he had abandoned British identity for service in an enemy army: “Your democratically elected governments continuously perpetuate atrocities against my people all over the world. And your support of them makes you directly responsible.”

The report concludes that there is no consistent profile that could be used to help identify who might be vulnerable to such radicalization, and yet the biographies do show in some detail how the making of an Al Qaeda-inspired suicide bomber is an idiosyncratic narrative of push and pull. Alienation from citizenship or family and a loss of faith in secular opportunity create a pool of potential volunteers; preachers, recruiters, and Al Qaeda leaders take it from there. The British parliament’s main intelligence-oversight committee, in a separate report, admits that Britain has failed to consider adequately how it might reduce the number of potential recruits: “We remain concerned that across the whole of the counter-terrorism community the development of the home-grown threat and the radicalisation of British citizens were not fully understood or applied to strategic thinking.”

It seems obvious that citizenship, assimilation, religious tolerance—the basic ideals of an open, plural society—should play a prominent part in counterterrorism strategy, to at least complement the funds that governments pour so eagerly into concrete barriers, listening devices, and retina scanners. When President Bush promotes the spread of democracy abroad, he argues that the distractions of civic life offer a partial cure for terrorism. Yet he offers no comparable rhetoric, never mind a strategy, at home. The scant outreach to American Muslims that the Bush Administration undertakes, aside from occasional religious conferences and White House iftars, is left mainly to the F.B.I.

The results are predictably depressing: a startling number of America’s several million Muslim residents think that the United States is not safe for them. A poll conducted by Zogby International just before the last Presidential election, for example, showed that more than a third of American Muslims believe that the Administration is waging a war on Islam; a similar number believe that “American society overall is disrespectful and intolerant toward Muslims”; and more than half said that they knew someone who had suffered discrimination. It is fair to assume that these numbers understate the problem. If you were a media-literate Muslim immigrant, would you express your frustration to a pollster on the telephone?

British and European Muslims are more often poor, unemployed, and trapped in segregated housing than their American counterparts, but this hardly seems grounds for self-congratulation or complacency, particularly in this country’s current phase of fence-building and nationalism. The Bush Administration has failed to manage the connection between immigration policy and counterterrorism strategy; instead, the President is succumbing to immigrant-bashers. The nativists are ascendant in the Republican Party as final negotiations begin in Congress on a major immigration-reform bill that is rooted in a demagogic movement to strengthen border security (a worthy objective, but not one likely to be achieved by such craven symbolic gestures as the deployment of National Guard troops). It is not clear whether a new law will be passed this summer, or just how bad a bill it will be. Even if a compromise is reached that offers a reasonable path to citizenship for illegal immigrants who have long lived here, the price will almost certainly include an expansion of police powers and the re-categorizing of many immigration violations as felonies—a prescription for error and abuse.

Homeland Security Secretary Michael Chertoff told the Washington Post last week that it is “very, very hard to detect” a jihadi terrorist who is “purely domestic, self-motivated, self-initiating.” The population in which such radicalization may occur is, of course, the one that is on the receiving end of our hysterical immigration debate. To impress upon immigrants that the federal sheriff and his posse will be riding yet again, that detention and expulsion await those whose papers are not in order, and that obtaining citizenship will prove, at best, a risky ordeal is not only unnecessary and wrong; it is dangerous. Almost five years after September 11th, we remain burdened by a President who believes passionately that he is at war and yet has only the most tenuous grasp of his enemy.

Deep Thoughts: the internet, is it a stupid hive mind, or the potential savior of humankind?

The hive mind is for the most part stupid and boring. Why pay attention to it?

The problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous.

The Hazards of the New Online Collectivism
By Jaron Lanier
An Edge Original Essay


In "Digital Maosim", an original essay written for Edge, computer scientist and digital visionary Jaron Lanier finds fault with what he terms the new online collectivism. He cites as an example the Wikipedia, noting that "reading a Wikipedia entry is like reading the bible closely. There are faint traces of the voices of various anonymous authors and editors, though it is impossible to be sure".

His problem is not with the unfolding experiment of the Wikipedia itself, but "the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous".

And he notes that "the Wikipedia is far from being the only online fetish site for foolish collectivism. There's a frantic race taking place online to become the most 'Meta' site, to be the highest level aggregator, subsuming the identity of all other sites".

Where is this leading? Lanier calls attention to the "so-called 'Artificial Intelligence' and the race to erase personality and be most Meta. In each case, there's a presumption that something like a distinct kin to individual human intelligence is either about to appear any minute, or has already appeared. The problem with that presumption is that people are all too willing to lower standards in order to make the purported newcomer appear smart. Just as people are willing to bend over backwards and make themselves stupid in order to make an AI interface appear smart (as happens when someone can interact with the notorious Microsoft paper clip), so are they willing to become uncritical and dim in order to make Meta-aggregator sites appear to be coherent."

Read on as Jaron Lanier throws a lit Molotov cocktail down towards Palo Alto from up in the Berkeley Hills...


(JARON LANIER:) My Wikipedia entry identifies me (at least this week) as a film director. It is true I made one experimental short film about a decade and a half ago. The concept was awful: I tried to imagine what Maya Deren would have done with morphing. It was shown once at a film festival and was never distributed and I would be most comfortable if no one ever sees it again.

In the real world it is easy to not direct films. I have attempted to retire from directing films in the alternative universe that is the Wikipedia a number of times, but somebody always overrules me. Every time my Wikipedia entry is corrected, within a day I'm turned into a film director again. I can think of no more suitable punishment than making these determined Wikipedia goblins actually watch my one small old movie.

Twice in the past several weeks, reporters have asked me about my filmmaking career. The fantasies of the goblins have entered that portion of the world that is attempting to remain real. I know I've gotten off easy. The errors in my Wikipedia bio have been (at least prior to the publication of this article) charming and even flattering.

Reading a Wikipedia entry is like reading the bible closely. There are faint traces of the voices of various anonymous authors and editors, though it is impossible to be sure. In my particular case, it appears that the goblins are probably members or descendants of the rather sweet old Mondo 2000 culture linking psychedelic experimentation with computers. They seem to place great importance on relating my ideas to those of the psychedelic luminaries of old (and in ways that I happen to find sloppy and incorrect). Edits deviating from this set of odd ideas that are important to this one particular small subculture are immediately removed. This makes sense. Who else would volunteer to pay that much attention and do all that work?

The problem I am concerned with here is not the Wikipedia in itself. It's been criticized quite a lot, especially in the last year, but the Wikipedia is just one experiment that still has room to change and grow. At the very least it's a success at revealing what the online people with the most determination and time on their hands are thinking, and that's actually interesting information.

No, the problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous.

There was a well-publicized study in Nature last year comparing the accuracy of the Wikipedia to Encyclopedia Britannica. The results were a toss-up, and there is a lingering debate about the validity of the study. The items selected for the comparison were just the sort that Wikipedia would do well on: science topics that the collective at large doesn't care much about. "Kinetic isotope effect" or "Vesalius, Andreas" are examples of topics that make the Britannica hard to maintain, because it takes work to find the right authors to research and review a multitude of diverse topics. But they are perfect for the Wikipedia. There is little controversy around these items, plus the Net provides ready access to a reasonably small number of competent specialist graduate student types possessing the manic motivation of youth.

A core belief of the wiki world is that whatever problems exist in the wiki will be incrementally corrected as the process unfolds. This is analogous to the claims of Hyper-Libertarians who put infinite faith in a free market, or the Hyper-Lefties who are somehow able to sit through consensus decision-making processes. In all these cases, it seems to me that empirical evidence has yielded mixed results. Sometimes loosely structured collective activities yield continuous improvements and sometimes they don't. Often we don't live long enough to find out. Later in this essay I'll point out what constraints make a collective smart. But first, it's important to not lose sight of values just because the question of whether a collective can be smart is so fascinating. Accuracy in a text is not enough. A desirable text is more than a collection of accurate references. It is also an expression of personality.

For instance, most of the technical or scientific information that is in the Wikipedia was already on the Web before the Wikipedia was started. You could always use Google or other search services to find information about items that are now wikified. In some cases I have noticed specific texts get cloned from original sites at universities or labs onto wiki pages. And when that happens, each text loses part of its value. Since search engines are now more likely to point you to the wikified versions, the Web has lost some of its flavor in casual use.

When you see the context in which something was written and you know who the author was beyond just a name, you learn so much more than when you find the same text placed in the anonymous, faux-authoritative, anti-contextual brew of the Wikipedia. The question isn't just one of authentication and accountability, though those are important, but something more subtle. A voice should be sensed as a whole. You have to have a chance to sense personality in order for language to have its full meaning. Personal Web pages do that, as do journals and books. Even Britannica has an editorial voice, which some people have criticized as being vaguely too "Dead White Men."

If an ironic Web site devoted to destroying cinema claimed that I was a filmmaker, it would suddenly make sense. That would be an authentic piece of text. But placed out of context in the Wikipedia, it becomes drivel.

Myspace is another recent experiment that has become even more influential than the Wikipedia. Like the Wikipedia, it adds just a little to the powers already present on the Web in order to inspire a dramatic shift in use. Myspace is all about authorship, but it doesn't pretend to be all-wise. You can always tell at least a little about the character of the person who made a Myspace page. But it is very rare indeed that a Myspace page inspires even the slightest confidence that the author is a trustworthy authority. Hurray for Myspace on that count!

Myspace is a richer, multi-layered source of information than the Wikipedia, although the topics the two services cover barely overlap. If you want to research a TV show in terms of what people think of it, Myspace will reveal more to you than the analogous and enormous entries in the Wikipedia.

The Wikipedia is far from being the only online fetish site for foolish collectivism. There's a frantic race taking place online to become the most "Meta" site, to be the highest level aggregator, subsuming the identity of all other sites.

The race began innocently enough with the notion of creating directories of online destinations, such as the early incarnations of Yahoo. Then came AltaVista, where one could search using an inverted database of the content of the whole Web. Then came Google, which added page rank algorithms. Then came the blogs, which varied greatly in terms of quality and importance. This led to Meta-blogs such as Boing Boing, run by identified humans, which served to aggregate blogs. In all of these formulations, real people were still in charge. An individual or individuals were presenting a personality and taking responsibility.

These Web-based designs assumed that value would flow from people. It was still clear, in all such designs, that the Web was made of people, and that ultimately value always came from connecting with real humans.

Even Google by itself (as it stands today) isn't Meta enough to be a problem. One layer of page ranking is hardly a threat to authorship, but an accumulation of many layers can create a meaningless murk, and that is another matter.

In the last year or two the trend has been to remove the scent of people, so as to come as close as possible to simulating the appearance of content emerging out of the Web as if it were speaking to us as a supernatural oracle. This is where the use of the Internet crosses the line into delusion.

Kevin Kelly, the former editor of Whole Earth Review and the founding Executive Editor of Wired, is a friend and someone who has been thinking about what he and others call the "Hive Mind." He runs a Web site called Cool Tools that's a cross between a blog and the old Whole Earth Catalog. On Cool Tools, the contributors, including me, are not a hive because we are identified.

In March, Kelly reviewed a variety of "Consensus Web filters" such as "Digg" and "Reddit" that assemble material every day from all the myriad other aggregating sites. Such sites intend to be more Meta than the sites they aggregate. There is no person taking responsibility for what appears on them, only an algorithm. The hope seems to be that the most Meta site will become the mother of all bottlenecks and receive infinite funding.

That new magnitude of Meta-ness lasted only a month. In April, Kelly reviewed a site called "popurls" that aggregates consensus Web filtering sites...and there was a new "most Meta". We now are reading what a collectivity algorithm derives from what other collectivity algorithms derived from what collectives chose from what a population of mostly amateur writers wrote anonymously.

Is "popurls" any good? I am writing this on May 27, 2006. In the last few days an experimental approach to diabetes management has been announced that might prevent nerve damage. That's huge news for tens of millions of Americans. It is not mentioned on popurls. Popurls does clue us in to this news: "Student sets simultaneous world ice cream-eating record, worst ever ice cream headache." Mainstream news sources all lead today with a serious earthquake in Java. Popurls includes a few mentions of the event, but they are buried within the aggregation of aggregate news sites like Google News. The reason the quake appears on popurls at all can be discovered only if you dig through all the aggregating layers to find the original sources, which are those rare entries actually created by professional writers and editors who sign their names. But at the layer of popurls, the ice cream story and the Javanese earthquake are at best equals, without context or authorship.

Kevin Kelly says of the "popurls" site, "There's no better way to watch the hive mind." But the hive mind is for the most part stupid and boring. Why pay attention to it?

Readers of my previous rants will notice a parallel between my discomfort with so-called "Artificial Intelligence" and the race to erase personality and be most Meta. In each case, there's a presumption that something like a distinct kin to individual human intelligence is either about to appear any minute, or has already appeared. The problem with that presumption is that people are all too willing to lower standards in order to make the purported newcomer appear smart. Just as people are willing to bend over backwards and make themselves stupid in order to make an AI interface appear smart (as happens when someone can interact with the notorious Microsoft paper clip), so are they willing to become uncritical and dim in order to make Meta-aggregator sites appear to be coherent.

There is a pedagogical connection between the culture of Artificial Intelligence and the strange allure of anonymous collectivism online. Google's vast servers and the Wikipedia are both mentioned frequently as being the startup memory for Artificial Intelligences to come. Larry Page is quoted via a link presented to me by popurls this morning (who knows if it's accurate) as speculating that an AI might appear within Google within a few years. George Dyson has wondered if such an entity already exists on the Net, perhaps perched within Google. My point here is not to argue about the existence of metaphysical entities, but just to emphasize how premature and dangerous it is to lower the expectations we hold for individual human intellects.

The beauty of the Internet is that it connects people. The value is in the other people. If we start to believe the Internet itself is an entity that has something to say, we're devaluing those people and making ourselves into idiots.

Compounding the problem is that new business models for people who think and write have not appeared as quickly as we all hoped. Newspapers, for instance, are on the whole facing a grim decline as the Internet takes over the feeding of the curious eyes that hover over morning coffee and, even worse, classified ads. In the new environment, Google News is for the moment better funded and enjoys a more secure future than most of the rather small number of fine reporters around the world who ultimately create most of its content. The aggregator is richer than the aggregated.

The question of new business models for content creators on the Internet is a profound and difficult topic in itself, but it must at least be pointed out that writing professionally and well takes time and that most authors need to be paid to take that time. In this regard, blogging is not writing. For example, it's easy to be loved as a blogger. All you have to do is play to the crowd. Or you can flame the crowd to get attention. Nothing is wrong with either of those activities. What I think of as real writing, however, writing meant to last, is something else. It involves articulating a perspective that is not just reactive to yesterday's moves in a conversation.

The artificial elevation of all things Meta is not confined to online culture. It is having a profound influence on how decisions are made in America.

What we are witnessing today is the alarming rise of the fallacy of the infallible collective. Numerous elite organizations have been swept off their feet by the idea. They are inspired by the rise of the Wikipedia, by the wealth of Google, and by the rush of entrepreneurs to be the most Meta. Government agencies, top corporate planning departments, and major universities have all gotten the bug.

As a consultant, I used to be asked to test an idea or propose a new one to solve a problem. In the last couple of years I've often been asked to work quite differently. You might find me and the other consultants filling out survey forms or tweaking edits to a collective essay. I'm saying and doing much less than I used to, even though I'm still being paid the same amount. Maybe I shouldn't complain, but the actions of big institutions do matter, and it's time to speak out against the collectivity fad that is upon us.

It's not hard to see why the fallacy of collectivism has become so popular in big organizations: If the principle is correct, then individuals should not be required to take on risks or responsibilities. We live in times of tremendous uncertainties coupled with infinite liability phobia, and we must function within institutions that are loyal to no executive, much less to any lower level member. Every individual who is afraid to say the wrong thing within his or her organization is safer when hiding behind a wiki or some other Meta aggregation ritual.

I've participated in a number of elite, well-paid wikis and Meta-surveys lately and have had a chance to observe the results. I have even been part of a wiki about wikis. What I've seen is a loss of insight and subtlety, a disregard for the nuances of considered opinions, and an increased tendency to enshrine the official or normative beliefs of an organization. Why isn't everyone screaming about the recent epidemic of inappropriate uses of the collective? It seems to me the reason is that bad old ideas look confusingly fresh when they are packaged as technology.

The collective rises around us in multifarious ways. What afflicts big institutions also afflicts pop culture. For instance, it has become notoriously difficult to introduce a new pop star in the music business. Even the most successful entrants have hardly ever made it past the first album in the last decade or so. The exception is American Idol. As with the Wikipedia, there's nothing wrong with it. The problem is its centrality.

More people appear to vote in this pop competition than in presidential elections, and one reason why is the instant convenience of information technology. The collective can vote by phone or by texting, and some vote more than once. The collective is flattered and it responds. The winners are likable, almost by definition.

But John Lennon wouldn't have won. He wouldn't have made it to the finals. Or if he had, he would have ended up a different sort of person and artist. The same could be said about Jimi Hendrix, Elvis, Joni Mitchell, Duke Ellington, David Byrne, Grandmaster Flash, Bob Dylan (please!), and almost anyone else who has been vastly influential in creating pop music.

As below, so above. The New York Times, of all places, has recently published op-ed pieces supporting the pseudo-idea of intelligent design. This is astonishing. The Times has become the paper of averaging opinions. Something is lost when American Idol becomes a leader instead of a follower of pop music. But when intelligent design shares the stage with real science in the paper of record, everything is lost.

How could the Times have fallen so far? I don't know, but I would imagine the process was similar to what I've seen in the consulting world of late. It's safer to be the aggregator of the collective. You get to include all sorts of material without committing to anything. You can be superficially interesting without having to worry about the possibility of being wrong.

Except when intelligent thought really matters. In that case the average idea can be quite wrong, and only the best ideas have lasting value. Science is like that.

The collective isn't always stupid. In some special cases the collective can be brilliant. For instance, there's a demonstrative ritual often presented to incoming students at business schools. In one version of the ritual, a large jar of jellybeans is placed in the front of a classroom. Each student guesses how many beans there are. While the guesses vary widely, the average is usually accurate to an uncanny degree.

This is an example of the special kind of intelligence offered by a collective. It is that peculiar trait that has been celebrated as the "Wisdom of Crowds," though I think the word "wisdom" is misleading. It is part of what makes Adam Smith's Invisible Hand clever, and is connected to the reasons Google's page rank algorithms work. It was long ago adapted to futurism, where it was known as the Delphi technique. The phenomenon is real, and immensely useful.
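The jellybean ritual is easy to simulate. In the sketch below (the bean count, guesser count, and noise model are all invented for illustration), each guesser is individually unreliable, erring by up to 40% in either direction, yet the average of many such guesses lands close to the truth:

```python
import random

def crowd_estimate(true_count, n_guessers, noise=0.4):
    """Each guesser's answer is the true count scaled by a random
    factor drawn symmetrically around 1.0 (i.e., unbiased error)."""
    guesses = [true_count * random.uniform(1 - noise, 1 + noise)
               for _ in range(n_guessers)]
    return sum(guesses) / len(guesses)

random.seed(1)
true_count = 850
estimate = crowd_estimate(true_count, n_guessers=500)
# Individual guesses range from roughly 510 to 1190, yet the average
# lands near 850 because the independent errors cancel.
print(round(estimate))
```

Note that the trick works here only because the errors are independent and unbiased. If every guesser shared the same bias, say, anchoring low because the jar looks small, the averaging would preserve the error rather than cancel it, which is one way a crowd's "wisdom" fails.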

But it is not infinitely useful. The collective can be stupid, too. Witness tulip crazes and stock bubbles. Hysteria over fictitious satanic cult child abductions. Y2K mania.

The reason the collective can be valuable is precisely that its peaks of intelligence and stupidity are not the same as the ones usually displayed by individuals. Both kinds of intelligence are essential.

What makes a market work, for instance, is the marriage of collective and individual intelligence. A marketplace can't exist only on the basis of having prices determined by competition. It also needs entrepreneurs to come up with the products that are competing in the first place.

In other words, clever individuals, the heroes of the marketplace, ask the questions which are answered by collective behavior. They put the jellybeans in the jar.

There are certain types of answers that ought not be provided by an individual. When a government bureaucrat sets a price, for instance, the result is often inferior to the answer that would come from a reasonably informed collective that is reasonably free of manipulation or runaway internal resonances. But when a collective designs a product, you get design by committee, which is a derogatory expression for a reason.

Here I must take a moment to comment on Linux and similar efforts. The various formulations of "open" or "free" software are different from the Wikipedia and the race to be most Meta in important ways. Linux programmers are not anonymous and in fact personal glory is part of the motivational engine that keeps such enterprises in motion. But there are similarities, and the lack of a coherent voice or design sensibility in an esthetic sense is one negative quality of both open source software and the Wikipedia.

These movements are at their most efficient while building hidden information plumbing layers, such as Web servers. They are hopeless when it comes to producing fine user interfaces or user experiences. If the code that ran the Wikipedia user interface were as open as the contents of the entries, it would churn itself into impenetrable muck almost immediately. The collective is good at solving problems which demand results that can be evaluated by uncontroversial performance parameters, but bad when taste and judgment matter.

Collectives can be just as stupid as any individual, and in important cases, stupider. The interesting question is whether it's possible to map out where the one is smarter than the many.

There is a lot of history to this topic, and varied disciplines have lots to say. Here is a quick pass at where I think the boundary between effective collective thought and nonsense lies: The collective is more likely to be smart when it isn't defining its own questions, when the goodness of an answer can be evaluated by a simple result (such as a single numeric value), and when the information system which informs the collective is filtered by a quality control mechanism that relies on individuals to a high degree. Under those circumstances, a collective can be smarter than a person. Break any one of those conditions and the collective becomes unreliable or worse.

Meanwhile, an individual best achieves optimal stupidity on those rare occasions when one is both given substantial powers and insulated from the results of his or her actions.

If the above criteria have any merit, then there is an unfortunate convergence. The setup for the most stupid collective is also the setup for the most stupid individuals.

Every authentic example of collective intelligence that I am aware of also shows how that collective was guided or inspired by well-meaning individuals. These people focused the collective and in some cases also corrected for some of the common hive mind failure modes. The balancing of influence between people and collectives is the heart of the design of democracies, scientific communities, and many other long-standing projects. There's a lot of experience out there to work with. A few of these old ideas provide interesting new ways to approach the question of how to best use the hive mind.

The pre-Internet world provides some great examples of how personality-based quality control can improve collective intelligence. For instance, an independent press provides tasty news about politicians by reporters with strong voices and reputations, like the Watergate reporting of Woodward and Bernstein. Other writers provide product reviews, such as Walt Mossberg in The Wall Street Journal and David Pogue in The New York Times. Such journalists inform the collective's determination of election results and pricing. Without an independent press, composed of heroic voices, the collective becomes stupid and unreliable, as has been demonstrated in many historical instances. (Recent events in America have reflected the weakening of the press, in my opinion.)

Scientific communities likewise achieve quality through a cooperative process that includes checks and balances, and ultimately rests on a foundation of goodwill and "blind" elitism — blind in the sense that ideally anyone can gain entry, but only on the basis of a meritocracy. The tenure system and many other aspects of the academy are designed to support the idea that individual scholars matter, not just the process or the collective.

Another example: Entrepreneurs aren't the only "heroes" of a marketplace. The role of a central bank in an economy is not the same as that of a communist party official in a centrally planned economy. Even though setting an interest rate sounds like the answering of a question, it is really more like the asking of a question. The Fed asks the market to answer the question of how to best optimize for lowering inflation, for instance. While that might not be the question everyone would want to have asked, it is at least coherent.

Yes, there have been plenty of scandals in government, the academy and in the press. No mechanism is perfect, but still here we are, having benefited from all of these institutions. There certainly have been plenty of bad reporters, self-deluded academic scientists, incompetent bureaucrats, and so on. Can the hive mind help keep them in check? The answer provided by experiments in the pre-Internet world is "yes," but only provided some signal processing is placed in the loop.

Some of the regulating mechanisms for collectives that have been most successful in the pre-Internet world can be understood in part as modulating the time domain. For instance, what if a collective moves too readily and quickly, jittering instead of settling down to provide a single answer? This happens on the most active Wikipedia entries, for example, and has also been seen in some speculation frenzies in open markets.

One service performed by representative democracy is low-pass filtering. Imagine the jittery shifts that would take place if a wiki were put in charge of writing laws. It's a terrifying thing to consider. Super-energized people would be struggling to shift the wording of the tax-code on a frantic, never-ending basis. The Internet would be swamped.

Such chaos can be avoided in the same way it already is, albeit imperfectly, by the slower processes of elections and court proceedings. The calming effect of orderly democracy achieves more than just the smoothing out of peripatetic struggles for consensus. It also reduces the potential for the collective to suddenly jump into an over-excited state when too many rapid changes to answers coincide in such a way that they don't cancel each other out. (Technical readers will recognize familiar principles in signal processing.)
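For readers unfamiliar with the signal-processing analogy: a low-pass filter passes slow trends through while damping fast jitter. A minimal moving-average sketch (the "collective answer" series and the window size are invented for illustration):

```python
def low_pass(signal, window=5):
    """Moving-average filter: each output sample is the mean of the
    last `window` input samples, which damps rapid fluctuations."""
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A collective answer that trends upward but jitters on every step.
jittery = [10, 14, 9, 15, 11, 16, 12, 18, 13, 19]
smoothed = low_pass(jittery)

# After filtering, the step-to-step swings shrink while the slow
# upward trend survives.
max_jump_raw = max(abs(b - a) for a, b in zip(jittery, jittery[1:]))
max_jump_smooth = max(abs(b - a) for a, b in zip(smoothed, smoothed[1:]))
print(max_jump_raw, round(max_jump_smooth, 2))  # raw swings of 6, smoothed swings of at most 2
```

This is the sense in which elections and court proceedings act as filters: they sample the collective's state slowly enough that momentary spasms average out, at the cost of some responsiveness.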

The Wikipedia has recently slapped a crude low-pass filter on the jitteriest entries, such as "President George W. Bush." There's now a limit to how often a particular person can remove someone else's text fragments. I suspect that this will eventually have to evolve into an approximate mirror of democracy as it was before the Internet arrived.
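The throttle Lanier describes amounts to a per-editor rate limit over a sliding time window. The sketch below illustrates the idea only; it is not Wikipedia's actual code, and the three-revert threshold and 24-hour window are assumed values:

```python
from collections import defaultdict, deque

class RevertLimiter:
    """Sliding-window rate limit: allow at most `max_reverts` reverts
    per editor within any `window_seconds` span."""
    def __init__(self, max_reverts=3, window_seconds=24 * 3600):
        self.max_reverts = max_reverts
        self.window = window_seconds
        self.history = defaultdict(deque)  # editor -> revert timestamps

    def allow(self, editor, now):
        times = self.history[editor]
        # Discard reverts that have aged out of the window.
        while times and now - times[0] >= self.window:
            times.popleft()
        if len(times) >= self.max_reverts:
            return False  # too many recent reverts: block this one
        times.append(now)
        return True

limiter = RevertLimiter()
hour = 3600
results = [limiter.allow("goblin", t * hour) for t in (0, 1, 2, 3, 30)]
# The first three reverts pass, the fourth (still inside the 24-hour
# window) is blocked, and a revert 30 hours later is allowed again.
print(results)  # [True, True, True, False, True]
```

The effect is exactly the damping described above: a single determined editor can still move an entry, but only slowly, so the article's state tracks the longer-term consensus rather than the fastest-clicking goblin.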

The reverse problem can also appear. The hive mind can be on the right track, but moving too slowly. Sometimes collectives would yield brilliant results given enough time but there isn't enough time. A problem like global warming would automatically be addressed eventually if the market had enough time to respond to it, for instance. Insurance rates would climb, and so on. Alas, in this case there isn't enough time, because the market conversation is slowed down by the legacy effect of existing investments. Therefore some other process has to intervene, such as politics invoked by individuals.

Another example of the slow hive problem: There was a lot of technology developed slowly in the millennia before there was a clear idea of how to be empirical, how to have a peer reviewed technical literature and an education based on it, and before there was an efficient market to determine the value of inventions. What is crucial to notice about modernity is that structure and constraints were part of what sped up the process of technological development, not just pure openness and concessions to the collective.

Let's suppose that the Wikipedia will indeed become better in some ways, as is claimed by the faithful, over a period of time. We might still need something better sooner.

Some wikitopians explicitly hope to see education subsumed by wikis. It is at least possible that in the fairly near future enough communication and education will take place through anonymous Internet aggregation that we could become vulnerable to a sudden dangerous empowering of the hive mind. History has shown us again and again that a hive mind is a cruel idiot when it runs on autopilot. Nasty hive mind outbursts have been flavored Maoist, Fascist, and religious, and these are only a small sampling. I don't see why there couldn't be future social disasters that appear suddenly under the cover of technological utopianism. If wikis are to gain any more influence they ought to be improved by mechanisms like the ones that have worked tolerably well in the pre-Internet world.

The hive mind should be thought of as a tool. Empowering the collective does not empower individuals — just the reverse is true. There can be useful feedback loops set up between individuals and the hive mind, but the hive mind is too chaotic to be fed back into itself.

These are just a few ideas about how to train a potentially dangerous collective and not let it get out of the yard. When there's a problem, you want it to bark but not bite you.

The illusion that what we already have is close to good enough, or that it is alive and will fix itself, is the most dangerous illusion of all. By avoiding that nonsense, it ought to be possible to find a humanistic and practical way to maximize the value of the collective on the Web without turning ourselves into idiots. The best guiding principle is to always cherish individuals first.

(Jaron Lanier is a computer scientist and writer. He writes a monthly column for Discover Magazine.)

2. The Rise of Crowdsourcing
Remember outsourcing? Sending jobs to India and China is so 2003. The new pool of cheap labor: everyday people using their spare cycles to create content, solve problems, even do corporate R & D.
By Jeff Howe

1. The Professional

Claudia Menashe needed pictures of sick people. A project director at the National Health Museum in Washington, DC, Menashe was putting together a series of interactive kiosks devoted to potential pandemics like the avian flu. An exhibition designer had created a plan for the kiosk itself, but now Menashe was looking for images to accompany the text. Rather than hire a photographer to take shots of people suffering from the flu, Menashe decided to use preexisting images – stock photography, as it’s known in the publishing industry.

In October 2004, she ran across a stock photo collection by Mark Harmel, a freelance photographer living in Manhattan Beach, California. Harmel, whose wife is a doctor, specializes in images related to the health care industry. “Claudia wanted people sneezing, getting immunized, that sort of thing,” recalls Harmel, a slight, soft-spoken 52-year-old.

The National Health Museum has grand plans to occupy a spot on the National Mall in Washington by 2012, but for now it’s a fledgling institution with little money. “They were on a tight budget, so I charged them my nonprofit rate,” says Harmel, who works out of a cozy but crowded office in the back of the house he shares with his wife and stepson. He offered the museum a generous discount: $100 to $150 per photograph. “That’s about half of what a corporate client would pay,” he says. Menashe was interested in about four shots, so for Harmel, this could be a sale worth $600.

After several weeks of back-and-forth, Menashe emailed Harmel to say that, regretfully, the deal was off. “I discovered a stock photo site called iStockphoto,” she wrote, “which has images at very affordable prices.” That was an understatement. The same day, Menashe licensed 56 pictures through iStockphoto – for about $1 each.

iStockphoto, which grew out of a free image-sharing exchange used by a group of graphic designers, had undercut Harmel by more than 99 percent. How? By creating a marketplace for the work of amateur photographers – homemakers, students, engineers, dancers. There are now about 22,000 contributors to the site, which charges between $1 and $5 per basic image. (Very large, high-resolution pictures can cost up to $40.) Unlike professionals, iStockers don’t need to clear $130,000 a year from their photos just to break even; an extra $130 does just fine. “I negotiate my rate all the time,” Harmel says. “But how can I compete with a dollar?”

He can’t, of course. For Harmel, the harsh economics lesson was clear: The product Harmel offers is no longer scarce. Professional-grade cameras now cost less than $1,000. With a computer and a copy of Photoshop, even entry-level enthusiasts can create photographs rivaling those by professionals like Harmel. Add the Internet and powerful search technology, and sharing these images with the world becomes simple.

At first, the stock industry aligned itself against iStockphoto and other so-called microstock agencies like ShutterStock and Dreamstime. Then, in February, Getty Images, the largest agency by far with more than 30 percent of the global market, purchased iStockphoto for $50 million. “If someone’s going to cannibalize your business, better it be one of your other businesses,” says Getty CEO Jonathan Klein. iStockphoto’s revenue is growing by about 14 percent a month and the service is on track to license about 10 million images in 2006 – several times what Getty’s more expensive stock agencies will sell. iStockphoto’s clients now include bulk photo purchasers like IBM and United Way, as well as the small design firms once forced to go to big stock houses. “I was using Corbis and Getty, and the image fees came out of my design fees, which kept my margin low,” notes one UK designer in an email to the company. “iStockphoto’s micro-payment system has allowed me to increase my profit margin.”

Welcome to the age of the crowd. Just as distributed computing projects like UC Berkeley’s SETI@home have tapped the unused processing power of millions of individual computers, so distributed labor networks are using the Internet to exploit the spare processing power of millions of human brains. The open source software movement proved that a network of passionate, geeky volunteers could write code just as well as the highly paid developers at Microsoft or Sun Microsystems. Wikipedia showed that the model could be used to create a sprawling and surprisingly comprehensive online encyclopedia. And companies like eBay and MySpace have built profitable businesses that couldn’t exist without the contributions of users.

All these companies grew up in the Internet age and were designed to take advantage of the networked world. But now the productive potential of millions of plugged-in enthusiasts is attracting the attention of old-line businesses, too. For the last decade or so, companies have been looking overseas, to India or China, for cheap labor. But now it doesn’t matter where the laborers are – they might be down the block, they might be in Indonesia – as long as they are connected to the network.

Technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. Hobbyists, part-timers, and dabblers suddenly have a market for their efforts, as smart companies in industries as disparate as pharmaceuticals and television discover ways to tap the latent talent of the crowd. The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing.

It took a while for Harmel to recognize what was happening. “When the National Health Museum called, I’d never heard of iStockphoto,” he says. “But now, I see it as the first hole in the dike.” In 2000, Harmel made roughly $69,000 from a portfolio of 100 stock photographs, a tidy addition to what he earned from commissioned work. Last year his stock business generated less money – $59,000 – from more than 1,000 photos. That’s quite a bit more work for less money.
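The squeeze is starker per image. A quick back-of-the-envelope check of the figures in the paragraph above (taking "last year" as 2005; the variable names are just for illustration):

```python
# Harmel's stock income, per the figures quoted above.
revenue_2000, portfolio_2000 = 69_000, 100     # ~$690 per photo
revenue_2005, portfolio_2005 = 59_000, 1_000   # ~$59 per photo

per_photo_2000 = revenue_2000 / portfolio_2000
per_photo_2005 = revenue_2005 / portfolio_2005

decline = 1 - per_photo_2005 / per_photo_2000
print(f"Yield per photograph fell {decline:.0%}")  # about 91%
```

Ten times the photos for a smaller total: the yield per photograph dropped by roughly nine-tenths, which is what "quite a bit more work for less money" amounts to.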

Harmel isn’t the only photographer feeling the pinch. Last summer, there was a flurry of complaints on the Stock Artists Alliance online forum. “People were noticing a significant decline in returns on their stock portfolios,” Harmel says. “I can’t point to iStockphoto and say it’s the culprit, but it has definitely put downward pressure on prices.” As a result, he has decided to shift the focus of his business to assignment work. “I just don’t see much of a future for professional stock photography,” he says.

2. The Packager

“Is that even a real horse? It looks like it doesn’t have any legs,” says Michael Hirschorn, executive vice president of original programming and production at VH1 and a creator of the cable channel’s hit show Web Junk 20. The program features the 20 most popular videos making the rounds online in any given week. Hirschorn and the rest of the show’s staff are gathered in the artificial twilight of a VH1 editing room, reviewing their final show of the season.

The horse in question is named Patches, and it’s sitting in the passenger seat of a convertible at a McDonald’s drive-through window. The driver orders a cheeseburger for Patches. “Oh, he’s definitely real,” a producer replies. “We’ve got footage of him drinking beer.” The crew breaks into laughter, and Hirschorn asks why they’re not using that footage. “Standards didn’t like it,” a producer replies.

Standards – aka Standards and Practices, the people who decide whether a show violates the bounds of taste and decency – had no such problem with Elvis the Robocat or the footage of a bicycle racer being attacked by spectators and thrown violently from a bridge. Web Junk 20 brings viewers all that and more, several times a week. In the new, democratic age of entertainment by the masses, for the masses, stupid pet tricks figure prominently.

The show was the first regular program to repackage the Internet’s funniest home videos, but it won’t be the last. In February, Bravo launched a series called Outrageous and Contagious: Viral Videos, and USA Network has a similar effort in the works. The E! series The Soup has a segment called “Cybersmack,” and NBC has a pilot in development hosted by Carson Daly called Carson Daly’s Cyberhood, which will attempt to bring beer-drinking farm animals to the much larger audiences of network TV. Al Gore’s Current TV is placing the most faith in the model: More than 30 percent of its programming consists of material submitted by viewers.

Viral videos are a perfect fit for VH1, which knows how to repurpose content to make compelling TV on a budget. The channel reinvented itself in 1996 as a purveyor of tawdry nostalgia with Pop-Up Video and perfected the form six years later with I Love the 80s. “That show was a good model because it got great ratings, and we licensed the clips” – quick hits from such cultural touchstones as The A-Team and Fatal Attraction – “on the cheap,” Hirschorn says. (Full disclosure: I once worked for Hirschorn at .) But the C-list celebrity set soon caught on to VH1’s searing brand of ridicule. “It started to get more difficult to license the clips,” says Hirschorn, who has the manner of a laid-back English professor. “And we’re spending more money now to get them, as our ratings have improved.”

But Hirschorn knew of a source for even more affordable clips. He had been watching the growth of video on the Internet and figured there had to be a way to build a show around it. “I knew we offered something YouTube couldn’t: television,” he says. “Everyone wants to be on TV.” At about the same time, VH1’s parent company, Viacom, purchased iFilm – a popular repository of video clips – for $49 million. Just like that, Hirschorn had access to a massive supply of viral videos. And because iFilm already ranks videos by popularity, the service came with an infrastructure for separating the gold from the god-awful. The model’s most winning quality, as Hirschorn readily admits, is that it’s “incredibly cheap” – cheaper by far than anything else VH1 produces, which is to say, cheaper than almost anything else on television. A single 30-minute episode costs somewhere in the mid-five figures – about a tenth of what the channel pays to produce so noTORIous, a scripted comedy featuring Tori Spelling that premiered in April. And if the model works on a network show like Carson Daly’s Cyberhood, the savings will be much greater: The average half hour of network TV comedy now costs nearly $1 million to produce.

Web Junk 20 premiered in January, and ratings quickly exceeded even Hirschorn’s expectations. In its first season, the show is averaging a respectable half-million viewers in the desirable 18-to-49 age group, which Hirschorn says is up more than 40 percent from the same Friday-night time slot last year. The numbers helped persuade the network to bring Web Junk 20 back for another season.

Hirschorn thinks the crowd will be a crucial component of TV 2.0. “I can imagine a time when all of our shows will have a user-generated component,” he says. The channel recently launched Air to the Throne, an online air guitar contest, in which viewers serve as both talent pool and jury. The winners will be featured during the VH1 Rock Honors show premiering May 31. Even VH1’s anchor program, Best Week Ever, is including clips created by viewers.

But can the crowd produce enough content to support an array of shows over many years? It’s something Brian Graden, president of entertainment for MTV Music Networks Group, is concerned about. “We decided not to do 52 weeks a year of Web Junk, because we don’t want to burn the thing,” he says. Rather than relying exclusively on the supply of viral clips, Hirschorn has experimented with soliciting viewers to create videos expressly for Web Junk 20. Early results have been mixed. Viewers sent in nearly 12,000 videos for the Show Us Your Junk contest. “The response rate was fantastic,” says Hirschorn as he and other staffers sit in the editing room. But, he adds, “almost all of them were complete crap.”

Choosing the winners, in other words, was not so difficult. “We had about 20 finalists.” But Hirschorn remains confident that as user-generated TV matures, the users will become more proficient and the networks better at ferreting out the best of the best. The sheer force of consumer behavior is on his side. Late last year the Pew Internet & American Life Project released a study revealing that 57 percent of 12- to 17-year-olds online – 12 million individuals – are creating content of some sort and posting it to the Web. “Even if the signal-to-noise ratio never improves – which I think it will, by the way – that’s an awful lot of good material,” Hirschorn says. “I’m confident that in the end, individual pieces will fail but the model will succeed.”

3. The Tinkerer

The future of corporate R&D can be found above Kelly’s Auto Body on Shanty Bay Road in Barrie, Ontario. This is where Ed Melcarek, 57, keeps his “weekend crash pad,” a one-bedroom apartment littered with amplifiers, a guitar, electrical transducers, two desktop computers, a trumpet, half of a pontoon boat, and enough electric gizmos to stock a RadioShack. On most Saturdays, Melcarek comes in, pours himself a St. Remy, lights a Player cigarette, and attacks problems that have stumped some of the best corporate scientists at Fortune 100 companies.

Not everyone in the crowd wants to make silly videos. Some have the kind of scientific talent and expertise that corporate America is now finding a way to tap. In the process, forward-thinking companies are changing the face of R&D. Exit the white lab coats; enter Melcarek – one of over 90,000 “solvers” who make up the network of scientists on InnoCentive, the research world’s version of iStockphoto.

Pharmaceutical maker Eli Lilly funded InnoCentive’s launch in 2001 as a way to connect with brainpower outside the company – people who could help develop drugs and speed them to market. From the outset, InnoCentive threw open the doors to other firms eager to access the network’s trove of ad hoc experts. Companies like Boeing, DuPont, and Procter & Gamble now post their most ornery scientific problems on InnoCentive’s Web site; anyone on InnoCentive’s network can take a shot at cracking them.

The companies – or seekers, in InnoCentive parlance – pay solvers anywhere from $10,000 to $100,000 per solution. (They also pay InnoCentive a fee to participate.) Jill Panetta, InnoCentive’s chief scientific officer, says more than 30 percent of the problems posted on the site have been cracked, “which is 30 percent more than would have been solved using a traditional, in-house approach.”

The solvers are not who you might expect. Many are hobbyists working from their proverbial garage, like the University of Dallas undergrad who came up with a chemical to use in art restoration, or the Cary, North Carolina, patent lawyer who devised a novel way to mix large batches of chemical compounds.

This shouldn’t be surprising, notes Karim Lakhani, a lecturer in technology and innovation at MIT, who has studied InnoCentive. “The strength of a network like InnoCentive’s is exactly the diversity of intellectual background,” he says. Lakhani and his three coauthors surveyed 166 problems posted to InnoCentive from 26 different firms. “We actually found the odds of a solver’s success increased in fields in which they had no formal expertise,” Lakhani says. He has put his finger on a central tenet of network theory, what pioneering sociologist Mark Granovetter describes as “the strength of weak ties.” The most efficient networks are those that link to the broadest range of information, knowledge, and experience.

Which helps explain how Melcarek solved a problem that stumped the in-house researchers at Colgate-Palmolive. The giant packaged goods company needed a way to inject fluoride powder into a toothpaste tube without it dispersing into the surrounding air. Melcarek knew he had a solution by the time he’d finished reading the challenge: Impart an electric charge to the powder while grounding the tube. The positively charged fluoride particles would be attracted to the tube without any significant dispersion.

“It was really a very simple solution,” says Melcarek. Why hadn’t Colgate thought of it? “They’re probably test tube guys without any training in physics.” Melcarek earned $25,000 for his efforts. Paying Colgate-Palmolive’s R&D staff to produce the same solution could have cost several times that amount – if they even solved it at all. Melcarek says he was elated to win. “These are rocket-science challenges,” he says. “It really reinforced my confidence in what I can do.”

Melcarek, who favors thick sweaters and a floppy fishing hat, has charted an unconventional course through the sciences. He spent four years earning his master’s degree at the world-class particle accelerator in Vancouver, British Columbia, but decided against pursuing a PhD. “I had an offer from the private sector,” he says, then pauses. “I really needed the money.” A succession of “unsatisfying” engineering jobs followed, none of which fully exploited Melcarek’s scientific training or his need to tinker. “I’m not at my best in a 9-to-5 environment,” he says. Working sporadically, he has designed products like heating vents and industrial spray-painting robots. Not every quick and curious intellect can land a plum research post at a university or privately funded lab. Some must make HVAC systems.

For Melcarek, InnoCentive has been a ticket out of this scientific backwater. For the past three years, he has logged onto the network’s Web site a few times a week to look at new problems, called challenges. They are categorized as either chemistry or biology problems. Melcarek has formal training in neither discipline, but he quickly realized this didn’t hinder him when it came to chemistry. “I saw that a lot of the chemistry challenges could be solved using electromechanical processes I was familiar with from particle physics,” he says. “If I don’t know what to do after 30 minutes of brainstorming, I give up.” Besides the fluoride injection challenge, Melcarek also successfully came up with a method for purifying silicone-based solvents. That challenge paid $10,000. Other Melcarek solutions have been close runners-up, and he currently has two more up for consideration. “Not bad for a few weeks’ work,” he says with a chuckle.

It’s also not a bad deal for the companies that can turn to the crowd to help curb the rising cost of corporate research. “Everyone I talk to is facing a similar issue in regards to R&D,” says Larry Huston, Procter & Gamble’s vice president of innovation and knowledge. “Every year research budgets increase at a faster rate than sales. The current R&D model is broken.”

Huston has presided over a remarkable about-face at P&G, a company whose corporate culture was once so insular it became known as “the Kremlin on the Ohio.” By 2000, the company’s research costs were climbing, while sales remained flat. The stock price fell by more than half, and Huston led an effort to reinvent the way the company came up with new products. Rather than cut P&G’s sizable in-house R&D department (which currently employs 9,000 people), he decided to change the way they worked.
Seeing that the company’s most successful products were a result of collaboration between different divisions, Huston figured that even more cross-pollination would be a good thing. Meanwhile, P&G had set a goal of increasing the number of innovations acquired from outside its walls from 15 percent to 50 percent. Six years later, critical components of more than 35 percent of the company’s initiatives were generated outside P&G. As a result, Huston says, R&D productivity is up 60 percent, and the stock has returned to five-year highs. “It has changed how we define the organization,” he says. “We have 9,000 people on our R&D staff and up to 1.5 million researchers working through our external networks. The line between the two is hard to draw.”

P&G is one of InnoCentive’s earliest and best customers, but the company works with other crowdsourcing networks as well. YourEncore, for example, allows companies to find and hire retired scientists for one-off assignments. NineSigma is an online marketplace for innovations, matching seeker companies with solvers in a marketplace similar to InnoCentive. “People mistake this for outsourcing, which it most definitely is not,” Huston says. “Outsourcing is when I hire someone to perform a service and they do it and that’s the end of the relationship. That’s not much different from the way employment has worked throughout the ages. We’re talking about bringing people in from outside and involving them in this broadly creative, collaborative process. That’s a whole new paradigm.”

4. The Masses

In the late 1760s, a Hungarian nobleman named Wolfgang von Kempelen built the first machine capable of beating a human at chess. Called the Turk, von Kempelen’s automaton consisted of a small wooden cabinet, a chessboard, and the torso of a turbaned mannequin. The Turk toured Europe to great acclaim, even besting such luminaries as Benjamin Franklin and Napoleon. It was, of course, a hoax. The cabinet hid a flesh-and-blood chess master. The Turk was a fancy-looking piece of technology that was really powered by human intelligence.

Which explains why Amazon has named its new crowdsourcing engine after von Kempelen’s contraption. Amazon Mechanical Turk is a Web-based marketplace that helps companies find people to perform tasks computers are generally lousy at – identifying items in a photograph, skimming real estate documents to find identifying information, writing short product descriptions, transcribing podcasts. Amazon calls the tasks HITs (human intelligence tasks); they’re designed to require very little time, and consequently they offer very little compensation – most from a few cents to a few dollars.

InnoCentive and iStockphoto are labor markets for specialized talents, but just about anyone possessing basic literacy can find something to do on Mechanical Turk. It’s crowdsourcing for the masses. So far, the program has a mixed track record: After an initial burst of activity, the amount of work available from requesters – companies offering work on the site – has dropped significantly. “It’s gotten a little gimpy,” says Alan Hatcher, founder of Turker Nation, a community forum. “No one’s come up with the killer app yet.” And not all of the Turkers are human: Some would-be workers use software as a shortcut to complete the tasks, but the quality suffers. “I think half of the people signed up are trying to pull a scam,” says one requester who asked not to be identified. “There really needs to be a way to kick people off the island.”

Peter Cohen, the program’s director, acknowledges that Mechanical Turk, launched in beta in November, is a work in progress. (Amazon refuses to give a date for its official launch.) “This is a very new idea, and it’s going to take some time for people to wrap their heads around it,” Cohen says. “We’re at the tippy-top of the iceberg.”

A few companies, however, are already taking full advantage of the Turkers. Sunny Gupta runs a software company called iConclude just outside Seattle. The firm creates programs that streamline tech support tasks for large companies, like Alaska Airlines. The basic unit of iConclude’s product is the repair flow, a set of steps a tech support worker should take to resolve a problem.

Most problems that iConclude’s software addresses aren’t complicated or time-consuming, Gupta explains. But only people with experience in Java and Microsoft systems have the knowledge required to write these repair flows. Finding and hiring them is a big and expensive challenge. “We had been outsourcing the writing of our repair flows to a firm in Boise, Idaho,” he says from a small office overlooking a Tully’s Coffee. “We were paying $2,000 for each one.”

As soon as Gupta heard about Mechanical Turk, he suspected he could use it to find people with the sort of tech support background he needed. After a couple of test runs, iConclude was able to identify about 80 qualified Turkers, all of whom were eager to work on iConclude’s HITs. “Two of them had quit their jobs to raise their kids,” Gupta says. “They might have been making six figures in their previous lives, but now they were happy just to put their skills to some use.”

Gupta turns his laptop around to show me a flowchart on his screen. “This is what we were paying $2,000 for. But this one,” he says, “was authored by one of our Turkers.” I ask how much he paid. His answer: “Five dollars.”

(Contributing editor Jeff Howe wrote about MySpace in issue 13.11. To read more about crowdsourcing, please visit Jeff Howe’s blog.)

1. From Caught up in the 'Net
How the Internet has quietly changed our lives

"Within a few years, the Internet will turn business upside down. Be prepared -- or die."

So stated The Economist in 1999, a few months before Internet stocks crashed, and the web boom went spectacularly bust.

The Internet has always been associated with hype -- its prophets proclaimed it as a bold techno wonderland that would change the world.

During the early dotcom boom days, investors fired up by a hysterical media invested millions of dollars at breakneck speed in companies so wobbly and hastily put together that they could never last more than a few months. In the end, most didn't.

But now that we've all had a few years to live with the Internet, the predictions of The Economist and many others seem much more reasonable -- possibly even understated. The Web does seem to be genuinely changing the world, and those companies and organizations not ready for it are dying as predicted.

It may lack the flash, fuss and revolutionary haste we were promised in those early, heady days, but quietly and effortlessly the Internet has become an integral part of our everyday lives: at home and at work.

The shakeout of weak tech companies in 2000/2001 is now seen as a symptom common to most technological revolutions, marking the point from which the strong companies would grow to increasingly dominate both the marketplace -- and our lives.

"Socially, locally and globally - nothing else has had such a huge influence over the population," says Caleb Chung, co-founder and chief inventor at tech company UGOBE.

"It's multiplied my productivity by a factor of about five," admits philosopher Daniel Dennett. Think how many times you check the web for news, or look for magazine articles. (You're reading this online, after all.) Or how often you read your email every day. When did you start doing that?

Think about how much online shopping you do now. Think about how strong online brands, such as Amazon, Google or AOL, suddenly are. Think about how normal it seems to be able to access the web on a mobile device, or hook up to Wi-Fi in a café or on a train. All of these developments have happened in the last few years.

"[The Internet has] had such a huge impact that I am currently unable to remember how it was to work without it," says anthropologist Daniela Cerqui.

Almost all businesses now use the web to communicate with their customers, order stock and manage accounts. Financial services -- especially the insurance industry -- have blossomed online. Retailers no longer see the web as a threat, and businesses are no longer divided into on- and offline enterprises, with most high street stores competing in both realms. Telcos are launching online telephone services. Even the music, TV and film industries, which until recently saw the web as threat number one, are beginning to gingerly embrace downloads.

National newspapers are finding a global audience through their Web sites, and Podcasting is blurring the line between print and broadcast media. Blogs are challenging professional journalism and shepherding the news agenda, and readers/viewers are increasingly becoming part of the news by providing their own video clips, pictures and observations to the public through the wealth of interactive features offered by many broadcasters online.

As the marketplace mutates, so consumers have benefited: primarily from an explosion of choice, but also because online bargain hunting is now so easy -- and product comparison sites so ubiquitous -- that prices have been forced down, and business has to offer a better service to stay competitive. Plus being able to shop from your desktop has saved us all a lot of time and made essential purchases far more convenient.

But as well as changing the way we access our banks, news, music, film -- products and services of all kinds -- the Web is genuinely revolutionizing the way we communicate with each other.

"The Internet has had an enormous effect on my work and on my life," says John Searle. "I am able to communicate nearly instantly with people all over the world and the access to information [it provides] enables me to find out what I need to know much more rapidly and efficiently than I ever could by going to a library."

"I can access as much research on any topic as my brain can handle in a given time period," says "robot psychologist" Joanne Pransky. "I am able to work anywhere in the world as long as I have Internet access. I can also play bridge, my favorite pastime, anytime, anywhere, on-line with my favorite partner, my elderly mother who lives 3,000 miles from me."

But perhaps one of the greatest surprises has not been how much the Web has changed our ability to communicate, but how our desire to communicate has actively driven change on the web. Social sites -- one of which now counts more than 50 million members -- have risen from nowhere to form a key part of many people's social networks.

These are genuinely original concepts, without precedent in the offline world, and mark a new direction where innovation is outstripping our existing understanding of what we want -- or where the Web can take us socially and culturally.

"Without question, the ability to communicate, share data, develop projects jointly and network has magnified the human mind and changed everything," says Peter Diamandis. "The Internet is the nervous system of a new, developing 'meta-intelligence'."

But such rapid changes to something as fundamental to our humanity as the way we communicate do not come without concerns.

"As an anthropologist, I stand back to try to understand what all this means," says Cerqui. "What I see is that human beings are flexible and able to get used to almost everything. And once we are used to a new kind of technology, we become unable to live without it. But this does not necessarily mean that it is an improvement."

"From a direct social interaction perspective, the Internet has had a negative impact on me," says Pransky. "It may be several days before I have the need to leave my house and have contact with the 'outside' world. It has been a while since I have worked in a group situation. After dinner, my family and I often retire to our own individual, confined use of the Internet."

"Technology is not the solution to all problems," says Cerqui. "Social problems ought to be faced with social solutions. For instance, you do not solve the problem of people feeling lonely by connecting them to the Internet; instead you contribute to establishing a new kind of society where face-to-face interaction is less important than mediated communication."

Where historically the extent of our social contact was limited to our immediate community, and our information sources restricted by practical, physical limits of conversation and the printed page -- suddenly now we are all at the hub of a personal network wired up to the entire world. It can, at times, seem all too much.

"The pace, the vast wealth of information coming from all directions -- how the heck can you keep up when it comes at you like this? Yes, it seems too much to assimilate," says Mark Reed, Yale Professor of Engineering and Applied Science.

But Reed argues that however confusing and overwhelming the web seems now, overcoming this helplessness will be the next stage in our social evolution.

"Watch a child play a computer game or surf the Internet -- truly child's play. How parochial of us to assume that just because our imprinted minds can't keep up, that fresh new minds won't be able to either. We are amazingly adaptable, which is why we survived -- our progeny will not only adapt, they will excel.

"However, we need to give them the right tools. We need to teach them to think critically and objectively -- teach them to grasp scientific methodology and embrace technological literacy.

"Unfortunately our society does a very poor job of this. The future of the human race is too important to leave to politicians and corporations. A scientifically educated global population will help us focus on the truly important problems, such as energy -- arguably the most important crisis we as a species will face -- instead of wasting efforts on petty squabbles for short term economic and political gain."

However we teach our children and however our society evolves, many scientists already have a clear vision of where technology is leading us. The 'Singularity' -- the fusion of human, machine and the communication capacity of the web -- may enable a spectacular and fundamental shift in our understanding of human consciousness.

"I am still a big believer in Artificial Intelligence: new software 'shells' that surround us as individuals and become our interface with the outside world," says Diamandis. "These interfaces will allow us to communicate with individuals and machines more efficiently.

"The Internet will merge into these software shells, serving as a global nervous system interconnecting people to people in the way single cell life-forms grouped into multi-cellular organisms and eventually into an organism as complex as the human body."

3. A New Open-Source Politics
Just as Linux lets users design their own operating systems, so 'netroots' politicos may redesign our nominating system.
By Jonathan Alter

June 5, 2006 issue - Bob Schieffer of CBS News made a good point on "The Charlie Rose Show" last week. He said that successful presidents have all skillfully exploited the dominant medium of their times. The Founders were eloquent writers in the age of pamphleteering. Franklin D. Roosevelt restored hope in 1933 by mastering radio. And John F. Kennedy was the first president elected because of his understanding of television.

Will 2008 bring the first Internet president? Last time, Howard Dean and later John Kerry showed that the whole idea of "early money" is now obsolete in presidential politics. The Internet lets candidates who catch fire raise millions in small donations practically overnight. That's why all the talk of Hillary Clinton's "war chest" making her the front runner for 2008 is the most hackneyed punditry around. Money from wealthy donors remains the essential ingredient in most state and local campaigns, but "free media" shapes the outcome of presidential races, and the Internet is the freest medium of all.

No one knows exactly where technology is taking politics, but we're beginning to see some clues. For starters, the longtime stranglehold of media consultants may be over. In 2004, Errol Morris, the director of "The Thin Blue Line" and "The Fog of War," on his own initiative made several brilliant anti-Bush ads (they featured lifelong Republicans explaining why they were voting for Kerry). Not only did Kerry not air the ads, he told me recently he never even knew they existed. In 2008, any presidential candidate with half a brain will let a thousand ad ideas bloom (or stream) online and televise only those that are popular downloads. Deferring to "the wisdom of crowds" will be cheaper and more effective.

Open-source politics has its hazards, starting with the fact that most people over 35 will need some help with the concept. But just as Linux lets tech-savvy users avoid Microsoft and design their own operating systems, so "netroots" political organizers may succeed in redesigning our current nominating system. But there probably won't be much that's organized about it. By definition, the Internet strips big shots of their control of the process, which is a good thing. Politics is at its most invigorating when it's cacophonous and chaotic.

To begin busting up the dumb system we have for selecting presidents, a bipartisan group will open shop online this week. This Internet-based third party, Unity08, is spearheaded by three veterans of the antique 1976 campaign: Democrats Hamilton Jordan and Gerald Rafshoon helped get Jimmy Carter elected; Republican Doug Bailey did media for Gerald Ford before launching the political tip sheet Hotline. They are joined by Angus King, the independent former governor of Maine, and a collection of idealistic young people who are also tired of a nominating process that pulls the major-party candidates to the extremes. Their hope: to get even a fraction of the 50 million who voted for the next American Idol to nominate a third-party candidate for president online, and to use this new army to get him or her on the ballot in all 50 states. The idea is to go viral -- or die. "The worst thing that could happen would be for a bunch of old white guys like us to run this," Jordan says.

The Unity08 plan is for an online third-party convention in mid-2008, following the early primaries. Any registered voter could be a delegate; their identities would be confirmed by cross-referencing with voter registration rolls (which would also prevent people from casting more than one ballot). That would likely include a much larger number than the few thousand primary voters who all but nominate the major party candidates in Iowa and New Hampshire. This virtual process will vote on a centrist platform and nominate a bipartisan ticket. The idea is that even if the third-party nominee didn't win, he would wield serious power in the '08 election, which will likely be close.
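The safeguard the Unity08 plan describes -- confirming each delegate against voter registration rolls and blocking anyone from casting a second ballot -- can be sketched in a few lines. This is a hypothetical illustration; the voter IDs and data structures are invented, not anything Unity08 has published.

```python
# Sketch of the duplicate-ballot check described above: each online delegate
# is matched against a voter-registration roll, and any voter ID may cast at
# most one ballot. The roll contents and ID format here are hypothetical.

registered_voters = {"VA-1001", "VA-1002", "NH-2001"}  # from state rolls
ballots_cast = set()

def accept_ballot(voter_id: str) -> bool:
    """Accept a ballot only from a registered voter who hasn't voted yet."""
    if voter_id not in registered_voters:
        return False          # not found on any voter-registration roll
    if voter_id in ballots_cast:
        return False          # already voted: one ballot per delegate
    ballots_cast.add(voter_id)
    return True

print(accept_ballot("VA-1001"))  # True: registered, first ballot
print(accept_ballot("VA-1001"))  # False: duplicate ballot rejected
print(accept_ballot("TX-9999"))  # False: not on the rolls
```

The cross-referencing itself is the hard part in practice; the dedup logic, as the sketch shows, is trivial once identities are confirmed.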

There are plenty of ways for this process to prove meaningless, starting with the major parties deciding to nominate independent-minded candidates like John McCain (OK, the old McCain) or Mark Warner. Third-party efforts have usually been candidate-driven, and the centrist names tossed around by way of example (Chuck Hagel, Sam Nunn, Tom Kean) don't have much marquee value in the blogosphere. And the organizers would have to design safeguards to keep the whole thing from being hijacked.

But funny things happen in election years. With an issue as eye-glazing as the deficit, a wacky, jug-eared Texan named Ross Perot received 19 percent of the vote in 1992 and 7 percent in 1996. He did it with "Larry King Live" and an 800 number. In a country where more than 40 percent of voters now self-identify as independents, it's no longer a question of whether the Internet will revolutionize American politics, but when.

(For more, go to the Unity08 website.)

4. Beyond the Open-Source Hype
By Caroline Benner

Across the globe, politicians are embracing open-source software with grand pronouncements and great expectations. Although they are correct to identify potential benefits, software is far more complicated than their talking points, and it may disappoint those with outsized hopes.

Governments around the world are enchanted by open-source software. Unlike proprietary software, for which the code is kept secret, the open-source variety can be copied, modified, and shared. In 2003, Brazil, for instance, announced plans to move 80 percent of its state computers to the open-source operating system Linux. In 2003, Taiwan launched a “National Open Source Plan” to build a software industry that could replace the proprietary software in government and education. In France, the ministries of Defence, Culture, and Economy all use open-source operating systems. Japan, China, and South Korea are working together on an open-source alternative to Microsoft’s Windows operating system.

Those who believe open source is superior to proprietary software often tout its economic and strategic benefits. They argue, for example, that the total cost of ownership of open-source software is lower because users avoid the expensive licensing fees that companies like Microsoft charge. In 2002, Finland estimated that it could save 26 million euros a year by having state agencies switch to Linux. Open-source advocates also believe the software has technical advantages over its proprietary counterpart. It is more secure, they say, because the open-source development process produces better software.
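The total-cost-of-ownership argument is ultimately arithmetic: licence fees are only one line item alongside support, training and migration. A toy comparison makes the structure clear -- every figure below is hypothetical, chosen for illustration, and has nothing to do with the Finnish estimate.

```python
# Toy total-cost-of-ownership comparison. Licence fees favour open source,
# but support, training and migration costs can cut either way; all figures
# below are hypothetical and only illustrate the structure of the claim.

def tco(licences: float, support: float, training: float, migration: float) -> float:
    """Sum the main cost components over the period being compared."""
    return licences + support + training + migration

proprietary = tco(licences=500_000, support=150_000, training=50_000, migration=0)
open_source = tco(licences=0, support=250_000, training=120_000, migration=180_000)

print(proprietary, open_source)   # licence savings are partly offset elsewhere
print(open_source < proprietary)  # the answer depends entirely on the inputs
```

The point is not which side wins, but that "lower cost" is a claim about all four terms, not just the first one.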

Governments also tend not to like software they can't audit for trapdoors that would allow an outsider access. Many countries argue, too, that open source is better than proprietary software at adapting to local needs, because you can change the behavior of the program by changing its code. And for poor nations with tiny budgets to spend on foreign-produced information technology and little infrastructure to create their own, open source looks like an attractive way to gain access to the information age.

Trouble is, the benefits of open source are not always so clear-cut. Software is too complicated a creation to be captured in rhetoric, and assertions about some of the technical benefits of open source fail to tell the whole story.

Consider the issue of security. In a 2002 letter to Microsoft, Peruvian Congressman Edgar David Villanueva Núñez noted that, “Relative to the security of the software itself, it is well known that all software (whether proprietary or free) contains ‘errors’ or ‘bugs’ (in programmers’ slang). But it is also well-known that the bugs in free software are fewer.” Yet, ask computer security experts and they’ll tell you that’s not necessarily true. Software, with its millions of lines of code, is so complicated that experts don’t know for sure that open source has fewer bugs, nor can they say with certainty that having fewer bugs makes open source more secure. “There are really two reasons that it is very difficult to know whether software is secure,” says Stanford University computer scientist Alex Aiken. “The first reason is that even the simplest software program consists of hundreds of thousands to millions of parts, and potentially all of these have to be correct, or the system may have security vulnerabilities. The second reason is that we have no technology for systematically checking that the parts are correct and fit together in a way that ensures security.”

The Chinese have a preference for open source because they distrust software that cannot be audited, a concern that became especially acute after the discovery of the phrase “_NSAKEY” (thought to refer to the National Security Agency) in the code of Microsoft’s Windows software in 1999. But auditing any source code in order to ensure there are no security vulnerabilities is nigh on impossible. Figuring that governments nevertheless prefer seeing source code to not seeing it, Microsoft has sought to allay worries over trapdoors by allowing governments to peruse its code.

Politicians, meanwhile, enjoy the notion that open source can be adapted by their people to better address local needs. Open source “has the potential of empowering people in ways proprietary software does not allow. It offers users the choice to … customize the software,” South Africa concluded in its 2003 proposed strategy on the use of open source in its government. It’s true that access to source code offers the most flexibility in making a piece of software behave differently. For instance, when a piece of software is not offered in a particular language, programmers can alter open-source code to translate it. However, it is misleading to say that open source empowers people in ways proprietary software does not. Both open source and proprietary software allow you to change the behavior of a software program in significant ways without touching the program’s source code. The truth is that software authors, whether they work for a large software firm or no one at all, want users to adapt their product to specific locations and needs. Microsoft makes a living out of making its software customizable while still closely guarding its source code.
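The claim that behavior can change "without touching the program's source code" deserves a concrete sketch. In the hypothetical example below, a program picks its interface language from a configuration value and an external message catalogue -- the pattern gettext-style localization uses -- so a deployer can translate it by editing data, not code. All names and strings are invented for illustration.

```python
# Sketch of behavior change without source modification: the program reads a
# locale setting and a message catalogue at run time, so "translating" it
# means editing configuration and data files, never recompiling the program.

CATALOGUES = {            # in practice these would live in external files
    "en": {"greeting": "Welcome"},
    "fr": {"greeting": "Bienvenue"},
}

def greet(config: dict) -> str:
    """Return the greeting for the locale named in the configuration."""
    locale = config.get("locale", "en")
    catalogue = CATALOGUES.get(locale, CATALOGUES["en"])
    return catalogue["greeting"]

print(greet({"locale": "fr"}))  # Bienvenue -- changed via config, not code
print(greet({}))                # Welcome -- the default locale
```

Open source goes further -- with the code in hand you can change anything -- but as the sketch shows, a well-designed proprietary program can expose the same kind of flexibility through configuration.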

Furthermore, software is so complex that serious source code manipulation and maintenance is a high-cost endeavor, not a job one can plunge right into. It is a task for a large group of highly skilled people with a lot of time—people who, more and more, are being funded by deep pockets, including, yes, U.S. technology corporations such as IBM and Hewlett Packard. Software becomes more interesting—indeed, rhetoric-worthy—when it promises a better future. Open source may well deliver that promise, but computer science is too young a discipline, and there is too much we do not yet know about software to be so sure. Governments may be wise to choose open source. They just shouldn’t count on it to do much more than what software does best: process the data of the information age.

(Caroline Benner is a fellow at the University of Washington’s Institute for International Policy. From 2001 to 2003, Ms. Benner was a consultant with the geopolitical policy and strategy group at Microsoft.)

5. Open source ubuntu -- by Becky Hogge
Despite its world-saving image, open-source software has not made much real revolution. But Becky Hogge finds hope in new software "for human beings", designed to bridge the digital divide.

"You know when ubuntu is there, and it is obvious when it is absent. It has to do with what it means to be truly human, to know that you are bound up with others in the bundle of life." – Archbishop Desmond Tutu, God Has A Dream

I have lost count of the many seminars, conferences and talks I've attended recently where that magic phrase "the-open-source-operating-system-Linux" has resounded. Whether uttered by a New Labour policy wonk or a Polish art historian it has the same effect: the crowd of academics, bloggers and civil servants goes gooey, basking smugly in the image of thousands of bearded geeks quietly subverting the capitalist beast from the comfort of their bedrooms, chewing caffeine gum and trotting out code that rivals Microsoft's.

But it's a bit more complicated than that. Linux comes in all kinds of different flavours, called distributions or "distros", and with each distro the code-base, licensing terms, support model, philosophy and community or company organisation varies. Understand these differences, and the utopian prism through which the average non-geek views open source software shifts.

Most non-programmers only talk about Linux, they don't run it. Scanning any of these pseudo-tech conferences, I'll find most delegates liveblogging on Apple Mac laptops, the nice-guy alternative to Microsoft. But this non-programmer did run Linux, thanks in large part to my wonderful, if slightly militant, live-in system administrator. Not only did I run Linux, I ran the Debian distribution of Linux -- a distro so pure in the eyes of most geeks that I attracted admiration from all who knew.

Debian is a community-based distro and, of all Linux distributions, conforms most closely to the idealist stereotype of Linux. Debian holds elections for project leaders and its code is generally considered to be the most reliable, secure code available. But Debian made me cry. Away from the admiring glances of my fellow tech-commentators, if my live-in system admin had gone out for the evening and I wanted to install a printer, listen to online radio, or upload holiday photos, you would more often than not find me a good forty-five minutes later, staring blankly at the screen with watery eyes, or languishing in a puddle of my own ineptitude playing solitaire (which is much, much better on Linux).

Technology is no good if people can't use it. And by people I don't just mean me (please, reader, dry your eyes for my techno-foolishness). I mean those in the developing world, priced out of running the latest Microsoft or Apple software. Linux offers a real opportunity for developing nations – not only because it is free to own but also because it can run on hardware that the developed world would otherwise send to landfills. But if you need a degree in computer science to use it, then barriers to access are just as real.

Enter Ubuntu Linux. Founded in 2004 by South African Mark Shuttleworth, its goal is "Linux for human beings" – a usable Linux for desktop computing by non-geeks. Ubuntu is a traditional African concept describing the humanising quality of people's relationships with one another – "I am because we are" – which fits very nicely with concepts of sharing and open source. The Ubuntu distribution is based largely on Debian code, but with a strong focus on usability, and with predictable support and release patterns. And it works – Ubuntu has been the most popular distro of Linux since 2005, and since I made the switch last year those tearful evenings in front of the computer screen have become a distant memory.

Shuttleworth, the self-styled "first African in space" and an early dotcom boomer, ploughed profits from the 1999 sale of his internet security firm Thawte to VeriSign into the Shuttleworth Foundation, a non-profit organisation that supports education and social innovation in Africa. The Shuttleworth Foundation has funded some excellent projects -- most notably the freedom toaster, a free vending-machine of open source software and other digital material. The freedom toaster solves another perennial problem associated with open source software in the developing world: most open source applications need to be downloaded from the internet. This is bad news for anybody without access to a fast broadband connection (which includes most of Africa). But all the freedom toaster requires is an electrical socket for it happily to churn out custom-made CDs of Linux distros, open source applications and Wikipedia collections.

The newest mission for the Shuttleworth Foundation is to design a ten-year curriculum in computer programming for South African schools, and it has attracted some big guns in coding. Rumour has it that Guido van Rossum, project leader -- or, as the geeks would have it, "benevolent dictator for life" -- of the object-orientated programming language Python, hopes to contribute an educational programming environment to the mission, working with Alan Kay, a founding thinker behind object-orientated programming (which proponents claim is easier for coding newbies to pick up) and co-developer of the hundred-dollar laptop.

But Ubuntu itself isn't part of the Shuttleworth Foundation -- it's supported by Canonical Ltd, a for-profit company owned by Shuttleworth. Unlike Debian, which is run by a fairly democratically organised community, Ubuntu is built largely by programmers hired and paid by Shuttleworth. And on 1 June, Ubuntu will release its first "enterprise" distro, Ubuntu Dapper Drake. The enterprise edition will come with longer support terms, which will make it attractive to corporations. Indeed Sun Microsystems, who made a reputation out of belittling Linux in favour of their Unix-based operating system Solaris, were making very positive noises about working with Ubuntu at a recent conference in San Francisco.

So maybe Ubuntu wasn't such an altruistic endeavour after all. But is the fact Shuttleworth might make a profit bad news for the thousands of people downloading the desktop version every day? It's hard to say. Ubuntu could quite easily stay true to its first goal of Linux for human beings, providing usable free software for non-geeks in the developing and developed world while also making a tidy profit from support contracts with commercial companies on the side. On the face of it, this looks like the perfect social enterprise. To achieve it, however, Shuttleworth will have to keep the Debian community on side, since – beyond the usability question – they still provide most of the code behind the Ubuntu distro.

Without Shuttleworth's entrepreneurial flair, would such a popular, usable version of Linux have been possible? We'll never know, but it is fair to say that the Debian community weren't known for their interest in the capabilities of those less technically literate than themselves. Whatever happens to Ubuntu, it is clear that there is a lot more to that "open-source-operating-system-Linux" than first meets the eye.