Deep Thoughts: Is the Internet a stupid hive mind, or the potential savior of humankind?
DIGITAL MAOISM:
The Hazards of the New Online Collectivism
By Jaron Lanier
An Edge Original Essay
Introduction
In "Digital Maosim", an original essay written for Edge, computer scientist and digital visionary Jaron Lanier finds fault with what he terms the new online collectivism. He cites as an example the Wikipedia, noting that "reading a Wikipedia entry is like reading the bible closely. There are faint traces of the voices of various anonymous authors and editors, though it is impossible to be sure".
His problem is not with the unfolding experiment of the Wikipedia itself, but "the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous".
And he notes that "the Wikipedia is far from being the only online fetish site for foolish collectivism. There's a frantic race taking place online to become the most "Meta" site, to be the highest level aggregator, subsuming the identity of all other sites".
Where is this leading? Lanier calls attention to the "so-called 'Artificial Intelligence' and the race to erase personality and be most Meta. In each case, there's a presumption that something like a distinct kin to individual human intelligence is either about to appear any minute, or has already appeared. The problem with that presumption is that people are all too willing to lower standards in order to make the purported newcomer appear smart. Just as people are willing to bend over backwards and make themselves stupid in order to make an AI interface appear smart (as happens when someone can interact with the notorious Microsoft paper clip,) so are they willing to become uncritical and dim in order to make Meta-aggregator sites appear to be coherent."
Read on as Jaron Lanier throws a lit Molotov cocktail down towards Palo Alto from up in the Berkeley Hills...
—JB
DIGITAL MAOISM
(JARON LANIER:) My Wikipedia entry identifies me (at least this week) as a film director. It is true I made one experimental short film about a decade and a half ago. The concept was awful: I tried to imagine what Maya Deren would have done with morphing. It was shown once at a film festival and was never distributed and I would be most comfortable if no one ever sees it again.
In the real world it is easy to not direct films. I have attempted to retire from directing films in the alternative universe that is the Wikipedia a number of times, but somebody always overrules me. Every time my Wikipedia entry is corrected, within a day I'm turned into a film director again. I can think of no more suitable punishment than making these determined Wikipedia goblins actually watch my one small old movie.
Twice in the past several weeks, reporters have asked me about my filmmaking career. The fantasies of the goblins have entered that portion of the world that is attempting to remain real. I know I've gotten off easy. The errors in my Wikipedia bio have been (at least prior to the publication of this article) charming and even flattering.
Reading a Wikipedia entry is like reading the bible closely. There are faint traces of the voices of various anonymous authors and editors, though it is impossible to be sure. In my particular case, it appears that the goblins are probably members or descendants of the rather sweet old Mondo 2000 culture linking psychedelic experimentation with computers. They seem to place great importance on relating my ideas to those of the psychedelic luminaries of old (and in ways that I happen to find sloppy and incorrect.) Edits deviating from this set of odd ideas that are important to this one particular small subculture are immediately removed. This makes sense. Who else would volunteer to pay that much attention and do all that work?
The problem I am concerned with here is not the Wikipedia in itself. It's been criticized quite a lot, especially in the last year, but the Wikipedia is just one experiment that still has room to change and grow. At the very least it's a success at revealing what the online people with the most determination and time on their hands are thinking, and that's actually interesting information.
No, the problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous.
There was a well-publicized study in Nature last year comparing the accuracy of the Wikipedia to Encyclopedia Britannica. The results were a toss-up, though there is a lingering debate about the validity of the study. The items selected for the comparison were just the sort that Wikipedia would do well on: Science topics that the collective at large doesn't care much about. "Kinetic isotope effect" or "Vesalius, Andreas" are examples of topics that make the Britannica hard to maintain, because it takes work to find the right authors to research and review a multitude of diverse topics. But they are perfect for the Wikipedia. There is little controversy around these items, plus the Net provides ready access to a reasonably small number of competent specialist graduate student types possessing the manic motivation of youth.
A core belief of the wiki world is that whatever problems exist in the wiki will be incrementally corrected as the process unfolds. This is analogous to the claims of Hyper-Libertarians who put infinite faith in a free market, or the Hyper-Lefties who are somehow able to sit through consensus decision-making processes. In all these cases, it seems to me that empirical evidence has yielded mixed results. Sometimes loosely structured collective activities yield continuous improvements and sometimes they don't. Often we don't live long enough to find out. Later in this essay I'll point out what constraints make a collective smart. But first, it's important to not lose sight of values just because the question of whether a collective can be smart is so fascinating. Accuracy in a text is not enough. A desirable text is more than a collection of accurate references. It is also an expression of personality.
For instance, most of the technical or scientific information that is in the Wikipedia was already on the Web before the Wikipedia was started. You could always use Google or other search services to find information about items that are now wikified. In some cases I have noticed specific texts get cloned from original sites at universities or labs onto wiki pages. And when that happens, each text loses part of its value. Since search engines are now more likely to point you to the wikified versions, the Web has lost some of its flavor in casual use.
When you see the context in which something was written and you know who the author was beyond just a name, you learn so much more than when you find the same text placed in the anonymous, faux-authoritative, anti-contextual brew of the Wikipedia. The question isn't just one of authentication and accountability, though those are important, but something more subtle. A voice should be sensed as a whole. You have to have a chance to sense personality in order for language to have its full meaning. Personal Web pages do that, as do journals and books. Even Britannica has an editorial voice, which some people have criticized as being vaguely too "Dead White Men."
If an ironic Web site devoted to destroying cinema claimed that I was a filmmaker, it would suddenly make sense. That would be an authentic piece of text. But placed out of context in the Wikipedia, it becomes drivel.
Myspace is another recent experiment that has become even more influential than the Wikipedia. Like the Wikipedia, it adds just a little to the powers already present on the Web in order to inspire a dramatic shift in use. Myspace is all about authorship, but it doesn't pretend to be all-wise. You can always tell at least a little about the character of the person who made a Myspace page. But it is very rare indeed that a Myspace page inspires even the slightest confidence that the author is a trustworthy authority. Hurray for Myspace on that count!
Myspace is a richer, multi-layered source of information than the Wikipedia, although the topics the two services cover barely overlap. If you want to research a TV show in terms of what people think of it, Myspace will reveal more to you than the analogous and enormous entries in the Wikipedia.
The Wikipedia is far from being the only online fetish site for foolish collectivism. There's a frantic race taking place online to become the most "Meta" site, to be the highest level aggregator, subsuming the identity of all other sites.
The race began innocently enough with the notion of creating directories of online destinations, such as the early incarnations of Yahoo. Then came AltaVista, where one could search using an inverted database of the content of the whole Web. Then came Google, which added page rank algorithms. Then came the blogs, which varied greatly in terms of quality and importance. This led to Meta-blogs such as Boing Boing, run by identified humans, which served to aggregate blogs. In all of these formulations, real people were still in charge. An individual or individuals were presenting a personality and taking responsibility.
These Web-based designs assumed that value would flow from people. It was still clear, in all such designs, that the Web was made of people, and that ultimately value always came from connecting with real humans.
Even Google by itself (as it stands today) isn't Meta enough to be a problem. One layer of page ranking is hardly a threat to authorship, but an accumulation of many layers can create a meaningless murk, and that is another matter.
In the last year or two the trend has been to remove the scent of people, so as to come as close as possible to simulating the appearance of content emerging out of the Web as if it were speaking to us as a supernatural oracle. This is where the use of the Internet crosses the line into delusion.
Kevin Kelly, the former editor of Whole Earth Review and the founding Executive Editor of Wired, is a friend and someone who has been thinking about what he and others call the "Hive Mind." He runs a Web site called Cool Tools that's a cross between a blog and the old Whole Earth Catalog. On Cool Tools, the contributors, including me, are not a hive because we are identified.
In March, Kelly reviewed a variety of "Consensus Web filters" such as "Digg" and "Reddit" that assemble material every day from the myriad of other aggregating sites. Such sites intend to be more Meta than the sites they aggregate. There is no person taking responsibility for what appears on them, only an algorithm. The hope seems to be that the most Meta site will become the mother of all bottlenecks and receive infinite funding.
That new magnitude of Meta-ness lasted only a month. In April, Kelly reviewed a site called "popurls" that aggregates consensus Web filtering sites...and there was a new "most Meta". We now are reading what a collectivity algorithm derives from what other collectivity algorithms derived from what collectives chose from what a population of mostly amateur writers wrote anonymously.
Is "popurls" any good? I am writing this on May 27, 2006. In the last few days an experimental approach to diabetes management has been announced that might prevent nerve damage. That's huge news for tens of millions of Americans. It is not mentioned on popurls. Popurls does clue us in to this news: "Student sets simultaneous world ice cream-eating record, worst ever ice cream headache." Mainstream news sources all lead today with a serious earthquake in Java. Popurls includes a few mentions of the event, but they are buried within the aggregation of aggregate news sites like Google News. The reason the quake appears on popurls at all can be discovered only if you dig through all the aggregating layers to find the original sources, which are those rare entries actually created by professional writers and editors who sign their names. But at the layer of popurls, the ice cream story and the Javanese earthquake are at best equals, without context or authorship.
Kevin Kelly says of the "popurls" site, "There's no better way to watch the hive mind." But the hive mind is for the most part stupid and boring. Why pay attention to it?
Readers of my previous rants will notice a parallel between my discomfort with so-called "Artificial Intelligence" and the race to erase personality and be most Meta. In each case, there's a presumption that something like a distinct kin to individual human intelligence is either about to appear any minute, or has already appeared. The problem with that presumption is that people are all too willing to lower standards in order to make the purported newcomer appear smart. Just as people are willing to bend over backwards and make themselves stupid in order to make an AI interface appear smart (as happens when someone can interact with the notorious Microsoft paper clip,) so are they willing to become uncritical and dim in order to make Meta-aggregator sites appear to be coherent.
There is a pedagogical connection between the culture of Artificial Intelligence and the strange allure of anonymous collectivism online. Google's vast servers and the Wikipedia are both mentioned frequently as being the startup memory for Artificial Intelligences to come. Larry Page is quoted via a link presented to me by popurls this morning (who knows if it's accurate) as speculating that an AI might appear within Google within a few years. George Dyson has wondered if such an entity already exists on the Net, perhaps perched within Google. My point here is not to argue about the existence of Metaphysical entities, but just to emphasize how premature and dangerous it is to lower the expectations we hold for individual human intellects.
The beauty of the Internet is that it connects people. The value is in the other people. If we start to believe the Internet itself is an entity that has something to say, we're devaluing those people and making ourselves into idiots.
Compounding the problem is that new business models for people who think and write have not appeared as quickly as we all hoped. Newspapers, for instance, are on the whole facing a grim decline as the Internet takes over the feeding of the curious eyes that hover over morning coffee and, even worse, classified ads. In the new environment, Google News is for the moment better funded and enjoys a more secure future than most of the rather small number of fine reporters around the world who ultimately create most of its content. The aggregator is richer than the aggregated.
The question of new business models for content creators on the Internet is a profound and difficult topic in itself, but it must at least be pointed out that writing professionally and well takes time and that most authors need to be paid to take that time. In this regard, blogging is not writing. For example, it's easy to be loved as a blogger. All you have to do is play to the crowd. Or you can flame the crowd to get attention. Nothing is wrong with either of those activities. What I think of as real writing, however, writing meant to last, is something else. It involves articulating a perspective that is not just reactive to yesterday's moves in a conversation.
The artificial elevation of all things Meta is not confined to online culture. It is having a profound influence on how decisions are made in America.
What we are witnessing today is the alarming rise of the fallacy of the infallible collective. Numerous elite organizations have been swept off their feet by the idea. They are inspired by the rise of the Wikipedia, by the wealth of Google, and by the rush of entrepreneurs to be the most Meta. Government agencies, top corporate planning departments, and major universities have all gotten the bug.
As a consultant, I used to be asked to test an idea or propose a new one to solve a problem. In the last couple of years I've often been asked to work quite differently. You might find me and the other consultants filling out survey forms or tweaking edits to a collective essay. I'm saying and doing much less than I used to, even though I'm still being paid the same amount. Maybe I shouldn't complain, but the actions of big institutions do matter, and it's time to speak out against the collectivity fad that is upon us.
It's not hard to see why the fallacy of collectivism has become so popular in big organizations: If the principle is correct, then individuals should not be required to take on risks or responsibilities. We live in times of tremendous uncertainties coupled with infinite liability phobia, and we must function within institutions that are loyal to no executive, much less to any lower level member. Every individual who is afraid to say the wrong thing within his or her organization is safer when hiding behind a wiki or some other Meta aggregation ritual.
I've participated in a number of elite, well-paid wikis and Meta-surveys lately and have had a chance to observe the results. I have even been part of a wiki about wikis. What I've seen is a loss of insight and subtlety, a disregard for the nuances of considered opinions, and an increased tendency to enshrine the official or normative beliefs of an organization. Why isn't everyone screaming about the recent epidemic of inappropriate uses of the collective? It seems to me the reason is that bad old ideas look confusingly fresh when they are packaged as technology.
The collective rises around us in multifarious ways. What afflicts big institutions also afflicts pop culture. For instance, it has become notoriously difficult to introduce a new pop star in the music business. Even the most successful entrants have hardly ever made it past the first album in the last decade or so. The exception is American Idol. As with the Wikipedia, there's nothing wrong with it. The problem is its centrality.
More people appear to vote in this pop competition than in presidential elections, and one reason why is the instant convenience of information technology. The collective can vote by phone or by texting, and some vote more than once. The collective is flattered and it responds. The winners are likable, almost by definition.
But John Lennon wouldn't have won. He wouldn't have made it to the finals. Or if he had, he would have ended up a different sort of person and artist. The same could be said about Jimi Hendrix, Elvis, Joni Mitchell, Duke Ellington, David Byrne, Grandmaster Flash, Bob Dylan (please!), and almost anyone else who has been vastly influential in creating pop music.
As below, so above. The New York Times, of all places, has recently published op-ed pieces supporting the pseudo-idea of intelligent design. This is astonishing. The Times has become the paper of averaging opinions. Something is lost when American Idol becomes a leader instead of a follower of pop music. But when intelligent design shares the stage with real science in the paper of record, everything is lost.
How could the Times have fallen so far? I don't know, but I would imagine the process was similar to what I've seen in the consulting world of late. It's safer to be the aggregator of the collective. You get to include all sorts of material without committing to anything. You can be superficially interesting without having to worry about the possibility of being wrong.
Except when intelligent thought really matters. In that case the average idea can be quite wrong, and only the best ideas have lasting value. Science is like that.
The collective isn't always stupid. In some special cases the collective can be brilliant. For instance, there's a demonstrative ritual often presented to incoming students at business schools. In one version of the ritual, a large jar of jellybeans is placed in the front of a classroom. Each student guesses how many beans there are. While the guesses vary widely, the average is usually accurate to an uncanny degree.
This is an example of the special kind of intelligence offered by a collective. It is that peculiar trait that has been celebrated as the "Wisdom of Crowds," though I think the word "wisdom" is misleading. It is part of what makes Adam Smith's Invisible Hand clever, and is connected to the reasons Google's page rank algorithms work. It was long ago adapted to futurism, where it was known as the Delphi technique. The phenomenon is real, and immensely useful.
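For the curious, here is a minimal simulation sketch of the jellybean effect (my own illustration, not part of Lanier's essay); the bean count, the number of guessers, and the error model are assumptions chosen only to make the point visible.

# Python sketch: many wildly noisy guesses whose average lands near the truth.
import random

random.seed(1)
true_count = 850                      # assumed number of beans in the jar
guesses = []
for _ in range(200):                  # 200 students guessing independently
    error = random.uniform(0.4, 1.6)  # each guess is off by up to 60 percent
    guesses.append(true_count * error)

average = sum(guesses) / len(guesses)
print(f"true count: {true_count}, average guess: {average:.0f}")

Individual guesses range anywhere from roughly 340 to 1,360 beans, yet the average typically lands within a few percent of the true count, because independent overestimates and underestimates tend to cancel. The moment the errors become correlated, as when a crowd copies one loud voice, the effect disappears.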
But it is not infinitely useful. The collective can be stupid, too. Witness tulip crazes and stock bubbles. Hysteria over fictitious satanic cult child abductions. Y2K mania.
The reason the collective can be valuable is precisely that its peaks of intelligence and stupidity are not the same as the ones usually displayed by individuals. Both kinds of intelligence are essential.
What makes a market work, for instance, is the marriage of collective and individual intelligence. A marketplace can't exist only on the basis of having prices determined by competition. It also needs entrepreneurs to come up with the products that are competing in the first place.
In other words, clever individuals, the heroes of the marketplace, ask the questions which are answered by collective behavior. They put the jellybeans in the jar.
There are certain types of answers that ought not be provided by an individual. When a government bureaucrat sets a price, for instance, the result is often inferior to the answer that would come from a reasonably informed collective that is reasonably free of manipulation or runaway internal resonances. But when a collective designs a product, you get design by committee, which is a derogatory expression for a reason.
Here I must take a moment to comment on Linux and similar efforts. The various formulations of "open" or "free" software are different from the Wikipedia and the race to be most Meta in important ways. Linux programmers are not anonymous and in fact personal glory is part of the motivational engine that keeps such enterprises in motion. But there are similarities, and the lack of a coherent voice or design sensibility in an esthetic sense is one negative quality of both open source software and the Wikipedia.
These movements are at their most efficient while building hidden information plumbing layers, such as Web servers. They are hopeless when it comes to producing fine user interfaces or user experiences. If the code that ran the Wikipedia user interface were as open as the contents of the entries, it would churn itself into impenetrable muck almost immediately. The collective is good at solving problems which demand results that can be evaluated by uncontroversial performance parameters, but bad when taste and judgment matter.
Collectives can be just as stupid as any individual, and in important cases, stupider. The interesting question is whether it's possible to map out where the one is smarter than the many.
There is a lot of history to this topic, and varied disciplines have lots to say. Here is a quick pass at where I think the boundary between effective collective thought and nonsense lies: The collective is more likely to be smart when it isn't defining its own questions, when the goodness of an answer can be evaluated by a simple result (such as a single numeric value,) and when the information system which informs the collective is filtered by a quality control mechanism that relies on individuals to a high degree. Under those circumstances, a collective can be smarter than a person. Break any one of those conditions and the collective becomes unreliable or worse.
Meanwhile, an individual best achieves optimal stupidity on those rare occasions when one is both given substantial powers and insulated from the results of his or her actions.
If the above criteria have any merit, then there is an unfortunate convergence. The setup for the most stupid collective is also the setup for the most stupid individuals.
Every authentic example of collective intelligence that I am aware of also shows how that collective was guided or inspired by well-meaning individuals. These people focused the collective and in some cases also corrected for some of the common hive mind failure modes. The balancing of influence between people and collectives is the heart of the design of democracies, scientific communities, and many other long-standing projects. There's a lot of experience out there to work with. A few of these old ideas provide interesting new ways to approach the question of how to best use the hive mind.
The pre-Internet world provides some great examples of how personality-based quality control can improve collective intelligence. For instance, an independent press provides tasty news about politicians by reporters with strong voices and reputations, like the Watergate reporting of Woodward and Bernstein. Other writers provide product reviews, such as Walt Mossberg in The Wall Street Journal and David Pogue in The New York Times. Such journalists inform the collective's determination of election results and pricing. Without an independent press, composed of heroic voices, the collective becomes stupid and unreliable, as has been demonstrated in many historical instances. (Recent events in America have reflected the weakening of the press, in my opinion.)
Scientific communities likewise achieve quality through a cooperative process that includes checks and balances, and ultimately rests on a foundation of goodwill and "blind" elitism — blind in the sense that ideally anyone can gain entry, but only on the basis of a meritocracy. The tenure system and many other aspects of the academy are designed to support the idea that individual scholars matter, not just the process or the collective.
Another example: Entrepreneurs aren't the only "heroes" of a marketplace. The role of a central bank in an economy is not the same as that of a communist party official in a centrally planned economy. Even though setting an interest rate sounds like the answering of a question, it is really more like the asking of a question. The Fed asks the market to answer the question of how to best optimize for lowering inflation, for instance. While that might not be the question everyone would want to have asked, it is at least coherent.
Yes, there have been plenty of scandals in government, the academy and in the press. No mechanism is perfect, but still here we are, having benefited from all of these institutions. There certainly have been plenty of bad reporters, self-deluded academic scientists, incompetent bureaucrats, and so on. Can the hive mind help keep them in check? The answer provided by experiments in the pre-Internet world is "yes," but only provided some signal processing is placed in the loop.
Some of the regulating mechanisms for collectives that have been most successful in the pre-Internet world can be understood in part as modulating the time domain. For instance, what if a collective moves too readily and quickly, jittering instead of settling down to provide a single answer? This happens on the most active Wikipedia entries, for example, and has also been seen in some speculation frenzies in open markets.
One service performed by representative democracy is low-pass filtering. Imagine the jittery shifts that would take place if a wiki were put in charge of writing laws. It's a terrifying thing to consider. Super-energized people would be struggling to shift the wording of the tax-code on a frantic, never-ending basis. The Internet would be swamped.
Such chaos can be avoided in the same way it already is, albeit imperfectly, by the slower processes of elections and court proceedings. The calming effect of orderly democracy achieves more than just the smoothing out of peripatetic struggles for consensus. It also reduces the potential for the collective to suddenly jump into an over-excited state when too many rapid changes to answers coincide in such a way that they don't cancel each other out. (Technical readers will recognize familiar principles in signal processing.)
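For those technical readers, here is a minimal sketch of the low-pass idea being borrowed (my own illustration under assumed numbers, not anything from the essay): an exponential moving average lets slow trends through while damping day-to-day jitter, which is roughly the role Lanier assigns to elections and court proceedings.

# Python sketch: smoothing a jittery "collective opinion" with a low-pass filter.
def low_pass(values, alpha=0.2):
    """Exponential moving average; smaller alpha means heavier smoothing."""
    state = values[0]
    smoothed = []
    for v in values:
        state = alpha * v + (1 - alpha) * state
        smoothed.append(round(state, 1))
    return smoothed

jittery = [10, 90, 15, 85, 20, 80, 25, 75, 30, 70]  # assumed whipsawing signal
print(low_pass(jittery))

The raw series flips violently between extremes at every step; the filtered series drifts gradually, so no single burst of energy can swing the outcome, at the cost of responding more slowly to genuine change.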
The Wikipedia has recently slapped a crude low-pass filter on the jitteriest entries, such as "President George W. Bush." There's now a limit to how often a particular person can remove someone else's text fragments. I suspect that this will eventually have to evolve into an approximate mirror of democracy as it was before the Internet arrived.
The reverse problem can also appear. The hive mind can be on the right track, but moving too slowly. Sometimes collectives would yield brilliant results given enough time but there isn't enough time. A problem like global warming would automatically be addressed eventually if the market had enough time to respond to it, for instance. Insurance rates would climb, and so on. Alas, in this case there isn't enough time, because the market conversation is slowed down by the legacy effect of existing investments. Therefore some other process has to intervene, such as politics invoked by individuals.
Another example of the slow hive problem: There was a lot of technology developed slowly in the millennia before there was a clear idea of how to be empirical, how to have a peer reviewed technical literature and an education based on it, and before there was an efficient market to determine the value of inventions. What is crucial to notice about modernity is that structure and constraints were part of what sped up the process of technological development, not just pure openness and concessions to the collective.
Let's suppose that the Wikipedia will indeed become better in some ways, as is claimed by the faithful, over a period of time. We might still need something better sooner.
Some wikitopians explicitly hope to see education subsumed by wikis. It is at least possible that in the fairly near future enough communication and education will take place through anonymous Internet aggregation that we could become vulnerable to a sudden dangerous empowering of the hive mind. History has shown us again and again that a hive mind is a cruel idiot when it runs on autopilot. Nasty hive mind outbursts have been flavored Maoist, Fascist, and religious, and these are only a small sampling. I don't see why there couldn't be future social disasters that appear suddenly under the cover of technological utopianism. If wikis are to gain any more influence they ought to be improved by mechanisms like the ones that have worked tolerably well in the pre-Internet world.
The hive mind should be thought of as a tool. Empowering the collective does not empower individuals — just the reverse is true. There can be useful feedback loops set up between individuals and the hive mind, but the hive mind is too chaotic to be fed back into itself.
These are just a few ideas about how to train a potentially dangerous collective and not let it get out of the yard. When there's a problem, you want it to bark but not bite you.
The illusion that what we already have is close to good enough, or that it is alive and will fix itself, is the most dangerous illusion of all. By avoiding that nonsense, it ought to be possible to find a humanistic and practical way to maximize value of the collective on the Web without turning ourselves into idiots. The best guiding principle is to always cherish individuals first.
(Jaron Lanier is a film director. He writes a monthly column for Discover Magazine.)
2. The Rise of Crowdsourcing
Remember outsourcing? Sending jobs to India and China is so 2003. The new pool of cheap labor: everyday people using their spare cycles to create content, solve problems, even do corporate R & D.
By Jeff Howe
1. The Professional
Claudia Menashe needed pictures of sick people. A project director at the National Health Museum in Washington, DC, Menashe was putting together a series of interactive kiosks devoted to potential pandemics like the avian flu. An exhibition designer had created a plan for the kiosk itself, but now Menashe was looking for images to accompany the text. Rather than hire a photographer to take shots of people suffering from the flu, Menashe decided to use preexisting images – stock photography, as it’s known in the publishing industry.
In October 2004, she ran across a stock photo collection by Mark Harmel, a freelance photographer living in Manhattan Beach, California. Harmel, whose wife is a doctor, specializes in images related to the health care industry. “Claudia wanted people sneezing, getting immunized, that sort of thing,” recalls Harmel, a slight, soft-spoken 52-year-old.
The National Health Museum has grand plans to occupy a spot on the National Mall in Washington by 2012, but for now it’s a fledgling institution with little money. “They were on a tight budget, so I charged them my nonprofit rate,” says Harmel, who works out of a cozy but crowded office in the back of the house he shares with his wife and stepson. He offered the museum a generous discount: $100 to $150 per photograph. “That’s about half of what a corporate client would pay,” he says. Menashe was interested in about four shots, so for Harmel, this could be a sale worth $600.
After several weeks of back-and-forth, Menashe emailed Harmel to say that, regretfully, the deal was off. “I discovered a stock photo site called iStockphoto,” she wrote, “which has images at very affordable prices.” That was an understatement. The same day, Menashe licensed 56 pictures through iStockphoto – for about $1 each.
iStockphoto, which grew out of a free image-sharing exchange used by a group of graphic designers, had undercut Harmel by more than 99 percent. How? By creating a marketplace for the work of amateur photographers – homemakers, students, engineers, dancers. There are now about 22,000 contributors to the site, which charges between $1 and $5 per basic image. (Very large, high-resolution pictures can cost up to $40.) Unlike professionals, iStockers don’t need to clear $130,000 a year from their photos just to break even; an extra $130 does just fine. “I negotiate my rate all the time,” Harmel says. “But how can I compete with a dollar?”
He can’t, of course. For Harmel, the harsh economics lesson was clear: The product Harmel offers is no longer scarce. Professional-grade cameras now cost less than $1,000. With a computer and a copy of Photoshop, even entry-level enthusiasts can create photographs rivaling those by professionals like Harmel. Add the Internet and powerful search technology, and sharing these images with the world becomes simple.
At first, the stock industry aligned itself against iStockphoto and other so-called microstock agencies like ShutterStock and Dreamstime. Then, in February, Getty Images, the largest agency by far with more than 30 percent of the global market, purchased iStockphoto for $50 million. “If someone’s going to cannibalize your business, better it be one of your other businesses,” says Getty CEO Jonathan Klein. iStockphoto’s revenue is growing by about 14 percent a month and the service is on track to license about 10 million images in 2006 – several times what Getty’s more expensive stock agencies will sell. iStockphoto’s clients now include bulk photo purchasers like IBM and United Way, as well as the small design firms once forced to go to big stock houses. “I was using Corbis and Getty, and the image fees came out of my design fees, which kept my margin low,” notes one UK designer in an email to the company. “iStockphoto’s micro-payment system has allowed me to increase my profit margin.”

Welcome to the age of the crowd. Just as distributed computing projects like UC Berkeley’s SETI@home have tapped the unused processing power of millions of individual computers, so distributed labor networks are using the Internet to exploit the spare processing power of millions of human brains. The open source software movement proved that a network of passionate, geeky volunteers could write code just as well as the highly paid developers at Microsoft or Sun Microsystems. Wikipedia showed that the model could be used to create a sprawling and surprisingly comprehensive online encyclopedia. And companies like eBay and MySpace have built profitable businesses that couldn’t exist without the contributions of users.
All these companies grew up in the Internet age and were designed to take advantage of the networked world. But now the productive potential of millions of plugged-in enthusiasts is attracting the attention of old-line businesses, too. For the last decade or so, companies have been looking overseas, to India or China, for cheap labor. But now it doesn’t matter where the laborers are – they might be down the block, they might be in Indonesia – as long as they are connected to the network.
Technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. Hobbyists, part-timers, and dabblers suddenly have a market for their efforts, as smart companies in industries as disparate as pharmaceuticals and television discover ways to tap the latent talent of the crowd. The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing.
It took a while for Harmel to recognize what was happening. “When the National Health Museum called, I’d never heard of iStockphoto,” he says. “But now, I see it as the first hole in the dike.” In 2000, Harmel made roughly $69,000 from a portfolio of 100 stock photographs, a tidy addition to what he earned from commissioned work. Last year his stock business generated less money – $59,000 – from more than 1,000 photos. That’s quite a bit more work for less money.
Harmel isn’t the only photographer feeling the pinch. Last summer, there was a flurry of complaints on the Stock Artists Alliance online forum. “People were noticing a significant decline in returns on their stock portfolios,” Harmel says. “I can’t point to iStockphoto and say it’s the culprit, but it has definitely put downward pressure on prices.” As a result, he has decided to shift the focus of his business to assignment work. “I just don’t see much of a future for professional stock photography,” he says.
2. The Packager
“Is that even a real horse? It looks like it doesn’t have any legs,” says Michael Hirschorn, executive vice president of original programming and production at VH1 and a creator of the cable channel’s hit show Web Junk 20. The program features the 20 most popular videos making the rounds online in any given week. Hirschorn and the rest of the show’s staff are gathered in the artificial twilight of a VH1 editing room, reviewing their final show of the season. The horse in question is named Patches, and it’s sitting in the passenger seat of a convertible at a McDonald’s drive-through window. The driver orders a cheeseburger for Patches. “Oh, he’s definitely real,” a producer replies. “We’ve got footage of him drinking beer.” The crew breaks into laughter, and Hirschorn asks why they’re not using that footage. “Standards didn’t like it,” a producer replies. Standards – aka Standards and Practices, the people who decide whether a show violates the bounds of taste and decency – had no such problem with Elvis the Robocat or the footage of a bicycle racer being attacked by spectators and thrown violently from a bridge.

Web Junk 20 brings viewers all that and more, several times a week. In the new, democratic age of entertainment by the masses, for the masses, stupid pet tricks figure prominently.
The show was the first regular program to repackage the Internet’s funniest home videos, but it won’t be the last. In February, Bravo launched a series called Outrageous and Contagious: Viral Videos, and USA Network has a similar effort in the works. The E! series The Soup has a segment called “Cybersmack,” and NBC has a pilot in development hosted by Carson Daly called Carson Daly’s Cyberhood, which will attempt to bring beer-drinking farm animals to the much larger audiences of network TV. Al Gore’s Current TV is placing the most faith in the model: More than 30 percent of its programming consists of material submitted by viewers.
Viral videos are a perfect fit for VH1, which knows how to repurpose content to make compelling TV on a budget. The channel reinvented itself in 1996 as a purveyor of tawdry nostalgia with Pop-Up Video and perfected the form six years later with I Love the 80s. “That show was a good model because it got great ratings, and we licensed the clips” – quick hits from such cultural touchstones as The A-Team and Fatal Attraction – “on the cheap,” Hirschorn says. (Full disclosure: I once worked for Hirschorn at Inside.com.) But the C-list celebrity set soon caught on to VH1’s searing brand of ridicule. “It started to get more difficult to license the clips,” says Hirschorn, who has the manner of a laid-back English professor. “And we’re spending more money now to get them, as our ratings have improved.”
But Hirschorn knew of a source for even more affordable clips. He had been watching the growth of video on the Internet and figured there had to be a way to build a show around it. “I knew we offered something YouTube couldn’t: television,” he says. “Everyone wants to be on TV.” At about the same time, VH1’s parent company, Viacom, purchased iFilm – a popular repository of video clips – for $49 million. Just like that, Hirschorn had access to a massive supply of viral videos. And because iFilm already ranks videos by popularity, the service came with an infrastructure for separating the gold from the god-awful. The model’s most winning quality, as Hirschorn readily admits, is that it’s “incredibly cheap” – cheaper by far than anything else VH1 produces, which is to say, cheaper than almost anything else on television. A single 30-minute episode costs somewhere in the mid-five figures – about a tenth of what the channel pays to produce so noTORIous, a scripted comedy featuring Tori Spelling that premiered in April. And if the model works on a network show like Carson Daly’s Cyberhood, the savings will be much greater: The average half hour of network TV comedy now costs nearly $1 million to produce.
Web Junk 20 premiered in January, and ratings quickly exceeded even Hirschorn’s expectations. In its first season, the show is averaging a respectable half-million viewers in the desirable 18-to-49 age group, which Hirschorn says is up more than 40 percent from the same Friday-night time slot last year. The numbers helped persuade the network to bring Web Junk 20 back for another season.
Hirschorn thinks the crowd will be a crucial component of TV 2.0. “I can imagine a time when all of our shows will have a user-generated component,” he says. The channel recently launched Air to the Throne, an online air guitar contest, in which viewers serve as both talent pool and jury. The winners will be featured during the VH1 Rock Honors show premiering May 31. Even VH1’s anchor program, Best Week Ever, is including clips created by viewers.
But can the crowd produce enough content to support an array of shows over many years? It’s something Brian Graden, president of entertainment for MTV Music Networks Group, is concerned about. “We decided not to do 52 weeks a year of Web Junk, because we don’t want to burn the thing,” he says. Rather than relying exclusively on the supply of viral clips, Hirschorn has experimented with soliciting viewers to create videos expressly for Web Junk 20. Early results have been mixed. Viewers sent in nearly 12,000 videos for the Show Us Your Junk contest. “The response rate was fantastic,” says Hirschorn as he and other staffers sit in the editing room. But, he adds, “almost all of them were complete crap.”
Choosing the winners, in other words, was not so difficult. “We had about 20 finalists.” But Hirschorn remains confident that as user-generated TV matures, the users will become more proficient and the networks better at ferreting out the best of the best. The sheer force of consumer behavior is on his side. Late last year the Pew Internet & American Life Project released a study revealing that 57 percent of 12- to 17-year-olds online – 12 million individuals – are creating content of some sort and posting it to the Web. “Even if the signal-to-noise ratio never improves – which I think it will, by the way – that’s an awful lot of good material,” Hirschorn says. “I’m confident that in the end, individual pieces will fail but the model will succeed.”
3. The Tinkerer
The future of corporate R&D can be found above Kelly’s Auto Body on Shanty Bay Road in Barrie, Ontario. This is where Ed Melcarek, 57, keeps his “weekend crash pad,” a one-bedroom apartment littered with amplifiers, a guitar, electrical transducers, two desktop computers, a trumpet, half of a pontoon boat, and enough electric gizmos to stock a RadioShack. On most Saturdays, Melcarek comes in, pours himself a St. Remy, lights a Player cigarette, and attacks problems that have stumped some of the best corporate scientists at Fortune 100 companies.
Not everyone in the crowd wants to make silly videos. Some have the kind of scientific talent and expertise that corporate America is now finding a way to tap. In the process, forward-thinking companies are changing the face of R&D. Exit the white lab coats; enter Melcarek – one of over 90,000 “solvers” who make up the network of scientists on InnoCentive, the research world’s version of iStockphoto.
Pharmaceutical maker Eli Lilly funded InnoCentive’s launch in 2001 as a way to connect with brainpower outside the company – people who could help develop drugs and speed them to market. From the outset, InnoCentive threw open the doors to other firms eager to access the network’s trove of ad hoc experts. Companies like Boeing, DuPont, and Procter & Gamble now post their most ornery scientific problems on InnoCentive’s Web site; anyone on InnoCentive’s network can take a shot at cracking them.
The companies – or seekers, in InnoCentive parlance – pay solvers anywhere from $10,000 to $100,000 per solution. (They also pay InnoCentive a fee to participate.) Jill Panetta, InnoCentive’s chief scientific officer, says more than 30 percent of the problems posted on the site have been cracked, “which is 30 percent more than would have been solved using a traditional, in-house approach.”
The solvers are not who you might expect. Many are hobbyists working from their proverbial garage, like the University of Dallas undergrad who came up with a chemical to use in art restoration, or the Cary, North Carolina, patent lawyer who devised a novel way to mix large batches of chemical compounds.
This shouldn’t be surprising, notes Karim Lakhani, a lecturer in technology and innovation at MIT, who has studied InnoCentive. “The strength of a network like InnoCentive’s is exactly the diversity of intellectual background,” he says. Lakhani and his three coauthors surveyed 166 problems posted to InnoCentive from 26 different firms. “We actually found the odds of a solver’s success increased in fields in which they had no formal expertise,” Lakhani says. He has put his finger on a central tenet of network theory, what pioneering sociologist Mark Granovetter describes as “the strength of weak ties.” The most efficient networks are those that link to the broadest range of information, knowledge, and experience.
Which helps explain how Melcarek solved a problem that stumped the in-house researchers at Colgate-Palmolive. The giant packaged goods company needed a way to inject fluoride powder into a toothpaste tube without it dispersing into the surrounding air. Melcarek knew he had a solution by the time he’d finished reading the challenge: Impart an electric charge to the powder while grounding the tube. The positively charged fluoride particles would be attracted to the tube without any significant dispersion.
“It was really a very simple solution,” says Melcarek. Why hadn’t Colgate thought of it? “They’re probably test tube guys without any training in physics.” Melcarek earned $25,000 for his efforts. Paying Colgate-Palmolive’s R&D staff to produce the same solution could have cost several times that amount – if they even solved it at all. Melcarek says he was elated to win. “These are rocket-science challenges,” he says. “It really reinforced my confidence in what I can do.”
Melcarek, who favors thick sweaters and a floppy fishing hat, has charted an unconventional course through the sciences. He spent four years earning his master’s degree at the world-class particle accelerator in Vancouver, British Columbia, but decided against pursuing a PhD. “I had an offer from the private sector,” he says, then pauses. “I really needed the money.” A succession of “unsatisfying” engineering jobs followed, none of which fully exploited Melcarek’s scientific training or his need to tinker. “I’m not at my best in a 9-to-5 environment,” he says. Working sporadically, he has designed products like heating vents and industrial spray-painting robots. Not every quick and curious intellect can land a plum research post at a university or privately funded lab. Some must make HVAC systems.
For Melcarek, InnoCentive has been a ticket out of this scientific backwater. For the past three years, he has logged onto the network’s Web site a few times a week to look at new problems, called challenges. They are categorized as either chemistry or biology problems. Melcarek has formal training in neither discipline, but he quickly realized this didn’t hinder him when it came to chemistry. “I saw that a lot of the chemistry challenges could be solved using electromechanical processes I was familiar with from particle physics,” he says. “If I don’t know what to do after 30 minutes of brainstorming, I give up.” Besides the fluoride injection challenge, Melcarek also successfully came up with a method for purifying silicone-based solvents. That challenge paid $10,000. Other Melcarek solutions have been close runners-up, and he currently has two more up for consideration. “Not bad for a few weeks’ work,” he says with a chuckle.
It’s also not a bad deal for the companies that can turn to the crowd to help curb the rising cost of corporate research. “Everyone I talk to is facing a similar issue in regards to R&D,” says Larry Huston, Procter & Gamble’s vice president of innovation and knowledge. “Every year research budgets increase at a faster rate than sales. The current R&D model is broken.”
Huston has presided over a remarkable about-face at P&G, a company whose corporate culture was once so insular it became known as “the Kremlin on the Ohio.” By 2000, the company’s research costs were climbing, while sales remained flat. The stock price fell by more than half, and Huston led an effort to reinvent the way the company came up with new products. Rather than cut P&G’s sizable in-house R&D department (which currently employs 9,000 people), he decided to change the way they worked.
Seeing that the company’s most successful products were a result of collaboration between different divisions, Huston figured that even more cross-pollination would be a good thing. Meanwhile, P&G had set a goal of increasing the number of innovations acquired from outside its walls from 15 percent to 50 percent. Six years later, critical components of more than 35 percent of the company’s initiatives were generated outside P&G. As a result, Huston says, R&D productivity is up 60 percent, and the stock has returned to five-year highs. “It has changed how we define the organization,” he says. “We have 9,000 people on our R&D staff and up to 1.5 million researchers working through our external networks. The line between the two is hard to draw.”
P&G is one of InnoCentive’s earliest and best customers, but the company works with other crowdsourcing networks as well. YourEncore, for example, allows companies to find and hire retired scientists for one-off assignments. NineSigma is an online marketplace for innovations, matching seeker companies with solvers in a marketplace similar to InnoCentive. “People mistake this for outsourcing, which it most definitely is not,” Huston says. “Outsourcing is when I hire someone to perform a service and they do it and that’s the end of the relationship. That’s not much different from the way employment has worked throughout the ages. We’re talking about bringing people in from outside and involving them in this broadly creative, collaborative process. That’s a whole new paradigm.”
4. The Masses
In the late 1760s, a Hungarian nobleman named Wolfgang von Kempelen built the first machine capable of beating a human at chess. Called the Turk, von Kempelen’s automaton consisted of a small wooden cabinet, a chessboard, and the torso of a turbaned mannequin. The Turk toured Europe to great acclaim, even besting such luminaries as Benjamin Franklin and Napoleon. It was, of course, a hoax. The cabinet hid a flesh-and-blood chess master. The Turk was a fancy-looking piece of technology that was really powered by human intelligence.

Which explains why Amazon.com has named its new crowdsourcing engine after von Kempelen’s contraption. Amazon Mechanical Turk is a Web-based marketplace that helps companies find people to perform tasks computers are generally lousy at – identifying items in a photograph, skimming real estate documents to find identifying information, writing short product descriptions, transcribing podcasts. Amazon calls the tasks HITs (human intelligence tasks); they’re designed to require very little time, and consequently they offer very little compensation – most from a few cents to a few dollars.
InnoCentive and iStockphoto are labor markets for specialized talents, but just about anyone possessing basic literacy can find something to do on Mechanical Turk. It’s crowdsourcing for the masses. So far, the program has a mixed track record: After an initial burst of activity, the amount of work available from requesters – companies offering work on the site – has dropped significantly. “It’s gotten a little gimpy,” says Alan Hatcher, founder of Turker Nation, a community forum. “No one’s come up with the killer app yet.” And not all of the Turkers are human: Some would-be workers use software as a shortcut to complete the tasks, but the quality suffers. “I think half of the people signed up are trying to pull a scam,” says one requester who asked not to be identified. “There really needs to be a way to kick people off the island.”
Peter Cohen, the program’s director, acknowledges that Mechanical Turk, launched in beta in November, is a work in progress. (Amazon refuses to give a date for its official launch.) “This is a very new idea, and it’s going to take some time for people to wrap their heads around it,” Cohen says. “We’re at the tippy-top of the iceberg.”
A few companies, however, are already taking full advantage of the Turkers. Sunny Gupta runs a software company called iConclude just outside Seattle. The firm creates programs that streamline tech support tasks for large companies, like Alaska Airlines. The basic unit of iConclude’s product is the repair flow, a set of steps a tech support worker should take to resolve a problem.
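The article never shows what a repair flow looks like inside iConclude's product, but the concept is easy to sketch: an ordered list of diagnostic steps, walked in sequence until one resolves the problem. The Python below is purely illustrative and assumes nothing about iConclude's actual format.

# A purely illustrative model of a "repair flow": ordered diagnostic steps,
# tried in sequence until one of them resolves the problem.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    description: str
    resolved: Callable[[], bool]   # returns True if this step fixed the issue

@dataclass
class RepairFlow:
    problem: str
    steps: List[Step] = field(default_factory=list)

    def run(self) -> bool:
        for step in self.steps:
            print("Trying:", step.description)
            if step.resolved():
                print("Resolved.")
                return True
        print("Escalating to a human engineer.")
        return False

# Hypothetical usage: a tiny flow for an unresponsive application server.
flow = RepairFlow(
    problem="Application server not responding",
    steps=[
        Step("Ping the host", lambda: False),
        Step("Restart the application service", lambda: True),
    ],
)
flow.run()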
Most problems that iConclude’s software addresses aren’t complicated or time-consuming, Gupta explains. But only people with experience in Java and Microsoft systems have the knowledge required to write these repair flows. Finding and hiring them is a big and expensive challenge. “We had been outsourcing the writing of our repair flows to a firm in Boise, Idaho,” he says from a small office overlooking a Tully’s Coffee. “We were paying $2,000 for each one.”
As soon as Gupta heard about Mechanical Turk, he suspected he could use it to find people with the sort of tech support background he needed. After a couple of test runs, iConclude was able to identify about 80 qualified Turkers, all of whom were eager to work on iConclude’s HITs. “Two of them had quit their jobs to raise their kids,” Gupta says. “They might have been making six figures in their previous lives, but now they were happy just to put their skills to some use.”
Gupta turns his laptop around to show me a flowchart on his screen. “This is what we were paying $2,000 for. But this one,” he says, “was authored by one of our Turkers.” I ask how much he paid. His answer: “Five dollars.”
(Contributing editor Jeff Howe (jeff_howe@wiredmag.com) wrote about MySpace in issue 13.11. To read more about crowdsourcing, please visit Jeff Howe’s blog.)
1. From CNN.com: Caught up in the 'Net
How the Internet has quietly changed our lives
"Within a few years, the Internet will turn business upside down. Be prepared -- or die."
So stated The Economist in 1999, a few months before Internet stocks crashed, and the web boom went spectacularly bust.
The Internet has always been associated with hype -- its prophets proclaimed it as a bold techno wonderland that would change the world.
During the early dotcom boom days, investors fired up by a hysterical media invested millions of dollars at breakneck speed in companies so wobbly and hastily put together they could never last more than a few months. In the end, most didn't.
But now that we've all had a few years to live with the Internet, The Economist's prediction -- and many others like it -- seems much more reasonable, possibly even understated. The Web does seem to be genuinely changing the world, and those companies and organizations not ready for it are dying as predicted.
Perhaps it has happened without the flash and fuss and revolutionary haste we were promised in those early, heady days, but quietly and effortlessly the Internet has become an integral part of our everyday lives, at home and at work.
The shakeout of weak tech companies in 2000/2001 is now seen as a symptom common to most technological revolutions, marking the point from which the strong companies would grow to increasingly dominate both the marketplace -- and our lives.
"Socially, locally and globally - nothing else has had such a huge influence over the population," says Caleb Chung, co-founder and chief inventor at tech company UGOBE.
"It's multiplied my productivity by a factor of about five," admits philosopher Daniel Dennet. Think how many times you check the web for news, or look for magazine articles. (You're reading this online, after all.) Or how often you read your email every day. When did you start doing that?
Think about how much online shopping you do now. Think about how strong online brands, such as Amazon, Google or AOL, suddenly are. Think about how normal it seems to be able to access the web on a mobile device, or hook up to Wi-Fi in a café or on a train. All of these developments have happened in the last few years.
"[The Internet has] had such a huge impact that I am currently unable to remember how it was to work without it," says anthropologist Daniela Cerqui.
Almost all businesses now use the web to communicate with their customers, order stock and manage accounts. Financial services -- especially the insurance industry -- have blossomed online. Retailers no longer see the web as a threat, and business is no longer divided into on- and offline enterprises, with most high street stores competing in both realms. Telcos are launching online telephone services. Even the music, TV and film industries, which until recently saw the web as threat number one, are gingerly beginning to embrace downloads.
National newspapers are finding a global audience through their Web sites, and Podcasting is blurring the line between print and broadcast media. Blogs are challenging professional journalism and shepherding the news agenda, and readers/viewers are increasingly becoming part of the news by providing their own video clips, pictures and observations to the public through the wealth of interactive features offered by many broadcasters online.
As the marketplace mutates, so consumers have benefited: primarily from an explosion of choice, but also because online bargain hunting is now so easy -- and product comparison sites so ubiquitous -- that prices have been forced down, and business has to offer a better service to stay competitive. Plus being able to shop from your desktop has saved us all a lot of time and made essential purchases far more convenient.
But as well as changing the way we access our banks, news, music, film -- products and services of all kinds -- the Web is genuinely revolutionizing the way we communicate with each other.
"The Internet has had an enormous effect on my work and on my life," says John Searle. "I am able to communicate nearly instantly with people all over the world and the access to information [it provides] enables me to find out what I need to know much more rapidly and efficiently than I ever could by going to a library."
"I can access as much research on any topic as my brain can handle in a given time period," says "robot psychologist" Joanne Pransky. "I am able to work anywhere in the world as long as I have Internet access. I can also play bridge, my favorite pastime, anytime, anywhere, on-line with my favorite partner, my elderly mother who lives 3,000 miles from me."
But perhaps one of the greatest surprises has not been how much the Web has changed our ability to communicate, but how our desire to communicate has actively driven change on the web. Social sites -- like MySpace.com with its 50 million plus members -- have risen from nowhere to form a key part of the social network of many people.
These are genuinely original concepts, without precedent in the offline world, and mark a new direction where innovation is outstripping our existing understanding of what we want -- or where the Web can take us socially and culturally.
"Without question the ability to communicate, share data, develop projects jointly, network has magnified the human mind has changed everything," says Peter Diamandis. "The Internet is the nervous system of a new developing 'meta-intelligence'."
But these rapid changes to something as fundamental to our humanity as the way we communicate are not without their concerns.
"As an anthropologist, I stand back to try to understand what all this means," says Cerqui. "What I see is that human beings are flexible and able to get used to almost everything. And once we are used to a new kind of technology, we become unable to live without it. But this does not necessarily means that it is an improvement."
"From a direct social interaction perspective, the Internet has had a negative impact on me," says Pransky. "It may be several days before I have the need to leave my house and have contact with the "outside" world. It has been awhile since I have worked in a group situation. After we have dinner, my family and I often retire to our own individual, confined use of the Internet."
"Technology is not the solution to all problems," says Cerqui. "Social problems ought to be faced with social solutions. For instance, you do not solve the problem of people feeling lonely by connecting them to the Internet, but you contribute in establishing a new kind of society where face-to-face interaction is less important than mediated communication."
Where historically the extent of our social contact was limited to our immediate community, and our information sources restricted by practical, physical limits of conversation and the printed page -- suddenly now we are all at the hub of a personal network wired up to the entire world. It can, at times, seem all too much.
"The pace, the vast wealth of information coming from all directions -- how the heck can you keep up when it comes at you like this? Yes, it seems too much to assimilate," says Mark Reed, Yale Professor of Engineering and Applied Science.
But Reed argues that however confusing and overwhelming the web seems now, overcoming this helplessness will be the next stage in our social evolution.
"Watch a child play a computer game or surf the Internet -- truly child's play. How parochial of us to assume that just because our imprinted minds can't keep up, that fresh new minds won't be able to either. We are amazingly adaptable, which is why we survived -- our progeny will not only adapt, they will excel.
"However, we need to give them the right tools. We need to teach them to think critically and objectively -- teach them to grasp scientific methodology and embrace technological literacy.
"Unfortunately our society does a very poor job of this. The future of the human race is too important to leave to politicians and corporations. A scientifically educated global population will help us focus on the truly important problems, such as energy -- arguably the most important crisis we as a species will face -- instead of wasting efforts on petty squabbles for short term economic and political gain."
However we teach our children and however our society evolves, many scientists already have a clear vision of the way technology is leading us. The 'Singularity' -- the fusion of human, machine and the communication capacity of the web -- may enable a spectacular and fundamental shift in our understanding of human consciousness.
"I am still a big believer in Artificial Intelligence; new software 'shells' that surround us as individuals and becomes our interface with the outside world," says Diamandi. "These interfaces will allow us to communicate with individuals and machines more efficiently.
"The Internet will merge into these software shells, serving as a global nervous system interconnecting people to people in the way single cell life-forms grouped into multi-cellular organisms and eventually into an organism as complex as the human body."
3. A New Open-Source Politics
Just as Linux lets users design their own operating systems, so 'netroots' politicos may redesign our nominating system.
By Jonathan Alter
June 5, 2006 issue - Bob Schieffer of CBS News made a good point on "The Charlie Rose Show" last week. He said that successful presidents have all skillfully exploited the dominant medium of their times. The Founders were eloquent writers in the age of pamphleteering. Franklin D. Roosevelt restored hope in 1933 by mastering radio. And John F. Kennedy was the first president elected because of his understanding of television.
Will 2008 bring the first Internet president? Last time, Howard Dean and later John Kerry showed that the whole idea of "early money" is now obsolete in presidential politics. The Internet lets candidates who catch fire raise millions in small donations practically overnight. That's why all the talk of Hillary Clinton's "war chest" making her the front runner for 2008 is the most hackneyed punditry around. Money from wealthy donors remains the essential ingredient in most state and local campaigns, but "free media" shapes the outcome of presidential races, and the Internet is the freest media of all.
No one knows exactly where technology is taking politics, but we're beginning to see some clues. For starters, the longtime stranglehold of media consultants may be over. In 2004, Errol Morris, the director of "The Thin Blue Line" and "The Fog of War," on his own initiative made several brilliant anti-Bush ads (they featured lifelong Republicans explaining why they were voting for Kerry). Not only did Kerry not air the ads, he told me recently he never even knew they existed. In 2008, any presidential candidate with half a brain will let a thousand ad ideas bloom (or stream) online and televise only those that are popular downloads. Deferring to "the wisdom of crowds" will be cheaper and more effective.
Open-source politics has its hazards, starting with the fact that most people over 35 will need some help with the concept. But just as Linux lets tech-savvy users avoid Microsoft and design their own operating systems, so "netroots" political organizers may succeed in redesigning our current nominating system. But there probably won't be much that's organized about it. By definition, the Internet strips big shots of their control of the process, which is a good thing. Politics is at its most invigorating when it's cacophonous and chaotic.
To begin busting up the dumb system we have for selecting presidents, a bipartisan group will open shop this week at Unity08.com. This Internet-based third party is spearheaded by three veterans of the antique 1976 campaign: Democrats Hamilton Jordan and Gerald Rafshoon helped get Jimmy Carter elected; Republican Doug Bailey did media for Gerald Ford before launching the political tip sheet Hotline. They are joined by the independent former governor of Maine, Angus King, and a collection of idealistic young people who are also tired of a nominating process that pulls the major party candidates to the extremes. Their hope: to get even a fraction of the 50 million who voted for the next American Idol to nominate a third-party candidate for president online and use this new army to get him or her on the ballot in all 50 states. The idea is to go viral—or die. "The worst thing that could happen would be for a bunch of old white guys like us to run this," Jordan says.
The Unity08 plan is for an online third-party convention in mid-2008, following the early primaries. Any registered voter could be a delegate; their identities would be confirmed by cross-referencing with voter registration rolls (which would also prevent people from casting more than one ballot). That would likely include a much larger number than the few thousand primary voters who all but nominate the major party candidates in Iowa and New Hampshire. This virtual process will vote on a centrist platform and nominate a bipartisan ticket. The idea is that even if the third-party nominee didn't win, he would wield serious power in the '08 election, which will likely be close.
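Unity08 never published the mechanics of that cross-referencing, but the idea reduces to two checks: is this person on a registration roll, and has that registered voter already cast a ballot? A hypothetical Python sketch follows; the record format and names are invented, not Unity08's actual process.

# An illustrative sketch of delegate verification: confirm each would-be
# delegate against a voter-registration roll and allow one ballot per voter.
# The record format and names are invented; Unity08's real process is unknown.
voter_roll = {
    # (normalized name, date of birth, ZIP code) -> voter id
    ("jane doe", "1960-05-14", "04101"): "ME-0012345",
}
ballots_cast = set()

def cast_ballot(name, dob, zip_code, choice):
    key = (name.strip().lower(), dob, zip_code)
    voter_id = voter_roll.get(key)
    if voter_id is None:
        return "rejected: not found on the registration rolls"
    if voter_id in ballots_cast:
        return "rejected: this voter has already cast a ballot"
    ballots_cast.add(voter_id)
    return "accepted: ballot recorded for " + voter_id

print(cast_ballot("Jane Doe", "1960-05-14", "04101", "centrist ticket"))
print(cast_ballot("Jane Doe", "1960-05-14", "04101", "centrist ticket"))  # blocked the second time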
There are plenty of ways for this process to prove meaningless, starting with the major parties deciding to nominate independent-minded candidates like John McCain (OK, the old McCain) or Mark Warner. Third-party efforts have usually been candidate-driven, and the centrist names tossed around by way of example (Chuck Hagel, Sam Nunn, Tom Kean) don't have much marquee value in the blogosphere. And the organizers would have to design safeguards to keep the whole thing from being hijacked.
But funny things happen in election years. With an issue as eye-glazing as the deficit, a wacky, jug-eared Texan named Ross Perot received 19 percent of the vote in 1992 and 8 percent in 1996. He did it with "Larry King Live" and an 800 number. In a country where more than 40 percent of voters now self-identify as independents, it's no longer a question of whether the Internet will revolutionize American politics, but when.
(For more, go to JonathanAlter.com)
4. Beyond the Open-Source Hype
By Caroline Benner
Across the globe, politicians are embracing open-source software with grand pronouncements and great expectations. Although they are correct to identify potential benefits, software is far more complicated than their talking points, and it may disappoint those with outsized hopes.
Governments around the world are enchanted by open-source software. Unlike proprietary software, for which the code is kept secret, the open-source variety can be copied, modified, and shared. In 2003, Brazil, for instance, announced plans to move 80 percent of its state computers to the open-source operating system Linux. In 2003, Taiwan launched a “National Open Source Plan” to build a software industry that could replace the proprietary software in government and education. In France, the ministries of Defence, Culture, and Economy all use open-source operating systems. Japan, China, and South Korea are working together on an open-source alternative to Microsoft’s Windows operating system.
Those who believe open source is superior to proprietary software often tout its economic and strategic benefits. They believe, for example, that the total cost of ownership of open-source software is lower than that of proprietary software because it avoids the expensive licensing fees that companies like Microsoft charge. In 2002, Finland estimated that it could save 26 million euros a year by having state agencies switch to Linux. Open-source advocates also believe the software has technical advantages over proprietary alternatives. It is more secure than its proprietary counterpart, they say, because the open-source development process produces better software.
And governments tend not to like software they can’t audit for trapdoors that would allow an outsider access. Many countries also argue that open source is better than proprietary software at adapting to local needs, because you can change the behavior of the program by changing its code. For poor nations with tiny budgets to spend on foreign-produced information technology and little infrastructure to create their own, open source looks like an attractive way to gain access to the information age.
Trouble is, the benefits of open source are not always so clear-cut. Software is too complicated a creation to be captured in rhetoric, and assertions about some of the technical benefits of open source fail to tell the whole story.
Consider the issue of security. In a 2002 letter to Microsoft, Peruvian Congressman Edgar David Villanueva Núñez noted that, “Relative to the security of the software itself, it is well known that all software (whether proprietary or free) contains ‘errors’ or ‘bugs’ (in programmers’ slang). But it is also well-known that the bugs in free software are fewer.” Yet, ask computer security experts and they’ll tell you that’s not necessarily true. Software, with its millions of lines of code, is so complicated that experts don’t know for sure that open source has fewer bugs, nor can they say with certainty that having fewer bugs makes open source more secure. “There are really two reasons that it is very difficult to know whether software is secure,” says Stanford University computer scientist Alex Aiken. “The first reason is that even the simplest software program consists of hundreds of thousands to millions of parts, and potentially all of these have to be correct, or the system may have security vulnerabilities. The second reason is that we have no technology for systematically checking that the parts are correct and fit together in a way that ensures security.”
The Chinese have a preference for open source because they distrust software that cannot be audited, a concern that became especially acute after the discovery of the phrase “_NSAKEY” (thought to refer to the National Security Agency) in the code of Microsoft’s Windows software in 1999. But auditing any source code in order to ensure there are no security vulnerabilities is nigh on impossible. Figuring that governments nevertheless prefer seeing source code to not seeing it, Microsoft has sought to allay worries over trapdoors by allowing governments to peruse its code.
Politicians, meanwhile, enjoy the notion that open source can be adapted by their people to better address local needs. Open source “has the potential of empowering people in ways proprietary software does not allow. It offers users the choice to … customize the software,” South Africa concluded in its 2003 proposed strategy on the use of open source in its government. It’s true that access to source code offers the most flexibility in making a piece of software behave differently. For instance, when a piece of software is not offered in a particular language, programmers can alter open-source code to translate it. However, it is misleading to say that open source empowers people in ways proprietary software does not. Both open source and proprietary software allow you to change the behavior of a software program in significant ways without touching the program’s source code. The truth is that software authors, whether they work for a large software firm or no one at all, want users to adapt their product to specific locations and needs. Microsoft makes a living out of making its software customizable while still closely guarding its source code.
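Benner's point that behavior can change without touching source code is easy to see in practice with localization: most programs, open or closed, load translations from external message catalogs at runtime. Below is a minimal sketch using Python's standard gettext module; the "myapp" domain and "locale" directory are hypothetical placeholders.

# A minimal sketch: the program's strings are translated by dropping compiled
# .mo catalogs into locale/<lang>/LC_MESSAGES/myapp.mo -- no source changes.
# The "myapp" domain and "locale" directory are hypothetical.
import gettext

def greet(lang):
    translation = gettext.translation(
        "myapp", localedir="locale", languages=[lang], fallback=True
    )
    _ = translation.gettext
    print(_("Welcome to the information age."))

greet("en")  # falls back to the original English string
greet("pt")  # prints the Portuguese version if a catalog has been installed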
Furthermore, software is so complex that serious source code manipulation and maintenance is a high-cost endeavor, not a job one can plunge right into. It is a task for a large group of highly skilled people with a lot of time—people who, more and more, are being funded by deep pockets, including, yes, U.S. technology corporations such as IBM and Hewlett Packard. Software becomes more interesting—indeed, rhetoric-worthy—when it promises a better future. Open source may well deliver that promise, but computer science is too young a discipline, and there is too much we do not yet know about software to be so sure. Governments may be wise to choose open source. They just shouldn’t count on it to do much more than what software does best: process the data of the information age.
(Caroline Benner is a fellow at the University of Washington’s Institute for International Policy. From 2001 to 2003, Ms. Benner was a consultant with the geopolitical policy and strategy group at Microsoft.)
5. Open source ubuntu -- by Becky Hogge
Despite its world-saving image, open-source software has not made much real revolution. But Becky Hogge finds hope in new software "for human beings", designed to bridge the digital divide.
"You know when ubuntu is there, and it is obvious when it is absent. It has to do with what it means to be truly human, to know that you are bound up with others in the bundle of life." – Archbishop Desmond Tutu, God Has A Dream
I have lost count of the many seminars, conferences and talks I've attended recently where that magic phrase "the-open-source-operating-system-Linux" has resounded. Whether uttered by a New Labour policy wonk or a Polish art historian it has the same effect: the crowd of academics, bloggers and civil servants goes gooey, basking smugly in the image of thousands of bearded geeks quietly subverting the capitalist beast from the comfort of their bedrooms, chewing caffeine gum and trotting out code that rivals Microsoft's.
But it's a bit more complicated than that. Linux comes in all kinds of different flavours, called distributions or "distros", and with each distro the code-base, licensing terms, support model, philosophy and community or company organisation varies. Understand these differences, and the utopian prism through which the average non-geek views open source software shifts.
Most non-programmers only talk about Linux; they don't run it. Scanning any of these pseudo-tech conferences, I'll find most delegates liveblogging on Apple Mac laptops, the nice-guy alternative to Microsoft. But this non-programmer did run Linux, thanks in large part to my wonderful, if slightly militant, live-in system administrator. Not only did I run Linux, I ran the Debian distribution of Linux – a distro so pure in the eyes of most geeks that I attracted admiration from all who knew.
Debian is a community-based distro and, of all Linux distributions, it conforms most closely to the idealist stereotype of Linux. Debian holds elections for project leaders, and its code is generally considered to be the most reliable, secure code available. But Debian made me cry. Away from the admiring glances of my fellow tech-commentators, if my live-in system admin had gone out for the evening and I wanted to install a printer, listen to online radio, or upload holiday photos, you would more often than not find me a good forty-five minutes later, staring blankly at the screen with watery eyes, or languishing in a puddle of my own ineptitude playing solitaire (which is much, much better on Linux).
Technology is no good if people can't use it. And by people I don't just mean me (please, reader, dry your eyes for my techno-foolishness). I mean those in the developing world, priced out of running the latest Microsoft or Apple software. Linux offers a real opportunity for developing nations – not only because it is free to own but also because it can run on hardware that the developed world would otherwise send to landfills. But if you need a degree in computer science to use it, then barriers to access are just as real.
Enter Ubuntu Linux, founded in 2004 by South African Mark Shuttleworth with the goal of "Linux for human beings" – a usable Linux for desktop computing by non-geeks. Ubuntu is a traditional African concept describing the humanising quality of people's relationships with one another – "I am because we are" – which fits very nicely with concepts of sharing and open source. The Ubuntu distribution is based largely on Debian code, but with a strong focus on usability, and with predictable support and release patterns. And it works – Ubuntu has been the most popular Linux distro since 2005, and since I made the switch last year those tearful evenings in front of the computer screen have become a distant memory.
Shuttleworth, the self-styled "first African in space" and an early dotcom boomer, ploughed profits from the 1999 sale of his website security firm to VeriSign into the Shuttleworth Foundation, a non-profit organisation that supports education and social innovation in Africa. The Shuttleworth Foundation has funded some excellent projects – most notably the freedom toaster, a free vending machine of open source software and other digital material. The freedom toaster solves another perennial problem associated with open source software in the developing world: most open source applications need to be downloaded from the internet. This is bad news for anybody without access to a fast broadband connection (which includes most of Africa). But all the freedom toaster requires is an electrical socket, and it will happily churn out custom-made CDs of Linux distros, open source applications and Wikipedia collections.
The newest mission for the Shuttleworth Foundation is to design a ten-year curriculum in computer programming to be used in South African schools, and it has attracted some big guns in coding. Rumour has it that Guido van Rossum, project leader – or, as the geeks would have it, "benevolent dictator for life" – of the object-orientated programming language Python, is hoping to work with Alan Kay, a founding thinker behind object-orientated programming (which proponents claim is easier for coding newbies to pick up) and co-developer of the hundred-dollar laptop, on an educational programming environment to contribute to the mission.
But Ubuntu itself isn't part of the Shuttleworth Foundation – it's supported by Canonical Ltd, a for-profit company owned by Shuttleworth. Unlike Debian, which is run by a fairly democratically organised community, Ubuntu programmers are often hired and paid by Shuttleworth. And on 1 June, Ubuntu will release its first "enterprise" distro – Ubuntu Dapper Drake. The enterprise edition will come with longer support terms, which will make it attractive to corporations. Indeed Sun Microsystems, who made a reputation out of belittling Linux in favour of their Unix-based operating system Solaris, were making very positive noises about working with Ubuntu at a recent conference in San Francisco.
So maybe Ubuntu wasn't such an altruistic endeavour after all. But is the fact Shuttleworth might make a profit bad news for the thousands of people downloading the desktop version every day? It's hard to say. Ubuntu could quite easily stay true to its first goal of Linux for human beings, providing usable free software for non-geeks in the developing and developed world while also making a tidy profit from support contracts with commercial companies on the side. On the face of it, this looks like the perfect social enterprise. To achieve it, however, Shuttleworth will have to keep the Debian community on side, since – beyond the usability question – they still provide most of the code behind the Ubuntu distro.
Without Shuttleworth's entrepreneurial flair, would such a popular, usable version of Linux have been possible? We'll never know, but it is fair to say that the Debian community weren't known for their interest in the capabilities of those less technically-literate than them. Whatever happens to Ubuntu, it is clear that there is a lot more to that "open-source-operating-system-Linux" than first meets the eye.
3 Comments:
Dude, you can't just steal Lanier's whole fucking essay and post it on your site.
Dude, I don't mean to steal anything; all I do is post the most interesting things I read every day on my site so others can read them, too.
The wish is that if someone is interested in any writer, such as Lanier, they'll google him or her and learn more about the writer, maybe buy a book or two by that writer, or follow his column if it's a journalist, etc.
I'm not stealing; I'm disseminating, trying to make the people and stuff I like more widely known. This is how the internet works, and why it's called viral.
Adam
sure, but why don't you put a single link then?