Adam Ash

Your daily entertainment scout. Whatever is happening out there, you'll find the best writing about it in here.

Wednesday, April 04, 2007

Rewiring your brain - and reconnecting with it

1. Rewiring the Brain
Neuroplasticity can allow for treatment of senility, post-traumatic stress, obsessive-compulsive disorder, and depression—and Buddhists have been capitalizing on it for millennia.
by Matthew Blakeslee/Discover Magazine


If old dogs haven’t been able to learn new tricks, maybe that’s because no one has known how to teach them properly. Until quite recently orthodox neuroscience held that only the brains of young children are resilient, malleable, and morphable—in a word, plastic. This neuroplasticity, as it is called, seems to fade steadily as the brain congeals into its fixed adult configuration. Infants can sustain massive brain damage, up to the loss of an entire cerebral hemisphere, and still develop into nearly normal adults; any adult who loses half the brain, by contrast, is a goner. Adults can’t learn to speak new languages without an accent, can’t take up piano in their fifties then go on to play Carnegie Hall, and often suffer strokes that lead to permanent paralysis or cognitive deficiencies. The mature brain, scientists concluded, can only decline.

It turns out this theory is not just wrong, it is spectacularly wrong. Two new books, Train Your Mind, Change Your Brain (Ballantine Books, $24.95) by science journalist Sharon Begley and The Brain That Changes Itself (Viking, $24.95) by psychiatrist Norman Doidge, offer masterfully guided tours through the burgeoning field of neuroplasticity research. Each has its own style and emphasis; both are excellent.

Both authors present more or less the same historical background, recounting landmark experiments by a small constellation of neuroscientists who doggedly championed the idea of adult neuroplasticity through its wilderness years, from the 1960s through mid-1990s. Both also describe how mainstream neuroscience, to its chagrin as well as its delight, is finally warming to the idea that much of the neural dynamism in the childhood brain remains active all through life (it just needs a little help to manifest fully). Finally, both authors conclude that adult neuroplasticity is a vastly undertapped resource, one with which Western medicine and psychology are just now coming to grips. An important emerging research agenda is to figure out ways to direct and maximize this brain repair and reorganization.

Those permanently paralyzed stroke patients? Quite a few are now recovering far more function than conventional physical therapy allows, thanks to new rehabilitation programs that capitalize on neuroplasticity. Other breakthroughs include treatments for learning disorders like dyslexia, mental training programs that can halt and even reverse senility, and promising treatments for post-traumatic stress, obsessive-compulsive disorder, depression, and other notorious ills of the human mind. Some of the new techniques are so low-tech that they could have been employed by physicians of antiquity, if only they had known about them.

In fact, great breakthroughs in applied neuroplasticity were apparently made long ago by practitioners of Tibetan Buddhism. The scientific investigation of Buddhist meditation, one of the most fascinating areas of neuroplasticity research today, is the focus and polestar of Begley’s book. In elegant and lucid prose, she recounts the story of a remarkable collaboration forged just a few years ago between Western neuroscientists and senior Tibetan Buddhist monks. Their spiritual leader, the Dalai Lama, who brokered much of the discourse, comes across as remarkably astute and friendly to science.

This unusual syncretic exercise between the academy and a major world religion may rouse suspicion among hard-nosed skeptics, but an open mind here will be rewarded. The research does not concern metaphysical claims regarding reincarnation and karma; rather, it involves measurable, replicable effects of Buddhist meditation practices on the mind and brain. This rigorous mental training drives neuroplasticity in ways that awe many of the scientists studying it. Brain scans reveal that the neural activity of highly trained monks is off the charts, relative to meditation novices, in circuits that involve maternal love (caudate), empathy (right insula), and feelings of joy and happiness (left prefrontal cortex). Even when these monks are not meditating, their brains bear the imprints of their psychic workouts. The latter two structures, for instance, are anatomically enlarged. Based on results like these, Begley holds out hope that our emotional lives and personalities, far from being carved in stone by our genes and early experiences, will prove as sculptable through mental training as our bodies are through physical training.

Doidge’s book covers more territory at the expense of tighter focus. This is not a bad thing, since the end result is a solid survey of one of neuroscience’s hottest areas. A practicing psychiatrist, Doidge chronicles not only the science and theory of neuroplasticity, but also his conversations with diverse patients—dealing with all manner of emotional and neurological afflictions—who are directly benefiting from the science. He speaks with researchers, too, about their moments of inspiration and insight. Along with eminently clear accounts of the relevant concepts and experiments, he gives well-turned descriptions of personalities and in-the-moment reactions. This wider sampling and more intimate depiction makes for an appealing read.

If either book can be faulted, it would be for the occasional whiff of overstatement. At times each author seems to say that our brains’ potential for transformation is essentially infinite, rather than merely astonishing and paradigm busting. But these excesses of enthusiasm are understandable, given the present-day backdrop. Science, like any other human endeavor, is susceptible to trends and pendulous swings of groupthink. The current vogue is for “neurogenetic determinism,” the view that your genes and subconscious are the true, essential shapers of who you are and how you think and behave; the conscious mind is little more than a self-important figurehead along for the ride. Begley and Doidge wade against this current with a strong message of hope: By recognizing neuroplasticity as a real and powerful force, we can tilt our theories of mind back into a realm where choice and free will are meaningful concepts, and where radical improvement to the human condition is possible using the right, scientifically proven techniques. The wonderful thing is, the hope they offer is not of the blind variety. There are solid, empirical reasons to think they may be right.


2. The Thinking Machine
Jeff Hawkins created the Palm Pilot and the Treo. Now he says he’s got the ultimate invention: software that mimics the human brain.
By Evan Ratliff/Wired


“When you are born, you know nothing.”

This is the kind of statement you expect to hear from a philosophy professor, not a Silicon Valley executive with a new company to pitch and money to make. Yet Jeff Hawkins drops this epistemological axiom while sitting at a coffee shop downstairs from his latest startup. A tall, rangy man who is almost implausibly cheerful, Hawkins created the Palm and Treo handhelds and cofounded Palm Computing and Handspring. His is the consummate high-tech success story, the brilliant, driven engineer who beat the critics to make it big. Now he’s about to unveil his entrepreneurial third act: a company called Numenta. But what Hawkins, 49, really wants to talk about — in fact, what he has really wanted to talk about for the past 30 years — isn’t gadgets or source code or market niches. It’s the human brain. Your brain. And today, the most important thing he wants you to know is that, at birth, your brain is completely clueless.

After a pause, he corrects himself. “You know a few basic things, like how to poop.” His point, though, is that your brain starts out with no actual understanding or even representation of the world or its objects. “You don’t know anything about tables and language and buildings and cars and computers,” he says, sweeping his hand to represent the world at large. “The brain has to, on its own, discover that these things are out there. To me,” he adds, “that’s a fascinating idea.”

It’s this fascination with the human mind that drove Hawkins, in the flush of his success with Palm, to create the nonprofit Redwood Neuroscience Institute and hire top neuroscientists to pursue a grand unifying theory of cognition. It drove him to write On Intelligence, the 2004 book outlining his theory of how the brain works. And it has driven him to what has been his intended destination all along: Numenta. Here, with longtime business partner Donna Dubinsky and 12 engineers, Hawkins has created an artificial intelligence program that he believes is the first software truly based on the principles of the human brain. Like your brain, the software is born knowing nothing. And like your brain, it learns from what it senses, builds a model of the world, and then makes predictions based on that model. The result, Hawkins says, is a thinking machine that will solve problems that humans find trivial but that have long confounded our computers — including, say, sight and robot locomotion.

Hawkins believes that his program, combined with the ever-faster computational power of digital processors, will also be able to solve massively complex problems by treating them just as an infant’s brain treats the world: as a stream of new sensory data to interpret. Feed information from an electrical power network into Numenta’s system and it builds its own virtual model of how that network operates. And just as a child learns that a glass dropped on concrete will break, the system learns to predict how that network will fail. In a few years, Hawkins boasts, such systems could capture the subtleties of everything from the stock market to the weather in a way that computers now can’t.

Numenta is close to issuing a “research release” of its platform, which has three main components: the core problem-solving engine, which works sort of like an operating system based on Hawkins’ theory of the cortex; a set of open source software tools; and the code for the learning algorithms themselves, which users can alter as long as they make their creations available to others. Numenta will earn its money by owning and licensing the basic platform, and Hawkins hopes a new industry will grow up around it, with companies customizing and reselling the intelligence in unexpected and dazzling ways. To Hawkins, the idea that we’re born knowing nothing leads to a technology that will be vastly more important than his Palm or Treo — and perhaps as lucrative.

But wait, your no-longer-clueless brain is warning you, doesn’t this sound familiar? Indeed, Hawkins joins a long line of thinkers claiming to have unlocked the secrets of the mind and coded them into machines. So thoroughly have such efforts failed that AI researchers have largely given up the quest for the kind of general, humanlike intelligence that Hawkins describes. “There have been all those others,” he acknowledges, “the Decade of the Brain, the 5th Generation Computing Project in Japan, fuzzy logic, neural networks, all flavors of AI. Is this just another shot in the dark?” He lets the question hang for a moment. “No,” he says. “It’s quite different, and I can explain why.”

Jeff Hawkins grew up on Long Island, the son of a ceaseless inventor. While working at a company called Sperry Gyroscope in the 1960s, Robert Hawkins created the Sceptron, a device that could be used to (among other things) decode the noises of marine animals. It landed him on the cover of Weekly Reader magazine. “I was in the third grade, and there was my dad standing by this pool holding a microphone,” Hawkins recalls. “And a dolphin is sticking its nose out of the water, speaking into it.”

As a teenager, Hawkins became intrigued by the mysteries of human intelligence. But as he recounts in On Intelligence, cowritten with New York Times reporter Sandra Blakeslee, for 25 years he pursued, as an amateur, his dream to develop a theory of how the brain works and to create a machine that can mimic it. Rejected from graduate school at MIT, where he had hoped to enter the AI lab, he enrolled in the biophysics PhD program at UC Berkeley in the mid-1980s, only to drop out after the school refused to let him forgo lab work to pursue his own theories.

Instead, Hawkins found success in business at Intel, at Grid Computing, and eventually at Palm and then Handspring. But all along, Hawkins says, his ultimate goal was to generate the resources to pursue his neuroscience research. Even while raising the first investments for Palm, he says, “I had to tell people, ‘I really want to work on brains.’” In 2002, he finally was able to focus on brain work. He founded the Redwood Neuroscience Institute, a small think tank that’s now part of UC Berkeley, and settled in to write his book.

It was while he was in his PhD program that Hawkins stumbled upon the central premise of On Intelligence: that prediction is the fundamental component of intelligence. In a flash of insight he had while wondering how he would react if a blue coffee cup suddenly appeared on his desk, he realized that the brain is not only constantly absorbing and storing information about its surroundings — the objects in a room, its sounds, brightness, temperature — but also making predictions about what it will encounter next.

On Intelligence elucidates this intelligence-as-prediction function, which Hawkins says derives almost entirely from the cortex, a portion of the brain that’s basically layers of neurons stacked on top of one another. In a highly simplified sense, the cortex acquires information (about letters on this page, for example) from our senses in fractional amounts, through a large number of neurons at the lowest level. Those inputs are fed upward to a higher layer of neurons, which make wider interpretations (about the patterns those letters form as words) and are then passed higher up the pyramid. Simultaneously, these interpretations travel back down, helping the lower-level neurons predict what they are about to experience next. Eventually, the cortex decodes the sentence you are seeing and the article you are reading.
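
The letters-to-words loop described here can be caricatured in a few lines of code. The sketch below is a hypothetical illustration only, not anything from Numenta or the book (KNOWN_WORDS, word_layer, and letter_layer are invented names): a higher layer that knows whole words feeds a prediction of the next letter back down to a lower layer that reads one letter at a time.

```python
# Toy sketch of the cortical up/down flow described above. Purely
# illustrative, not Numenta's code: the higher layer knows whole
# words, the lower layer sees single letters.

KNOWN_WORDS = ["brain", "cortex", "neuron"]   # the word layer's memory

def word_layer(prefix):
    """Higher level: interpret the letters seen so far as a word."""
    matches = [w for w in KNOWN_WORDS if w.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None

def letter_layer(stream):
    """Lower level: read letters one at a time while the layer above
    sends back a prediction of the letter it expects next."""
    seen = ""
    for letter in stream:
        seen += letter
        word = word_layer(seen)           # interpretation travels up
        if word and len(seen) < len(word):
            predicted = word[len(seen)]   # prediction travels down
            print(f"saw {seen!r}, expecting {predicted!r} next ({word})")

letter_layer("cort")  # predicts 'o', then 'r', then 't', then 'e'
```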

Considering it came from an outsider, On Intelligence received surprising accolades from neuroscientists. What critics there were argued not that the book was wrong but that it rehashed old research. “Still,” says Michael Merzenich, a neuroscientist at UC San Francisco, “no one has expressed it in such a cogent way. Hawkins is damn clever.”

As Hawkins was writing On Intelligence, an electrical engineering graduate student named Dileep George was working part-time at the Redwood Neuroscience Institute, looking for a PhD topic involving the brain. He heard Hawkins lecture about the cortex and immediately thought that he might be able to re-create its processes in software.

George built his original demonstration program, a basic representation of the process used in the human visual cortex, over several weekends. Most modeling programs are linear; they process data and make calculations in one direction. But George designed multiple, parallel layers of nodes — each representing thousands of neurons in cortical columns and each a small program with its own ability to process information, remember patterns, and make predictions.

George and Hawkins called the new technology hierarchical temporal memory, or HTM. An HTM consists of a pyramid of nodes, each encoded with a set of statistical formulas. The whole HTM is pointed at a data set, and the nodes create representations of the world the data describes — whether a series of pictures or the temperature fluctuations of a river. The temporal label reflects the fact that in order to learn, an HTM has to be fed information with a time component — say, pictures moving across a screen or temperatures rising and falling over a week. Just as with the brain, the easiest way for an HTM to learn to identify an object is by recognizing that its elements — the four legs of a dog, the lines of a letter in the alphabet — are consistently found in similar arrangements. Other than that, an HTM is agnostic; it can form a model of just about any set of data it’s exposed to. And, just as your cortex can combine sound with vision to confirm that you are seeing a dog instead of a fox, HTMs can also be hooked together. Most important, Hawkins says, an HTM can do what humans start doing from birth but that computers never have: not just learn, but generalize.
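
Going only on that description, a single node might be sketched as below. This is a minimal, hypothetical toy (real HTM nodes use statistical formulas rather than exact matching): it memorizes the spatial patterns it sees, tracks which pattern tends to follow which in time, and passes a pattern index up the pyramid to its parent.

```python
# Minimal, hypothetical sketch of an HTM-style node. Numenta's nodes
# use statistical formulas; this toy uses exact matching just to show
# the shape of the idea.

class Node:
    def __init__(self):
        self.patterns = []    # spatial patterns memorized so far
        self.followers = {}   # pattern index -> index seen next in time
        self.last = None      # index of the most recent pattern

    def observe(self, pattern):
        """Memorize the pattern, note what it followed, report upward."""
        if pattern not in self.patterns:
            self.patterns.append(pattern)
        idx = self.patterns.index(pattern)
        if self.last is not None:
            self.followers[self.last] = idx   # temporal transition
        self.last = idx
        return idx            # what this node passes up the pyramid

    def predict(self):
        """Guess the next pattern from temporal memory."""
        nxt = self.followers.get(self.last)
        return self.patterns[nxt] if nxt is not None else None

# An HTM must be fed data with a time component, so we feed a sequence:
node = Node()
for p in ["legs", "legs", "tail", "legs", "legs", "tail"]:
    node.observe(p)
print(node.predict())  # prints 'legs': after 'tail' it expects 'legs'
```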

At Numenta’s Menlo Park, California, offices one afternoon this winter, George showed off the latest version of his original picture-recognition demo. He had trained the HTM by feeding it a series of simple black-and-white pictures — dogs, coffee cups, helicopters — classified for the HTM into 91 categories and shown zigzagging all over the screen in randomly chosen directions. The nodes at the bottom level of the HTM sense a small fraction of each image, a four- by four-pixel patch that the node might assess as a single line or curve. That information is passed to the second-level node, which combines it with the output of other first-level nodes and calculates a probability, based on what it has seen before, that it is seeing a cockpit or a chopper blade. The highest-level node combines these predictions and then, like a helpful parent, tells the lower-level nodes what they’re seeing: a helicopter. The lower-level nodes then know, for example, that the fuzzy things they can’t quite make out are landing skids and that the next thing they see is more apt to be a rear rotor than a tennis racket handle.
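
Under heavy simplification, that bottom-up pass might look like the sketch below. It is a hypothetical stand-in, with naive patch voting where the real nodes calculate probabilities; the names patches and classify are inventions for illustration.

```python
# Hypothetical sketch of the demo's bottom-up pass. Each bottom-level
# "node" (one 4 x 4 patch of the image) votes for every category whose
# training images contain its patch; the top-level node tallies votes.

from collections import Counter

def patches(image, size=4):
    """Split an image (a list of equal-length pixel rows) into patches."""
    for r in range(0, len(image) - size + 1, size):
        for c in range(0, len(image[0]) - size + 1, size):
            yield tuple(tuple(row[c:c + size]) for row in image[r:r + size])

def classify(image, training):
    """training maps category name -> list of example images."""
    votes = Counter()
    for patch in patches(image):                        # bottom level
        for category, examples in training.items():
            if any(patch in set(patches(ex)) for ex in examples):
                votes[category] += 1                    # passed upward
    return votes.most_common(1)[0][0] if votes else None
```

The “helpful parent” step, broadcasting the winning label back down so the bottom nodes can reinterpret their fuzzy patches, is the half of the loop this sketch leaves out.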

On his laptop, George had several pictures the HTM had never seen before, images of highly distorted helicopters oriented in various directions. To human eyes, each was still easily recognizable. Computers, however, haven’t traditionally been able to handle such deviations from what they’ve been programmed to detect, which is why spambots are foiled by strings of fuzzy letters that humans easily type in. George clicked on a picture, and after a few seconds the program spit out the correct identification: helicopter. It also cleaned up the image, just as our visual cortex does when it turns the messy data arriving from our retinas into clear images in our mind. The HTM even seems to handle optical illusions much like the human cortex. When George showed his HTM a capital A without its central horizontal line, the software filled in the missing information, just as our brains would.

George’s results with images are impressive. But the challenge facing any machine intelligence is to expand such small-scale experiments to massively complex problems like complete visual scenes (say, a helicopter rescuing someone on top of a building) or the chaotic dynamics of weather. Tomaso Poggio, a computational neuroscientist at MIT’s McGovern Institute for Brain Research, says he’s intrigued by the theory but that “it would be nice to see a demonstration on more challenging problems.”

Similar criticism was aimed at the last promising AI technology supposedly based on the brain: neural networks. That technology rose to prominence in the 1980s. But despite some successes in pattern recognition, it never scaled to more complex problems. Hawkins argues that such networks have traditionally lacked “neuro-realism”: Although they use the basic principle of interconnected neurons, they don’t employ the information-processing hierarchy used by the cortex. Whereas HTMs continually pass information up and down a hierarchy, from large collections of nodes at the bottom to a few at the top and back down again, neural networks typically send information through their layers of nodes in one direction — and if they send information in both directions, it’s often just to train the system. In other words, while HTMs attempt to mimic the way the brain learns — for instance, by recognizing that the common elements of a car occur together — neural networks use static input, which prevents prediction.
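
That last claim, that static input prevents prediction, is easy to see in miniature. In this hypothetical contrast (neither system’s real code), the feedforward pass consumes one input and retains nothing, so it has no basis for guessing what comes next; a node that keeps even the simplest temporal state does.

```python
# Toy contrast, not either system's real code: a one-way pass over a
# static input versus a node that remembers the previous input.

def feedforward(x, weights):
    """Classic one-way pass: the output depends only on the current
    input, so there is nothing here that could predict the next one."""
    return sum(w * xi for w, xi in zip(weights, x))

class TemporalNode:
    """Keeps the previous input, so it can learn transitions in time."""
    def __init__(self):
        self.next_after, self.last = {}, None

    def step(self, x):
        if self.last is not None:
            self.next_after[self.last] = x    # learn the transition
        self.last = x
        return self.next_after.get(x)         # prediction, or None

node = TemporalNode()
for frame in ["wheel", "door", "wheel"]:
    guess = node.step(frame)
print(guess)  # prints 'door': having seen wheel -> door, it expects it again
```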

Hawkins is relying on his own fidelity to the brain to overcome the scale problems of the past. “If you believe this is the mechanism that’s actually being used in the cortex — which I do,” he says, “then we know it can scale to a certain size because the cortex does. Now, I haven’t proven that. The proof comes in, you know, doing it.”

Unlike most startups, Numenta has no marketing department, nor even a discernible strategy for recruiting customers. But who needs marketing when you are deluged by daily emails — many from researchers, engineers, and executives who have read On Intelligence — asking for your technology?

Already, Numenta is in discussion with automakers who want to use HTMs in smart cars. Analyzing the data from a host of cameras and sensors inside and outside the car, the system would do what a human passenger can do if a driver’s eyelids droop or a car drifts from its lane: realize the driver is too drowsy and sound a warning.

Numenta is also working with Edsa Micro, a company that designs software to monitor power supplies for operations like offshore oil platforms and air-traffic control. The firm currently models a power system down to the smallest detail, ensuring it can continue operating during contingencies like power spikes and explosions. Edsa’s software also collects data from thousands of temperature, voltage, current, and other sensors. If that information could be analyzed in real time, it could signal potential power failures.

That kind of analysis is what the company expects Numenta’s software will do, and Edsa is now setting up customized HTMs for each of the electrical system’s “senses.” Engineers program the bottom-level nodes to accept information about the power system. After that, the HTM is fed Edsa’s historical sensory data, representing the electrical system’s normal state of affairs. When the system goes live, most likely in about a year, Edsa hopes it will be able to generalize the sensory data into an understanding of whether an electrical network is running smoothly or is overtaxed. If the latter, an HTM might send out a signal: Explosion risk high. “We’ve seen some incredible speed improvements,” says Adib Nasle, Edsa’s president, about the work done so far. “Some approaches, you give too many examples and they get dumber. HTM seems not to suffer from that. It’s pretty impressive.”
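
The training-then-monitoring flow described here can be miniaturized as below. Everything in the sketch is invented for illustration (the function names, the bucketing scheme, the voltage numbers); Edsa’s actual system is far richer, but the shape is the same: learn the transitions that count as normal from historical data, then flag live transitions that were never seen.

```python
# Hypothetical miniature of the Edsa-style setup described above, not
# their product: learn normal sensor transitions from history, then
# flag live readings whose transitions never appeared in training.

def bucket(reading, step=10):
    """Coarsen a raw sensor value so similar readings share a pattern."""
    return round(reading / step)

def train(history):
    """Collect the bucketed transitions that represent normal operation."""
    return {(bucket(a), bucket(b)) for a, b in zip(history, history[1:])}

def monitor(live, normal):
    """Warn on any live transition absent from the learned normal set."""
    for a, b in zip(live, live[1:]):
        if (bucket(a), bucket(b)) not in normal:
            print(f"warning: unfamiliar jump {a} -> {b}")

history = [230, 231, 229, 232, 230, 231]   # volts, normal operation
monitor([230, 231, 310], train(history))   # 231 -> 310 triggers a warning
```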

The graveyard of AI, of course, is littered with impressive technologies that died on the development table. One can’t help thinking of the Sceptron and of Hawkins’ father holding its microphone up to a dolphin’s snout. In 1964, Sceptron makers promised it would be “a self-programming, small-size, and potentially inexpensive device operating in real time to recognize complex frequency patterns” — a device that could someday be used to translate the language of dolphins.

Asked whether, given the fate of past AI promises, he has even the smallest doubt about the theories behind Numenta, Hawkins is unflinching. “Is the platform that we are shipping going to be around in 10 years? Probably not. Will the ideas be around? Absolutely.” He breaks into a wide smile. “The core principle, the hierarchical temporal memory component, I cannot imagine being wrong.”

[Contributing editor Evan Ratliff (www.atavistic.org) wrote about machine translation in issue 14.12.]

How Numenta’s Software IDs a Chopper
Scan and match:
1) The system is shown a poor-quality image of a helicopter moving across a screen. It’s read by low-level nodes that each see a 4 x 4-pixel section of the image.
2) The low-level nodes pass the pattern they see up to the next level.
3) Intermediate nodes aggregate input from the low-level nodes to form shapes.
4) The top-level node compares the shapes against a library of objects and selects the best match.
Predict and refine:
5) That info is passed back down to the intermediate-level nodes so they can better predict what shape they’ll see next.
6) Data from higher-up nodes allows the bottom nodes to clean up the image by ignoring pixels that don’t match the expected pattern (indicated above by an X). This entire process repeats until the image is crisp.
—Greta Lorge
