Four high-tech advances that will blow your low-tech mind
1. In the Lab: Robots That Slink and Squirm
By John Schwartz/NY Times
MEDFORD, Mass. — The robot lies dissected on the black slab of a lab table, its silicone rubber exterior spread and flattened like a trophy snakeskin. Hair-thin wires run in a zigzag line along the inner length of its pale artificial flesh.
Barry Trimmer flicks a small switch and the wires contract, causing the silicone to bunch up; the skin crawls, so to speak.
So does mine.
“It’s very organic,” Dr. Trimmer says with a smile. Apparently, “organic” is a technical euphemism for “creepy.” But it is eerily lifelike, and that is the point.
Robots, once the stuff of science fiction, are everywhere. Robotic geologists are puttering around on Mars, and little Roombas suck up dirt in the breakfast nook. But most robots are made up of hard components and don’t much resemble the creatures that walk, crawl and squirm all around us.
At Tufts University, a multidisciplinary team of researchers wants to take a softer approach. The Biomimetic Technologies for Soft-bodied Robots project is trying to make an ersatz caterpillar that will move around in pretty much the same way as the real thing. The researchers see the potential to use the squishable, relatively simple creations to find land mines, repair machinery in hard-to-reach spots and even diagnose and treat diseases.
The project, part of a wave of interest in life-imitating robotics, is a collaboration of seven Tufts faculty members from five departments in the schools of engineering and arts and sciences. Dr. Trimmer comes from the field of neurobiology, where he has been studying the tobacco hornworm, Manduca sexta, since 1990. He has long been fascinated by the way that a seemingly simple creature like the hornworm (which, confusingly enough, is not a worm but a caterpillar) can twist its body in almost any direction and climb among the tree branches.
He pulls a box from his backpack, takes out one of the caterpillars and puts it on the back of my hand; the caterpillar, oddly hairless and the greenish-blue color of classic Crest toothpaste, has a slightly raspy feel to the grippers at the ends of its many legs. It rears up and twists to face backward. “How do you make a machine move with that kind of versatility and dexterity?” Dr. Trimmer asks.
The problem with motion in conventional robots is that hard joints don’t just allow movement; they also restrict it. “Each joint adds exponential problems of control,” he says. The multiple-jointed arm of the space shuttle, for example, can take cameras, equipment and astronauts to an astonishing array of positions, but the process of planning every movement is enormously complex.
Yet, the Tufts researchers point out, a caterpillar needs no postgraduate training before it begins to slink across a leaf. Remove the hornworm’s primitive brain, and it still trudges forward. So Dr. Trimmer suggests that much of the secret of locomotion is inherent in the muscles and the body. The hornworm has just 70 muscles per segment, and for the most part a single nerve controls each muscle. The researchers suspect that the wonderfully flexible locomotion of the caterpillar might emerge naturally from relatively simple rules.
The researchers are seeking a similar elegance in their creations. The initial creatures are hollow tubes. The “muscles” are wire springs made from shape-memory alloy. Electrical current heats the springs, causing them to constrict; once the current stops, the elastic skin stretches the wire back into its resting shape. “It’s almost childish, the simplicity of the design,” Dr. Trimmer says.
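The actuation cycle is simple enough to sketch in code. What follows is a toy simulation, not the Tufts team's software: the spring is reduced to a hypothetical first-order heating model, and every constant is invented for illustration.

```python
# Toy model of one shape-memory-alloy "muscle": current heats the spring,
# heat contracts it, and the elastic skin stretches it back as it cools.
# All constants are illustrative, not measured values.

HEATING_RATE = 0.25       # temperature gain per step while current flows
COOLING_RATE = 0.10       # fraction of heat lost passively per step
CONTRACT_THRESHOLD = 0.6  # normalized temperature at which the alloy contracts

def step(temp: float, powered: bool) -> float:
    """Advance the spring's normalized temperature by one time step."""
    if powered:
        temp += HEATING_RATE * (1.0 - temp)  # heat toward a maximum of 1.0
    return temp - COOLING_RATE * temp        # passive cooling

temp = 0.0
for t in range(40):
    powered = t < 10  # pulse current for the first 10 steps, then let it cool
    temp = step(temp, powered)
    state = "contracted" if temp > CONTRACT_THRESHOLD else "relaxed"
    print(f"t={t:2d}  temp={temp:.2f}  {state}")
```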
The skin is a silicone rubber that goes by the brand name Dragon Skin, and its composition can be manipulated so that it can be leathery-tough or so supple and clammy that it gives a sense of what it must be like to shake hands with Gollum. Eventually, the researchers hope to build on the work of David Kaplan, a Tufts professor of biomedical engineering who has pioneered the creation of tough, flexible materials based on spider silk so that the creatures would be largely biodegradable.
Aside from the dissected caterpillar on the table and bits of squirmy this and that, there is just one other completed model, and it is inert, having pulled a muscle. The research, which is financed by the W. M. Keck Foundation, “is very preliminary,” Dr. Trimmer admits.
The researchers have gotten a wave to propagate across a robot’s body; that wave picks up the feet in a way that already resembles the foot motion of a real caterpillar. By the end of the year, Dr. Trimmer said, they hope to have robots capable of full locomotion that emulates the action of the caterpillars. The puzzle of coming up with computer code to coordinate the movements, the researchers suggest, will be greatly accelerated by the rapid trial-and-error approach known in computing as genetic algorithms.
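Genetic algorithms themselves are a generic technique, and a minimal version fits in a short script. The sketch below is not the Tufts code: it evolves a vector of muscle-timing offsets against a made-up fitness function, where the real project would presumably score each candidate by how far a simulated robot travels.

```python
import random

# Minimal genetic algorithm: evolve five muscle-timing offsets toward an
# arbitrary "ideal" wave. The fitness function is a stand-in for scoring
# how far a simulated robot would travel with those timings.

TARGET = [0.1, 0.3, 0.5, 0.7, 0.9]  # pretend-ideal wave of activation times

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))  # higher is better

def mutate(genome, rate=0.1):
    return [min(1.0, max(0.0, g + random.gauss(0, rate))) for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(5)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # keep the fittest third
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(20)]
    population = survivors + children

print("best timings:", [round(g, 2) for g in max(population, key=fitness)])
```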
They see a day when the cheaply built machines — less than a dollar apiece, Dr. Kaplan predicts — could be crammed into a canister and shot into a minefield. The hollow bodies would contain a simple power source and mine detectors; the caterpillars would wriggle across the terrain at random, stopping when they detect a likely mine. “There’s no need for high speed,” Dr. Kaplan said. “Slow and steady is fine.”
The team suggests the caterpillars could similarly be used in hazardous, hard-to-reach spots in nuclear reactors and spacecraft. They also see a role in internal medicine as crawling probes and sensors, a prospect that patients might find difficult to accept, but which might eventually help doctors navigate some of the body’s trickier passageways.
In trying to reproduce the caterpillar, the Tufts researchers are taking part in one of the biggest trends in robotics and locomotion studies, which are increasingly taking inspiration from the world of biology. Joseph Ayers of Northeastern University has created an artificial lobster. Ian Walker of Clemson University has a robotic arm that draws its inspiration from the elephant’s trunk and the octopus’s arm. There are robotic salamanders, snakes, cockroaches, fish and geckos.
“It’s a hot topic,” said Auke Jan Ijspeert, head of the Biologically Inspired Robotics Group at the Swiss Federal Institute of Technology. He created the “salamandra robotica” as a way to prove theories about how a salamander makes its twisting course. “I’m basically amazed by nature,” he said, “and how impressive animals are at solving the problem of locomotion control.”
Dr. Ijspeert called the Tufts project “very exciting, very new.” At the same time, he pointed out, animal models can only go so far. He tells his students, “You should not aim at blindly imitating nature.”
Evolution, the engine of development, is brilliant but not necessarily efficient, he said. “The biological solution is always a little bit messy — it’s based on previous systems,” he said. Working without precedent, he said with a touch of hubris, a researcher can shape a more elegant solution than nature has.
That blended approach helps to guide researchers at the Information Sciences Institute at the University of Southern California, where the “Superbot” can configure itself to squirm like a worm, crawl like a turtle, roll like a tire and more. Their work is financed in part by NASA, which wants robots for future missions that can get around in any number of ways. “Wheels are a marvelous invention, but they have their place,” said Peter Will, a U.S.C. researcher. Bicycling on sand is hard, he said, while walking on it is easy.
The U.S.C. institute has another layer of natural influence, said Wei-Min Shen, the director of the institute’s polymorphic robotics laboratory: communication among robots that is inspired by hormones. “Because there’s no fixed brain, we need a signal that will circulate within the system,” he said. Like Dr. Ijspeert, Dr. Shen says nature can be a bit limiting. When he has to get up to write on a whiteboard, he says, he thinks, “If I can reconfigure myself and take off my left arm and connect it to my right arm, I can do so without standing up or leaving my chair.”
Robotics researchers talk like that.
The road ahead could be far longer and more difficult than it seems today for the Tufts researchers, said Dr. Walker of Clemson. While expressing enthusiasm for the project, he cautioned that soft is hard. When he embarked on a project to create the flexible robotic arm more than five years ago, he had hoped to stick with soft materials. But “it’s very hard to engineer with all-soft components,” he said. “We had to make compromises along the way” to get the strength and force that the arm needed.
There were other challenges. Designing computer programs to operate the arm was something “we thought we would spend six months working out,” he recalled. “Two or three Ph.D. theses later, we finally understood the problem. Not that we had solved the problem. We understood the problem.”
“So,” he said, “I’ll be interested to see how closely, when all is said and done, the Tufts group are able to meet their goals.”
Dr. Trimmer said, “I’ve got a lot of confidence it will work.” And besides, he added, “If we don’t make the exact robot that you and I are discussing, what have we got? What we’ve got is an entirely new approach to motion control.”
2. Full-Mental Nudity
The arrival of mind-reading machines.
By William Saletan/Slate
Years ago, Woody Allen used to joke that he'd been thrown out of college as a freshman for cheating on his metaphysics final. "I looked within the soul of the boy sitting next to me," he confessed.
Today, the joke is on us. Cameras follow your car, GPS tracks your cell phone, software monitors your Web surfing, X-rays explore your purse, and airport scanners see through your clothes. Now comes the final indignity: machines that look into your soul.
With the aid of functional magnetic resonance imaging, neuroscientists have been hard at work on Allen's fantasy. Under controlled conditions, they can tell from a brain scan which of two images you're looking at. They can tell whether you're thinking of a face, an animal, or a scene. They can even tell which finger you're about to move.
But those feats barely scratch the brain's surface. Any animal can perceive objects and move limbs. To plumb the soul, you need a metaphysician. John-Dylan Haynes, a brilliant researcher at Germany's Bernstein Center for Computational Neuroscience, is leading the way. His mission, according to the center, is to predict thoughts and behavior from fMRI scans.
Haynes, a former philosophy student, is going for the soul's jugular. He's trying to clarify the physical basis of free will. "Why do we shape intentions in this way or another way?" he wonders. "Your wishes, your desires, your goals, your plans—that's the core of your identity." The best place to look for that core is in the brain's medial prefrontal cortex, which, he points out, is "especially involved in the initiation of willed movements and their protection against interference."
To get a clear snapshot of free will, Haynes designed an experiment that would isolate it from other mental functions. No objects to interpret; no physical movements to anticipate or execute; no reasoning to perform. Participants were put in an fMRI machine and were told they would soon be shown the word "select," followed a few seconds later by two numbers. Their job was to covertly decide, when they saw the "select" cue, whether to add or subtract the numbers they were about to see. Then, they were to perform the chosen calculation and punch a button corresponding to the correct answer. The snapshot was taken right after the "select" cue, when they had nothing to do but choose addition or subtraction.
Until this experiment, which was reported last month in Current Biology, nobody had ever tried to take a picture of free will. One reason is that fMRI is too crude to distinguish one abstract choice from another. It can only show which parts of the brain are demanding blood oxygen. That's too coarse to distinguish the configuration of cells that signifies addition from the configuration that signifies subtraction. So, Haynes used software to help the computer recognize complex patterns in the data. To dissect human thought, the computer had to emulate it.
Each participant took the test more than 250 times, choosing independently in each trial. The computer then looked at a sample of the scans, along with the final answers that revealed what choices had actually been made. It calculated a pattern and used this pattern to predict, from each participant's remaining scans, his or her decisions in the corresponding trials. Haynes checked the predictions—add or subtract—against the participants' answers. The computer got it right 71 percent of the time.
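That train-then-predict loop is standard pattern classification, and it can be mocked up in a few lines. In the sketch below, random numbers stand in for real voxel data (so accuracy will hover near chance rather than 71 percent), and the linear support-vector classifier is an assumption on my part, not necessarily the algorithm Haynes's group used.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Schematic of the decoding procedure: fit a classifier on scans whose
# add/subtract choices are known, then predict the held-out trials.

rng = np.random.default_rng(0)
n_trials, n_voxels = 256, 500
scans = rng.normal(size=(n_trials, n_voxels))  # fake fMRI activation patterns
choices = rng.integers(0, 2, size=n_trials)    # 0 = add, 1 = subtract

train_x, test_x, train_y, test_y = train_test_split(
    scans, choices, test_size=0.5, random_state=0)

clf = LinearSVC().fit(train_x, train_y)            # calculate the pattern
accuracy = (clf.predict(test_x) == test_y).mean()  # check the predictions
print(f"decoding accuracy: {accuracy:.0%}")
```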
I know what you're thinking: Why would anyone want a machine to read his mind? But imagine being paralyzed, unable to walk, type, or speak. Imagine a helmet full of electrodes, or a chip implanted in your head, that lets your brain tell your computer which key to press. Those technologies are already here. And why endure the agony of mental hunt-and-peck? Why not design computers that, like a smart secretary, can discern and execute even abstract intentions? That's what Haynes has in mind. You want to open a folder or an e-mail, and your computer does it. Your wish is its command.
But if machines can read your mind when you want them to, they can also read it when you don't. And your will isn't necessarily the one they obey. Already, scans have been used to identify brain signatures of disgust, drug cravings, unconscious racism, and suppressed sexual arousal, not to mention psychopathy and propensity to kill.
Haynes understands the objection to these scans—he calls it "mental privacy"—but he buys only half of it. He doesn't like the idea of companies scanning job applicants for loyalty or scanning customers for reactions to products (an emerging practice known as neuromarketing). But where criminal justice is at stake, as in the case of lie detection, he's for using the technology. Ruling it out, he argues, would "deny the innocent people the ability to prove their innocence" and would "only protect the people who are guilty."
I hear what he's saying. I'd love to have put Khalid Sheikh Mohammed through an fMRI before Sept. 11, 2001, instead of waiting six years for his confession. And I wish we'd scanned Mohamed Atta's brain before he boarded that flight out of Boston. But what Haynes is saying—and exposing—is almost more terrifying than terrorism. The brain is becoming just another accessible body part, searchable for threats and evidence. We can sift through your belongings, pat you down, study your nude form through your clothes, inspect your body cavities, and, if necessary, peer into your mind.
FMRI is just the first stage. Electrodes, infrared spectroscopy, and subtler magnetic imaging are next. Scanners will shrink. Image resolution and pattern-recognition software will improve.
But don't count out free will. To make human choice predictable, you first have to constrain it so that it's not really free. That's why Haynes confined his participants to arithmetic, gave them only two options, and forbade them to change their minds. They could have wrecked his experiment by defying any of those conditions. So could you, if somebody came at you with a scanner or an electrode helmet. To look into your soul and get the right answer, science, too, has to cheat. Somewhere, Woody Allen is laughing. I can feel it.
(William Saletan is Slate's national correspondent and author of Bearing Right: How Conservatives Won the Abortion War)
3. Pentagon Preps Mind Fields
By Noah Shachtman/Wired
The U.S. military is working on computers that can scan your mind and adapt to what you're thinking.
Since 2000, Darpa, the Pentagon's blue-sky research arm, has spearheaded a far-flung, nearly $70 million effort to build prototype cockpits, missile control stations and infantry trainers that can sense what's occupying their operators' attention, and adjust how they present information accordingly. Similar technologies are being employed to help intelligence analysts find targets more easily by tapping their unconscious reactions. It's all part of a broader Darpa push to radically boost the performance of American troops.
"Computers today, you have to learn how they work," says Navy Commander Dylan Schmorrow, who served as Darpa's first program manager for this Augmented Cognition project. He now works for the Office of Naval Research. "We want the computer to learn you, adapt to you."
So much of what's done today in the military involves staring at a computer screen -- parsing an intelligence report, keeping track of fellow soldiers, flying a drone airplane -- that it can quickly lead to information overload. Schmorrow and other Augmented Cognition (AugCog) researchers think they can overcome this, though.
The idea -- to grossly over-simplify -- is that people have more than one kind of working memory, and more than one kind of attention; there are separate slots in the mind for things written, things heard and things seen. By monitoring how taxed those areas of the brain are, it should be possible to change a computer's display, to compensate. If a person's getting too much visual information, send him a text alert. If that person is reading too much at once, present some of the data visually -- in a chart or map.
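Stated as code, the adaptation rule is almost trivial. The dispatcher below is a hypothetical illustration, not any fielded AugCog system; it assumes the brain sensors deliver visual and verbal load estimates already normalized to a 0-to-1 scale.

```python
# Hypothetical AugCog-style dispatcher: route each message to whichever
# working-memory channel the sensors say is less taxed right now.

OVERLOAD = 0.8  # illustrative threshold, not from any fielded system

def route_message(message: str, visual_load: float, verbal_load: float) -> str:
    if visual_load > OVERLOAD and verbal_load > OVERLOAD:
        return f"QUEUED until load drops: {message}"
    if visual_load > verbal_load:
        return f"TEXT ALERT: {message}"  # eyes are saturated, use words
    return f"MAP/CHART: {message}"       # reading is heavy, show a picture

print(route_message("enemy sighted to the northeast", 0.9, 0.3))
print(route_message("evacuation route updated", 0.2, 0.7))
```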
At Boeing Phantom Works, researchers are using AugCog technologies to design tomorrow's cockpits. The military expects its pilots to someday control entire squads of armed robotic planes. But supervising all those drones may be too much for one human mind to handle unassisted.
Boeing's prototype controller uses an fMRI to check just how overloaded a pilot's visual and verbal memories are. Then the system adjusts its interface -- popping the most important radar images up on the middle of the screen, suggesting what targets should be hit next and, eventually, taking over for the human entirely, once his brain becomes completely overwhelmed.
Honeywell took a similar approach in recent trials, helping test subjects navigate through a simulated urban battle zone. They avoided enemy ambushes and evacuated wounded colleagues, all while a stream of messages poured onto their handheld computers. EEG meters, attached to subjects' heads, slowed the messages down when the subjects became overwhelmed. Average medical evacuation times were sped up by more than 300 percent, and ambushes dropped by more than 380 percent, as a result.
Zack Lynch, executive director of the Neurotechnology Industry Organization, says he's a bit suspicious of the claims because the improvements sound almost too dramatic. But "all in all, there are clearly tremendous advances" being made under the AugCog program, he notes in an e-mail. "(That progress) will bring benefits well outside the defense community," he says. "All you have to do is imagine what Wall Street will do when they get their hands on technology that can increase trading performance."
The Boeing and Honeywell teams were two of many groups presenting last October in San Francisco, when 75 or so neuroscientists, human-computer interface specialists and military researchers gathered in a Union Square hotel for the second Augmented Cognition International conference. (Other gadgetry included a morphing, brain-monitoring Tomahawk missile controller, a software assistant for a ship's captain, and a next-gen simulator of Marine squads.)
Schmorrow, a skinny, affable, mile-a-minute Navy pilot with five graduate degrees, from computer science to experimental psychology, served as both emcee and ringmaster, buzzing around the conference room. Schmorrow's vision is "AugCog everywhere" -- alarm clocks that sense where you are in your sleep cycle, Blackberries that don't vibrate when you're in a meeting.
"With technology, we're constantly interrupting people, burdening people," Schmorrow explains. "My phone is ringing, my Blackberry is buzzing, I've gotten 20 e-mails since we started talking. We just want people to be able to focus. Give them a bit of peace."
Early in life, Schmorrow didn't see himself as a military man. He was a longhair playing in a "techno-industrial" band, until his grandmother, a World War II nurse, convinced him to apply to the Navy as a Christmas present to her. The recruiter told him he could see the world, study what he wanted and fly jets. Away he went.
Almost immediately, Schmorrow went into automation and cockpit design, as well as simulators meant to replicate pulling 10 Gs. Eventually, he connected with University of Virginia psychology professor Denny Proffitt, and they began to brainstorm.
"We began with the idea that there was too much information out there these days for anyone to comprehend," says Schmorrow. "So how can we present it in a way that people will remember? Proffitt tells me, 'And wouldn't it be even better if we could figure out what people were doing, what they were thinking, so we could present them with the right things?'"
Schmorrow took the idea to Darpa. In late 2000, the agency put him in charge of a new program in Augmented Cognition. Initially, the goal was to figure out how to monitor brain activity while it was happening -- and then have that affect a computer's display of information.
By the summer of 2003, in tests at the Navy's Space and Naval Warfare Systems Center near San Diego, they pulled it off. The next phase was even more ambitious: Schmorrow's researchers had to get that adaptive unit to work well enough to boost a user's working memory by as much as 500 percent. That launched more complicated experiments -- like Boeing's killer drone cockpit.
Now, more than six years into the program, Darpa's involvement in the Augmented Cognition program has mostly wound down. But the other military services -- as well as academic and corporate labs -- have picked up on the agency's efforts.
The work is far from over. In some scenarios, Boeing's AugCog controller shows only minor performance improvements over a more standard approach. At times, Honeywell's test subjects did their jobs more slowly when rigged up to the EEGs. Other AugCog demonstrations I saw were rudimentary, like the Navy first-person shooter that sends more lumbering bad guys in your direction if your heart rate drops. (Not that the game ever gets that challenging.) But the basic building blocks of such a system -- sensors that can monitor brain activity while it's happening, and algorithms to let a computer respond -- have now been put into place, because of the Darpa kickstart.
"We had this crazy notion," Schmorrow says, "and now it's real. It may take five years, or 10, to get into the field. But it's real."
Meanwhile, Darpa has started a new program, based largely on the same sensing technologies developed for AugCog: Neurotechnology for Intelligence Analysts. Even the best parsers of satellite imagery often miss the terrorist hideouts or missile silos hidden in the pictures taken from orbit. In tests, the Darpa project is improving these intelligence officers' accuracy by as much as 600 percent. The secret is tapping into their unconscious minds.
The brain's visual memory centers fire about 250 to 400 milliseconds after someone spots a target -- even if he doesn't realize what he's seen. Using infrared, magnetic and electrical sensors, researchers at Honeywell and the Oregon Health and Science University were able to use those unwitting neural spikes to pick likely "hot spots" in a satellite picture, where targets might be.
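The triage step amounts to ranking image tiles by the amplitude of the response they evoke. Here is a toy version, with invented amplitudes standing in for real sensor readings:

```python
# Toy neural triage: present satellite-image tiles rapidly, record the
# evoked response amplitude for each (the values here are invented), and
# surface the strongest unwitting spikes for an analyst to inspect first.

recorded = {
    "tile_A1": 0.12, "tile_A2": 0.81,
    "tile_B1": 0.07, "tile_B2": 0.64,
    "tile_C1": 0.15, "tile_C2": 0.09,
}

SPIKE_THRESHOLD = 0.5  # illustrative cutoff for a "likely target" response

hot_spots = sorted(
    (tile for tile, amp in recorded.items() if amp > SPIKE_THRESHOLD),
    key=lambda tile: -recorded[tile])

print("inspect first:", hot_spots)  # ['tile_A2', 'tile_B2']
```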
In one experiment, image arrays that usually took an experienced analyst an hour to scan were handled in 10 minutes. In another test, a smaller set of images that took about eight-and-a-half minutes to pore over, unaided, was scanned in about 80 seconds.
If these kinds of results can be repeated consistently, it could be a major advance. Satellite surveillance is on the rise -- and there aren't enough analysts to keep up with the work. If a neurotech system like this can do a basic triage of the images first, the chances of finding the pictorial equivalent of a needle in a haystack dramatically increase.
Researchers have been trying all kinds of ways to boost this rate. In the end, it may turn out, as Darpa officials note, that "the human visual system is still the best target detection apparatus" there is.
4. Brain Teasers
Games Without Frontiers
By Clive Thompson/Wired
A while ago, the science writer Steven Johnson was looking at an old IQ test known as the "Raven's Progressive Matrices." Developed in the 1930s, it shows you a set of geometric shapes and challenges you to figure out the next one in the series. It's supposed to determine your ability to do abstract reasoning, but as Johnson looked at the little cubic Raven figures, he was struck by something: They looked like Tetris.
A light bulb went off. If Tetris looked precisely like an IQ test, then maybe playing Tetris would help you do better at intelligence tests. Johnson spun this conceit into his brilliant book of last year, Everything Bad Is Good For You, in which he argued that video games actually make gamers smarter. With their byzantine key commands, obtuse rule-sets and dynamic simulations of everything from water physics to social networks, Johnson argued, video games require so much cognitive activity that they turn us into Baby Einsteins -- not dull robots.
I loved the book, but it made me wonder: If games can inadvertently train your brain, why doesn't someone make a game that does so intentionally?
I should have patented the idea. Next month, Nintendo is releasing Brain Age, a DS game based on the research of the Japanese neuroscientist Ryuta Kawashima. Kawashima found that if you measured the brain activity of someone who was concentrating on a single, complex task -- like studying quantum theory -- several parts of that person's brain would light up. But if you asked that person to answer a rapid-fire slew of tiny, simple problems -- like basic math questions -- his or her brain would light up everywhere.
Hence the design of Brain Age. It offers you nine different tests, some of which seem incredibly basic -- like answering flash-card math questions -- and others which are fiendishly tricky. At one point, the DS flashes a grid of numbers for one second, then hides the digits; you have to remember where they were located in the grid, in ascending order. After you've played a few rounds, the DS calculates your "brain age": how mentally nimble you are, compared to the statistical averages of other people Kawashima measured. Age 20 is the best you can do -- the apex of your mental powers, apparently -- and by playing Brain Age every day, you can become mentally younger and younger.
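The grid test is easy to mock up in a terminal. This toy version (grid size, timing and scoring all invented, and it assumes an ANSI-capable terminal for the screen clear) captures the flash-then-recall mechanic:

```python
import random
import time

# Toy flash-then-recall test: digits appear briefly at random grid cells,
# the screen is cleared, and you must locate them in ascending order.

SIZE = 3
cells = random.sample(range(SIZE * SIZE), 4)  # four occupied cells
digits = random.sample(range(1, 10), 4)       # four distinct digits
grid = dict(zip(cells, digits))

for row in range(SIZE):  # flash the grid for one second
    print(" ".join(str(grid.get(row * SIZE + col, ".")) for col in range(SIZE)))
time.sleep(1.0)
print("\033[2J\033[H", end="")  # clear the screen (ANSI escape codes)

score = 0
for digit in sorted(digits):
    guess = int(input(f"Which cell (0-{SIZE * SIZE - 1}) held the {digit}? "))
    score += int(grid.get(guess) == digit)
print(f"recalled {score} of 4 in ascending order")
```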
Now, the science here is a little dubious. The idea of a discrete brain age is about as phrenologically suspect as the increasingly disputed concept of IQ itself. Kawashima believes you improve your cognition by getting your brain to light up all over at once. But not all neuroscientists agree that this full-brain activity means you're thinking more intelligently.
I'm quibbling, though. The truth is, scientists have long known that you can get smarter and stay smarter by engaging in daily, brain-teasing activity -- and Brain Age certainly qualifies.
Indeed, for something that doesn't even seem like a normal "game," it's weirdly addictive. The math questions had me so frazzled that I emotionally regressed to about age ten. Brain Age also includes a Stroop test, which flashes the names of colors on screen in mismatched ink -- for example, the word "blue" printed in red -- and challenges you to name the color of the ink. As any psychologist will tell you, you can keep a lid on things for the first dozen words, but then your brain turns to jelly. My adrenaline was pumping harder than the first time I faced The Flood in Halo.
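The Stroop mechanic itself takes only a few lines. This console sketch is nothing like Nintendo's implementation; it assumes an ANSI-capable terminal so the color word can actually be rendered in a mismatched ink:

```python
import random

# Minimal console Stroop test: print a color *word* in a mismatched ink
# using ANSI escape codes, then ask for the ink color, not the word.

ANSI = {"red": "31", "green": "32", "yellow": "33", "blue": "34"}
COLORS = list(ANSI)

def one_trial() -> bool:
    word = random.choice(COLORS)
    ink = random.choice([c for c in COLORS if c != word])  # force a mismatch
    print(f"\033[{ANSI[ink]}m{word.upper()}\033[0m")       # e.g. BLUE in red ink
    answer = input("Ink color (not the word): ").strip().lower()
    return answer == ink

score = sum(one_trial() for _ in range(10))
print(f"{score}/10 correct")
```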
Plus, when a game actually judges your intellect? Man, that hits home. After my first round, Brain Age claimed I possessed the mind of a 68-year-old, and I nearly wept. I frantically plinked away at math tests for two hours until I got my score down to 33.
I had much the same response to PQ: Practical Intelligence Quotient, another brain-training game released in December. It plays much more like a regular platform-puzzler: You control a little man who inhabits a Tron-like, glowing grid-world composed of cubes. You move cubes into various configurations, which purportedly tests your "planning ability"; meanwhile, you trip a series of switches to open doors, which flexes your logical thinking.
PQ is hard: It plays like the most hellish Tomb Raider level you ever encountered. Indeed, with its spare, geometric shapes, PQ feels like the ur-game that lurks inside all other games -- puzzle-solving boiled down to its Platonic essence. Strip away all the medieval garb, gibbering monsters and postapocalyptic dungeons from most RPGs and stealth games, and you'd have something that looks pretty much like PQ.
Which is precisely Steven Johnson's point. Beneath the surface of every game, there's a gymnasium for your mind.
It would be pretty hilarious if games took seriously their role as cognitive food, and, like boxes of cereal, began proclaiming their nutritional value: "This game will stimulate your prefrontal cortex 500 percent more than an episode of Everybody Loves Raymond and 75 percent more than reading The Washington Post!!" But of course, the very fact that we still ruminate on whether games make you smarter or dumber is a symptom of how games are still coming of age in our mediasphere. Nobody sits around debating whether the act of reading stimulates your mind, after all.
But if you'll excuse me now, I've got to get back to some mental exercise. By this time tomorrow, I should be 24 years old.