Thursday, 02 December 2010

Discovery at Young Star Hints Magnetism Common to All Cosmic Jets

Astronomers have found the first evidence of a magnetic field in a jet of material ejected from a young star, a discovery that points toward future breakthroughs in understanding the nature of all types of cosmic jets and of the role of magnetic fields in star formation.
Radio-Infrared Image of IRAS 18162-2048: Radio jets emitted by the young star are shown in yellow on a background infrared image from the Spitzer Space Telescope. Yellow bars show the orientation of the magnetic field in the jet as measured by the VLA; green bars show the magnetic-field orientation in the dusty envelope surrounding the young star. Two other young stars are seen at the sides of the jet. Credit: Carrasco-Gonzalez et al., Curran et al., Bill Saxton, NRAO/AUI/NSF, NASA


Throughout the Universe, jets of subatomic particles are ejected by three phenomena: the supermassive black holes at the cores of galaxies, smaller black holes or neutron stars consuming material from companion stars, and young stars still in the process of gathering mass from their surroundings. Previously, magnetic fields were detected in the jets of the first two, but until now, magnetic fields had not been confirmed in the jets from young stars.
"Our discovery gives a strong hint that all three types of jets originate through a common process," said Carlos Carrasco-Gonzalez, of the Astrophysical Institute of Andalucia Spanish National Research Council (IAA-CSIC) and the National Autonomous University of Mexico (UNAM).
The astronomers used the National Science Foundation's Very Large Array (VLA) radio telescope to study a young star some 5,500 light-years from Earth, called IRAS 18162-2048. This star, possibly as massive as 10 Suns, is ejecting a jet 17 light-years long. Observing this object for 12 hours with the VLA, the scientists found that radio waves from the jet have a characteristic indicating they arose when fast-moving electrons interacted with magnetic fields. This characteristic, called polarization, gives a preferential alignment to the electric and magnetic fields of the radio waves.
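For context, this polarization signature is the standard fingerprint of synchrotron radiation. A textbook result (not stated in the article itself) is that electrons with a power-law energy distribution of index $p$ emit linearly polarized synchrotron radiation with a maximum fractional polarization

\[ \Pi_{\max} = \frac{p+1}{p+7/3} \approx 70\% \quad \text{for } p \approx 2\text{--}3 \]

in a perfectly ordered magnetic field; tangled fields and Faraday effects reduce the measured value, but any significant polarization points to an ordered field in the jet.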
"We see for the first time that a jet from a young star shares this common characteristic with the other types of cosmic jets," said Luis Rodriguez, of UNAM.
The discovery, the astronomers say, may allow them to gain an improved understanding of the physics of the jets as well as of the role magnetic fields play in forming new stars. The jets from young stars, unlike the other types, emit radiation that provides information on the temperatures, speeds, and densities within the jets. This information, combined with the data on magnetic fields, can improve scientists' understanding of how such jets work.
"In the future, combining several types of observations could give us an overall picture of how magnetic fields affect the young star and all its surroundings. This would be a big advance in understanding the process of star formation," Rodriguez said.
Carrasco-Gonzalez and Rodriguez worked with Guillem Anglada and Mayra Osorio of the Astrophysical Institute of Andalucia, Josep Marti of the University of Jaen in Spain, and Jose Torrelles of the University of Barcelona. The scientists reported their findings in the November 26 edition of Science.

Wednesday, 01 December 2010

Making Stars: Studies Show How Cosmic Dust and Gas Shape Galaxy Evolution

This series of images shows a simulation of galaxy formation occurring early in the history of the universe. The simulation was performed by Fermilab’s Nickolay Gnedin and the University of Chicago’s Andrey Kravtsov at the National Center for Supercomputing Applications in Urbana–Champaign. Yellow dots are young stars; blue fog shows the neutral gas; the red surface indicates molecular gas. The starry background has been added for aesthetic effect. (Nick Gnedin)

Astronomers find cosmic dust annoying when it blocks their view of the heavens, but without it the universe would be devoid of stars. Cosmic dust is the indispensable ingredient for making stars and for understanding how primordial diffuse gas clouds assemble themselves into full-blown galaxies.
“Formation of galaxies is one of the biggest remaining questions in astrophysics,” said Andrey Kravtsov, associate professor in astronomy & astrophysics at the University of Chicago.
Astrophysicists are moving closer to answering that question, thanks to a combination of new observations and supercomputer simulations, including those conducted by Kravtsov and Nick Gnedin, a physicist at Fermi National Accelerator Laboratory.
Gnedin and Kravtsov published new results based on their simulations in the May 1, 2010 issue of The Astrophysical Journal, explaining why stars formed more slowly in the early history of the universe than they did much later. The paper quickly came to the attention of Robert C. Kennicutt Jr., director of the University of Cambridge’s Institute of Astronomy and co-discoverer of one of the key observational findings about star formation in galaxies, known as the Kennicutt–Schmidt relation.
In the June 3, 2010 issue of Nature, Kennicutt noted that the recent spate of observations and theoretical simulations bodes well for the future of astrophysics. Of their Astrophysical Journal paper, Kennicutt wrote, “Gnedin and Kravtsov take a significant step in unifying these observations and simulations, and provide a prime illustration of the recent progress in the subject as a whole.”
Star-formation law
Kennicutt’s star-formation law relates the amount of gas in galaxies in a given area to the rate at which it turns into stars over the same area. The relation has been quite useful when applied to galaxies observed late in the history of the universe, but recent observations by Arthur Wolfe of the University of California, San Diego, and Hsiao-Wen Chen, assistant professor in astronomy and astrophysics at UChicago, indicate that the relation fails for galaxies observed during the first two billion years following the big bang.
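For reference, the Kennicutt–Schmidt relation is commonly written as a power law between surface densities (the form below is the standard textbook statement, not a value quoted in the article):

\[ \Sigma_{\mathrm{SFR}} = A\,\Sigma_{\mathrm{gas}}^{N}, \qquad N \approx 1.4, \]

where $\Sigma_{\mathrm{SFR}}$ is the star-formation rate per unit area and $\Sigma_{\mathrm{gas}}$ is the gas surface density over the same area.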
Gnedin and Kravtsov’s work successfully explains why. “What it shows is that at early stages of evolution, galaxies were much less efficient in converting their gas into stars,” Kravtsov said.
Stellar evolution leads to increasing abundance of dust, as stars produce elements heavier than helium, including carbon, oxygen, and iron, which are key elements in dust particles.
“Early on, galaxies didn’t have enough time to produce a lot of dust, and without dust it’s very difficult to form these stellar nurseries,” Kravtsov said. “They don’t convert the gas as efficiently as galaxies today, which are already quite dusty.”
The star-formation process begins when interstellar gas clouds become increasingly dense. At some point, in certain cold regions of these clouds, hydrogen atoms start combining to form molecules. A hydrogen molecule forms when two hydrogen atoms join. They do so inefficiently in empty space, but find each other more readily on the surface of a cosmic dust particle.
“The biggest particles of cosmic dust are like the smallest particles of sand on good beaches in Hawaii,” Gnedin said.
These hydrogen molecules are fragile and easily destroyed by the intense ultraviolet light emitted from massive young stars. But in some galactic regions, dark clouds, so called because of the dust they contain, form a layer that shields the hydrogen molecules from the destructive light of other stars.
Stellar nurseries
“I like to think about stars as being very bad parents, because they provide a bad environment for the next generation,” Gnedin joked. The dust therefore provides a protective environment for stellar nurseries, Kravtsov noted.
“There is a simple connection between the presence of dust in this diffuse gas and its ability to form stars, and that’s something that we modeled for the first time in these galaxy–formation simulations,” Kravtsov said. “It’s very plausible, but we don’t know for sure that that’s exactly what’s happening.”
The Gnedin–Kravtsov model also provides a natural explanation for why spiral galaxies predominantly fill the sky today, and why small galaxies form stars slowly and inefficiently.
“We usually see very thin disks, and those types of systems are very difficult to form in galaxy-formation simulations,” Kravtsov said.
That’s because astrophysicists have assumed that galaxies formed gradually through a series of collisions. The problem: simulations show that when galaxies merge, they form spheroidal structures that look more elliptical than spiral.
But early in the history of the universe, cosmic gas clouds were inefficient at making stars, so they collided before star formation occurred. “Those types of mergers can create a thin disk,” Kravtsov said.
As for small galaxies, their lack of dust production could account for their inefficient star formation. “All of these separate pieces of evidence that existed somehow all fell into one place,” Gnedin observed. “That’s what I like as a physicist because physics, in general, is an attempt to understand unifying principles behind different phenomena.”
More work remains to be done, however, with input from newly arrived postdoctoral fellows at UChicago and more simulations to be performed on even more powerful supercomputers. “That’s the next step,” Gnedin said.

UH Physicists Study Behavior of Enzyme Linked to Alzheimer's, Cancer

Margaret Cheung, assistant professor of physics at UH, and Antonios Samiotakis, a physics Ph.D. student, described their findings in a paper titled “Structure, function, and folding of phosphoglycerate kinase (PGK) are strongly perturbed by macromolecular crowding,” published in a recent issue of the journal Proceedings of the National Academy of Sciences, one of the world’s most-cited multidisciplinary scientific journals. The research was funded by a nearly $224,000 National Science Foundation grant in support of Samiotakis’ dissertation.

“Imagine you’re walking down the aisle toward an exit after a movie in a crowded theatre. The pace of your motion would be slowed down by the moving crowd and narrow space between the aisles. However, you can still maneuver your arm, stretch out and pat your friend on the shoulder who slept through the movie,” Cheung said. “This can be the same environment inside a crowded cell from the viewpoint of a protein, the workhorse of all living systems. Proteins always ‘talk’ to each other inside cells, and they pass information about what happens to the cell and how to respond promptly. Failure to do so may cause uncontrollable cell growth that leads to cancer or cause malfunction of a cell that leads to Alzheimer’s disease. Understanding a protein inside cells – in terms of structures and enzymatic activity – is important to shed light on preventing, managing or curing these diseases at a molecular level.”

Cheung, a theoretical physicist, and Martin Gruebele, her experimental collaborator at the University of Illinois at Urbana-Champaign, led a team that unlocked this mystery. Studying the PGK enzyme, Cheung used computer models that simulate the environment inside a cell. Biochemists typically study proteins in water, but such test tube research is limited because it cannot gauge how a protein actually functions inside a crowded cell, where it can interact with DNA, ribosomes and other molecules.

The PGK enzyme plays a key role in the process of glycolysis, which is the metabolic breakdown of glucose and other sugars that releases energy in the form of ATP. ATP molecules are basically like packets of fuel that power biological molecular motors. This conversion of food to energy is present in every organism, from yeast to humans. Malfunction of the glycolytic pathway has been linked to Alzheimer’s disease and cancer. Patients with reduced metabolic rates in the brain have been found to be at risk for Alzheimer’s disease, while out-of-control metabolic rates are believed to fuel the growth of malignant tumor cells.

Scientists had previously believed that a PGK enzyme shaped like Pac-Man had to undergo a dynamic hinge motion to perform its metabolic function. However, in the computer models mimicking the cell interior, Cheung found that the enzyme was already functioning in its closed Pac-Man state in the jam-packed surrounding. In fact, the enzyme was 15 times more active in the tight spaces of a crowded cell. This shows that in cell-like conditions the function of a protein is more active and efficient than in a dilute condition, such as a test tube. This finding can drastically transform how scientists view proteins and their behavior when the environment of a cell is taken into account.

“This work deepens researchers’ understanding of how proteins function, or don’t function, in real cell conditions,” Samiotakis said. “Understanding the impact of a crowded cell on the structure and dynamics of proteins can help researchers design efficient therapeutic means that will work better inside cells, with the goal to prevent diseases and improve human health.”

Cheung and Samiotakis’ computer simulations – performed using the supercomputers at the Texas Learning and Computation Center (TLC2) – were coupled with in vitro experiments by Gruebele and his team. Using the high-performance computing resources of TLC2 factored significantly in the success of their work.

“Picture having a type of medicine that can precisely recognize and target a key that causes Alzheimer’s or cancer inside a crowded cell. Envision, then, the ability to switch a sick cell like this back to its healthy form of interaction at a molecular level,” Cheung said. “This may become a reality in the near future. Our lab at UH is working toward that vision.”

Bacteria Use ‘Toxic Darts' to Disable Each Other, According to UCSB Scientists

(Santa Barbara, Calif.) –– In nature, it's a dog-eat-dog world, even in the realm of bacteria. Competing bacteria use "toxic darts" to disable each other, according to a new study by UC Santa Barbara biologists. Their research is published in the journal Nature.

Stephanie K. Aoki (front); Elie J. Diner, David Low, Christopher Hayes (back, left to right). Credit: George Foulsham, Office of Public Affairs, UCSB

Illustration of contact dependent growth inhibition (CDI). Credit: Stephanie K. Aoki

Image shows CDI+ E. coli bacteria (green) interacting with target bacteria lacking a CDI system (red). Credit: Stephanie K. Aoki
"The discovery of toxic darts could eventually lead to new ways to control disease-causing pathogens," said Stephanie K. Aoki, first author and postdoctoral fellow in UCSB's Department of Molecular, Cellular, and Developmental Biology (MCDB). "This is important because resistance to antibiotics is on the rise."
Second author Elie J. Diner, a graduate student in biomolecular sciences and engineering, said: "First we need to learn the rules of this bacterial combat. It turns out that there are many ways to kill your neighbors; bacteria carry a wide range of toxic darts."
The scientists studied many bacterial species, including some important pathogens. They found that bacterial cells have stick-like proteins on their surfaces, with toxic dart tips. These darts are delivered to competing neighbor cells when the bacteria touch. This process of touching and injecting a toxic dart is called "contact dependent growth inhibition," or CDI.
Some targets have a biological shield. Bacteria protected by an immunity protein can resist the enemy's disabling toxic darts. This immunity protein is called "contact dependent growth inhibition immunity." The protein inactivates the toxic dart.
The UCSB team discovered a wide variety of potential toxic-tip proteins carried by bacterial cells –– nearly 50 distinct types have been identified so far, according to Christopher Hayes, co-author and associate professor in MCDB. Each bacterial cell must also have immunity to its own toxic dart. Otherwise, carrying the ammunition would cause cell suicide.
Surprisingly, when a bacterial cell is attacked –– and has no immunity protein –– it may not die. However, it often ceases to grow. The cell is inactivated, inhibited from growth. Similarly, many antibiotics do not kill bacteria; they only prevent the bacteria from growing. Then the body flushes out the dormant cells.
Some toxic tips appear to function inside the targeted bacteria by cutting up enemy RNA so the cell can no longer synthesize protein and grow. Other toxic tips operate by cutting up enemy DNA, which prevents replication of the cell.
"Our data indicate that CDI systems are also present in a broad range of bacteria, including important plant and animal pathogens such as E. coli which causes urinary tract infections, and Yersinia species, including the causative agent of plague," said senior author David Low, professor of MCDB. "Bacteria may be using these systems to compete with one another in the soil, on plants, and in animals. It's an amazingly diverse world."
The team studied the bacterium responsible for soft rot in potatoes, called Dickeya dadantii. This bacterium also invades chicory leaves, chrysanthemums, and other vegetables and plants.
Funding for this research came from the National Science Foundation and the National Institutes of Health. The TriCounty Blood Bank also provided funding.
The research was performed in the Low and Hayes lab in MCDB. Important contributions were made by Stephen J. Poole, associate professor in MCDB, and by Peggy Cotter's lab when she was with MCDB. Cotter has since moved to the University of North Carolina School of Medicine. Other co-authors include Claire t'Kint de Roodenbeke, research associate; Brandt R. Burgess, postdoctoral fellow; Bruce A. Braaten, research scientist; Alison M. Jones, technician; and Julia S. Webb, graduate student.

Antihydrogen Trapped for First Time

Physicists working at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland, have succeeded in trapping antihydrogen — the antimatter equivalent of the hydrogen atom — a milestone that could soon lead to experiments on a form of matter that disappeared mysteriously shortly after the birth of the universe 14 billion years ago.
An octupole magnet was critical to trapping antihydrogen atoms. A simple octupole magnetic field is produced by eight bar magnets in a plane with their north and south poles arrayed radially to create a magnetic minimum at the center. The antihydrogen atom is trapped in the center because of its magnetic moment, which is itself equivalent to a tiny bar magnet. The bar magnets above and below the octupole plane in this artist's rendition represent the mirror magnets that keep the atoms from squirting out the ends of the trap. (Katie Bertsche)
The first artificially produced low energy antihydrogen atoms — consisting of a positron, or antimatter electron, orbiting an antiproton nucleus — were created at CERN in 2002, but until now the atoms have struck normal matter and annihilated in a flash of gamma-rays within microseconds of creation.
The ALPHA (Antihydrogen Laser PHysics Apparatus) experiment, an international collaboration that includes physicists from the University of California, Berkeley, and Lawrence Berkeley National Laboratory (LBNL), has now trapped 38 antihydrogen atoms, each for more than one-tenth of a second.
While the number and lifetime are insufficient to threaten the Vatican — in the 2000 novel and 2009 movie "Angels & Demons," a hidden vat of potentially explosive antihydrogen was buried under St. Peter's Basilica in Rome — it is a starting point for learning new physics, the researchers said.
"We are getting close to the point at which we can do some classes of experiments on the properties of antihydrogen," said Joel Fajans, UC Berkeley professor of physics, LBNL faculty scientist and ALPHA team member. "Initially, these will be crude experiments to test CPT symmetry, but since no one has been able to make these types of measurements on antimatter atoms at all, it's a good start."
CPT (charge-parity-time) symmetry is the hypothesis that physical interactions look the same if you flip the charge of all particles, change their parity — that is, invert their coordinates in space — and reverse time. Any differences between antihydrogen and hydrogen, such as differences in their atomic spectrum, automatically violate CPT, overthrow today's "standard model" of particles and their interactions, and may explain why antimatter, created in equal amounts during the universe's birth, is largely absent today.
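Stated formally (a standard formulation, not quoted from the paper): if $\Theta$ denotes the combined CPT operation, the symmetry requires

\[ \Theta\, H\, \Theta^{-1} = H, \]

where $H$ is the Hamiltonian governing the interactions; one direct consequence is that hydrogen and antihydrogen must have identical masses, magnetic moments, and spectral lines.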
The team's results were published online Nov. 17 in advance of print publication in the British journal Nature.
Antimatter, first predicted by physicist Paul Dirac in 1931, has the opposite charge of normal matter and annihilates completely in a flash of energy upon interaction with normal matter. While astronomers see no evidence of significant antimatter annihilation in space, antimatter is produced during high-energy particle interactions on Earth and in some decays of radioactive elements. UC Berkeley physicists Emilio Segre and Owen Chamberlain created antiprotons in the Bevatron accelerator at the Lawrence Radiation Laboratory, now LBNL, in 1955, confirming their existence and earning the scientists the 1959 Nobel Prize in physics.
Slow antihydrogen was produced at CERN in 2002 thanks to an antiproton decelerator that slowed antiprotons enough for them to be used in experiments that combined them with a cloud of positrons. The ATHENA experiment, a broad international collaboration, reported the first detection of cold antihydrogen, with the rival ATRAP experiment close behind.
The ATHENA experiment closed down in 2004, to be superseded by ALPHA, coordinated by Jeffrey Hangst of the University of Aarhus in Denmark. Since then, the ALPHA and ATRAP teams have competed to trap antihydrogen for experiments — in particular, laser experiments to measure the antihydrogen spectrum (the color with which it glows) and gravity measurements. Before the recent results, the CERN experiments had produced — only fleetingly — tens of millions of antihydrogen atoms, Fajans said.
ALPHA's approach was to cool antiprotons and compress them into a matchstick-size cloud (20 millimeters long and 1.4 millimeters in diameter). Then, using autoresonance, a technique developed by UC Berkeley visiting professor Lazar Friedland and first explored in plasmas by Fajans and former UC Berkeley graduate student Erik Gilson, the cloud of cold, compressed antiprotons is nudged to overlap a like-size positron cloud, where the two particles mate to form antihydrogen.
Joel Fajans, professor of physics (Photo by Niels Madsen)
All this happens inside a magnetic bottle that traps the antihydrogen atoms. The magnetic trap is a specially configured magnetic field that Fajans and then-UC Berkeley undergraduate Andrea Schmidt first proposed, using an unusual and expensive octupole superconducting magnet to create a more stable plasma.
"For the moment, we keep antihydrogen atoms around for at least 172 milliseconds — about a sixth of a second — long enough to make sure we have trapped them," said colleague Jonathan Wurtele, UC Berkeley professor of physics and LBNL faculty scientist. Wurtele collaborated with LBNL visitor Katia Gomberoff, staff members Alex Friedman, David Grote and Jean-Luc Vay and with Fajans to simulate the new and original magnetic configurations.
Trapping antihydrogen isn't easy, Fajans said, because it is a neutral, or chargeless, particle. Magnetic bottles are generally used to trap charged particles, such as ionized atoms. These charged particles spiral along magnetic field lines until they encounter an electric field that bounces them back towards the center of the bottle.
Neutral antihydrogen, however, would normally be unaffected by these fields. But the team takes advantage of the tiny magnetic moment of the antihydrogen atom to trap it using a steeply increasing field — a so-called magnetic mirror — that reflects the atom back toward the center. Because the magnetic moment is so small, the antihydrogen has to be very cold: less than about one-half degree above absolute zero (0.5 kelvin). That means the team had to slow down the antiprotons by a factor of one hundred billion from their initial energy emerging from the antiproton decelerator.
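The half-kelvin figure can be checked with a back-of-the-envelope calculation; the sketch below is our own, assuming the ground-state antihydrogen magnetic moment is about one Bohr magneton, and is not taken from the ALPHA paper.

    # Rough estimate of the magnetic well depth needed to hold 0.5 K antihydrogen.
    # Assumption (ours): the atom's magnetic moment is ~1 Bohr magneton.
    MU_B = 9.274e-24   # Bohr magneton, J/T
    K_B = 1.381e-23    # Boltzmann constant, J/K

    def well_depth_tesla(temperature_k):
        """Magnetic well depth (T) whose potential energy mu_B * dB
        equals the thermal energy k_B * T of the trapped atoms."""
        return K_B * temperature_k / MU_B

    print(well_depth_tesla(0.5))   # ~0.74 T of field variation holds 0.5 K atoms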
Once trapped, the experimenters sweep out the lingering antiprotons with an electric field, then shut off the mirror fields and let the trapped antihydrogen atoms annihilate with normal matter. Surrounding detectors are sensitive to the charged pions that result from the proton-antiproton annihilation. Cosmic rays can also set off the detector, but their straight-line tracks can be easily distinguished, Fajans said. A few antiprotons could potentially remain in the trap, and their annihilations would look similar to those of antihydrogen, but the physicists' simulations show that such events can also be successfully distinguished from antihydrogen annihilations.
During August and September of 2010, the team detected an antihydrogen atom in 38 of the 335 cycles of antiproton injection. Given that their detector efficiency is about 50 percent, the team calculated that it captured approximately 80 of the several million antihydrogen atoms produced during these cycles. Experiments in 2009 turned up six candidate antihydrogen atoms, but they have not been confirmed.
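The arithmetic behind that estimate is straightforward; a minimal sketch using only the numbers quoted above:

    # Numbers quoted in the article.
    detected_atoms = 38     # antihydrogen annihilations observed
    efficiency = 0.50       # approximate detector efficiency

    # If only ~half of trapped atoms are detected, the trapped total is about:
    print(detected_atoms / efficiency)   # 76, consistent with "approximately 80"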
ALPHA continues to detect antihydrogen atoms at an increasing rate as the experimenters learn how to better tune their experiment, Fajans said.
Of the 42 co-authors of the new paper, 10 are or were affiliated with UC Berkeley: Fajans; Wurtele; current graduate students Marcelo Baquero-Ruiz, Steve Chapman, Alex Povilus and Chukman So; former graduate student Will Bertsche; former sabbatical visitor Eli Sarid; and past visitors Daniel Silveira and Dirk van der Werf. Other UC Berkeley contributors to the research are former undergraduates Crystal Bray, Patrick Ko and Korana Burke, and former graduate student Erik Gilson. Other LBNL contributors include Alex Friedman, David Grote, Jean-Luc Vay and former visiting scientists Katia Gomberoff and Alon Deutsch.

Physicists Demonstrate a Four-Fold Quantum Memory

PASADENA, Calif. — Researchers at the California Institute of Technology (Caltech) have demonstrated quantum entanglement for a quantum state stored in four spatially distinct atomic memories.
Their work, described in the November 18 issue of the journal Nature, also demonstrated a quantum interface between the atomic memories—which represent something akin to a computer "hard drive" for entanglement—and four beams of light, thereby enabling the four-fold entanglement to be distributed by photons across quantum networks. The research represents an important achievement in quantum information science by extending the coherent control of entanglement from two to multiple (four) spatially separated physical systems of matter and light.
The proof-of-principle experiment, led by William L. Valentine Professor and professor of physics H. Jeff Kimble, helps to pave the way toward quantum networks. Similar to the Internet in our daily life, a quantum network is a quantum "web" composed of many interconnected quantum nodes, each of which is capable of rudimentary quantum logic operations (similar to the "AND" and "OR" gates in computers) utilizing "quantum transistors" and of storing the resulting quantum states in quantum memories. The quantum nodes are "wired" together by quantum channels that carry, for example, beams of photons to deliver quantum information from node to node. Such an interconnected quantum system could function as a quantum computer, or, as proposed by the late Caltech physicist Richard Feynman in the 1980s, as a "quantum simulator" for studying complex problems in physics.
Quantum entanglement is a quintessential feature of the quantum realm and involves correlations among components of the overall physical system that cannot be described by classical physics. Strangely, for an entangled quantum system, there exists no objective physical reality for the system's properties. Instead, an entangled system contains simultaneously multiple possibilities for its properties. Such an entangled system has been created and stored by the Caltech researchers.
Previously, Kimble's group entangled a pair of atomic quantum memories and coherently transferred the entangled photons into and out of the quantum memories (http://media.caltech.edu/press_releases/13115). For such two-component—or bipartite—entanglement, the subsystems are either entangled or not. But for multi-component entanglement with more than two subsystems—or multipartite entanglement—there are many possible ways to entangle the subsystems. For example, with four subsystems, all of the possible pair combinations could be bipartite entangled but not be entangled over all four components; alternatively, they could share a "global" quadripartite (four-part) entanglement.
Hence, multipartite entanglement is accompanied by increased complexity in the system. While this makes the creation and characterization of these quantum states substantially more difficult, it also makes the entangled states more valuable for tasks in quantum information science.
The fluorescence from the four atomic ensembles. These ensembles are the four quantum memories that store an entangled quantum state.
[Credit: Nature/Caltech/Akihisa Goban]
To achieve multipartite entanglement, the Caltech team used lasers to cool four collections (or ensembles) of about one million Cesium atoms, separated by 1 millimeter and trapped in a magnetic field, to within a few hundred millionths of a degree above absolute zero. Each ensemble can have atoms with internal spins that are "up" or "down" (analogous to spinning tops) and that are collectively described by a "spin wave" for the respective ensemble. It is these spin waves that the Caltech researchers succeeded in entangling among the four atomic ensembles.
The technique employed by the Caltech team for creating quadripartite entanglement is an extension of the theoretical work of Luming Duan, Mikhail Lukin, Ignacio Cirac, and Peter Zoller in 2001 for the generation of bipartite entanglement by the act of quantum measurement. This kind of "measurement-induced" entanglement for two atomic ensembles was first achieved by the Caltech group in 2005 (http://media.caltech.edu/press_releases/12776).
In the current experiment, entanglement was "stored" in the four atomic ensembles for a variable time, and then "read out"—essentially, transferred—to four beams of light. To do this, the researchers shot four "read" lasers into the four, now-entangled, ensembles. The coherent arrangement of excitation amplitudes for the atoms in the ensembles, described by spin waves, enhances the matter–light interaction through a phenomenon known as superradiant emission.
"The emitted light from each atom in an ensemble constructively interferes with the light from other atoms in the forward direction, allowing us to transfer the spin wave excitations of the ensembles to single photons," says Akihisa Goban, a Caltech graduate student and coauthor of the paper. The researchers were therefore able to coherently move the quantum information from the individual sets of multipartite entangled atoms to four entangled beams of light, forming the bridge between matter and light that is necessary for quantum networks.
The Caltech team investigated the dynamics by which the multipartite entanglement decayed while stored in the atomic memories. "In the zoology of entangled states, our experiment illustrates how multipartite entangled spin waves can evolve into various subsets of the entangled systems over time, and sheds light on the intricacy and fragility of quantum entanglement in open quantum systems," says Caltech graduate student Kyung Soo Choi, the lead author of the Nature paper. The researchers suggest that the theoretical tools developed for their studies of the dynamics of entanglement decay could be applied for studying the entangled spin waves in quantum magnets.
Further possibilities of their experiment include the expansion of multipartite entanglement across quantum networks and quantum metrology. "Our work introduces new sets of experimental capabilities to generate, store, and transfer multipartite entanglement from matter to light in quantum networks," Choi explains. "It signifies the ever-increasing degree of exquisite quantum control to study and manipulate entangled states of matter and light."
In addition to Kimble, Choi, and Goban, the other authors of the paper, "Entanglement of spin waves among four quantum memories," are Scott Papp, a former postdoctoral scholar in the Caltech Center for the Physics of Information now at the National Institute of Standards and Technology in Boulder, Colorado, and Steven van Enk, a theoretical collaborator and professor of physics at the University of Oregon, and an associate of the Institute for Quantum Information at Caltech.
This research was funded by the National Science Foundation, the National Security Science and Engineering Faculty Fellowship program at the U.S. Department of Defense (DOD), the Northrop Grumman Corporation, and the Intelligence Advanced Research Projects Activity.

Pushing Black-hole Mergers to the Extreme: RIT Scientists Achieve 100:1 Mass Ratio

‘David and Goliath’ scenario explores extreme mass ratios (Goliath wins)

Scientists have simulated, for the first time, the merger of two black holes of vastly different sizes, with one mass 100 times larger than the other. This extreme mass ratio of 100:1 breaks a barrier in the fields of numerical relativity and gravitational wave astronomy.
Until now, the problem of simulating the merger of binary black holes with extreme size differences had remained an unexplored region of black-hole physics.
“Nature doesn’t collide black holes of equal masses,” says Carlos Lousto, associate professor of mathematical sciences at Rochester Institute of Technology and a member of the Center for Computational Relativity and Gravitation. “They have mass ratios of 1:3, 1:10, 1:100 or even 1:1 million. This puts us in a better situation for simulating realistic astrophysical scenarios and for predicting what observers should see and for telling them what to look for.
“Leaders in the field believed solving the 100:1 mass ratio problem would take five to 10 more years and significant advances in computational power. It was thought to be technically impossible.”
“These simulations were made possible by advances both in the scaling and performance of relativity computer codes on thousands of processors, and advances in our understanding of how gauge conditions can be modified to self-adapt to the vastly different scales in the problem,” adds Yosef Zlochower, assistant professor of mathematical sciences and a member of the center.
A paper announcing Lousto and Zlochower’s findings was submitted for publication in Physical Review Letters.
The only prior simulation describing an extreme merger of black holes focused on a scenario involving a 1:10 mass ratio. Those techniques could not be expanded to a bigger scale, Lousto explained. To handle the larger mass ratios, he and Zlochower developed numerical and analytical techniques based on the moving puncture approach—a breakthrough, created with Manuela Campanelli, director of the Center for Computational Relativity and Gravitation, that led to one of the first simulations of black holes on supercomputers in 2005.
The flexible techniques Lousto and Zlochower advanced for this scenario also translate to spinning binary black holes and for cases involving smaller mass ratios. These methods give the scientists ways to explore mass ratio limits and for modeling observational effects.
Lousto and Zlochower used resources at the Texas Advanced Computing Center, home to the Ranger supercomputer, to process the massive computations. The computer, which has nearly 70,000 processors, took almost three months to complete the simulation describing the most extreme-mass-ratio merger of black holes to date.
“Their work is pushing the limit of what we can do today,” Campanelli says. “Now we have the tools to deal with a new system.”
Simulations like Lousto and Zlochower’s will help observational astronomers detect mergers of black holes with large size differentials using the future Advanced LIGO (Laser Interferometer Gravitational-wave Observatory) and the space probe LISA (Laser Interferometer Space Antenna). Simulations of black-hole mergers provide blueprints or templates for observational scientists attempting to discern signatures of massive collisions. Observing and measuring gravitational waves created when black holes coalesce could confirm a key prediction of Einstein’s general theory of relativity.

Saturday, 20 November 2010

Nanogenerators Grow Strong Enough to Power Small Conventional Electronics

Blinking numbers on a liquid-crystal display (LCD) often indicate that a device’s clock needs resetting.  But in the laboratory of Zhong Lin Wang at Georgia Tech, the blinking number on a small LCD signals the success of a five-year effort to power conventional electronic devices with nanoscale generators that harvest mechanical energy from the environment using an array of tiny nanowires.
LCD powered by a nanogenerator
Compressing a nanogenerator between two fingers is enough to drive a liquid-crystal display. (Courtesy: Zhong Lin Wang)
In this case, the mechanical energy comes from compressing a nanogenerator between two fingers, but it could also come from a heartbeat, the pounding of a hiker’s shoe on a trail, the rustling of a shirt, or the vibration of a heavy machine.  While these nanogenerators will never produce large amounts of electricity for conventional purposes, they could be used to power nanoscale and microscale devices – and even to recharge pacemakers or iPods.
Wang’s nanogenerators rely on the piezoelectric effect seen in crystalline materials such as zinc oxide, in which an electric charge potential is created when structures made from the material are flexed or compressed.  By capturing and combining the charges from millions of these nanoscale zinc oxide wires, Wang and his research team can produce as much as three volts – and up to 300 nanoamps.
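Those peak figures correspond to well under a microwatt, which is easy to verify (a sketch using the article's numbers; peak voltage and peak current need not occur simultaneously, so this is an upper bound):

    # Peak output quoted for the zinc oxide nanogenerator.
    peak_volts = 3.0      # V
    peak_amps = 300e-9    # 300 nanoamps, in A

    # Upper bound on instantaneous power, P = V * I:
    print(peak_volts * peak_amps)   # 9e-07 W, i.e. about 0.9 microwatts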
“By simplifying our design, making it more robust and integrating the contributions from many more nanowires, we have successfully boosted the output of our nanogenerator enough to drive devices such as commercial liquid-crystal displays, light-emitting diodes and laser diodes,” said Wang, a Regents’ professor in Georgia Tech’s School of Materials Science and Engineering.  “If we can sustain this rate of improvement, we will reach some true applications in healthcare devices, personal electronics, or environmental monitoring.”
Zhong Lin Wang with nanogenerator
Professor Zhong Lin Wang holds an earlier version of the nanogenerators developed using zinc oxide nanowires. (Click image for high-resolution version. Credit: Gary Meek)
Recent improvements in the nanogenerators, including a simpler fabrication technique, were reported online last week in the journal Nano Letters.  Earlier papers in the same journal and in Nature Communications reported other advances for the work, which has been supported by the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Energy, the U.S. Air Force, and the National Science Foundation.
“We are interested in very small devices that can be used in applications such as health care, environmental monitoring and personal electronics,” said Wang.  “How to power these devices is a critical issue.”
The earliest zinc oxide nanogenerators used arrays of nanowires grown on a rigid substrate and topped with a metal electrode.  Later versions embedded both ends of the nanowires in polymer and produced power by simple flexing.  Regardless of the configuration, the devices required careful growth of the nanowire arrays and painstaking assembly.
In the latest paper, Wang and his group members Youfan Hu, Yan Zhang, Chen Xu, Guang Zhu and Zetang Li reported on much simpler fabrication techniques.  First, they grew arrays of a new type of nanowire that has a conical shape.  These wires were cut from their growth substrate and placed into an alcohol solution.
Transferring nanowires
In a new technique for producing nanogenerators, researchers transfer vertically-aligned nanowires to a flexible substrate. (Courtesy of Zhong Lin Wang)
The solution containing the nanowires was then dripped onto a thin metal electrode and a sheet of flexible polymer film.  After the alcohol was allowed to dry, another layer was created.  Multiple nanowire/polymer layers were built up into a kind of composite, using a process that Wang believes could be scaled up to industrial production.
When flexed, these nanowire sandwiches – which are about two centimeters by 1.5 centimeters – generated enough power to drive a commercial display borrowed from a pocket calculator.
Wang says the nanogenerators are now close to producing enough current for a self-powered system that might monitor the environment for a toxic gas, for instance, then broadcast a warning.  The system would include capacitors able to store up the small charges until enough power was available to send out a burst of data.
While even the current nanogenerator output remains below the level required for such devices as iPods or cardiac pacemakers, Wang believes those levels will be reached within three to five years.  The current nanogenerator, he notes, is nearly 100 times more powerful than what his group had developed just a year ago.
Writing in a separate paper published in October in the journal Nature Communications, group members Sheng Xu, Benjamin J. Hansen and Wang reported on a new technique for fabricating piezoelectric nanowires from lead zirconate titanate – also known as PZT.  The material is already used industrially, but is difficult to grow because it requires temperatures of 650 degrees Celsius.
In the paper, Wang’s team reported the first chemical epitaxial growth of vertically-aligned single-crystal nanowire arrays of PZT on a variety of conductive and non-conductive substrates.  They used a process known as hydrothermal decomposition, which took place at just 230 degrees Celsius.
Transferring nanowires
In an improved technique for fabricating nanogenerators, researchers transfer vertical arrays of nanowires to a flexible substrate. (Credit: Inertia Films)
With a rectifying circuit to convert alternating current to direct current, the researchers used the PZT nanogenerators to power a commercial laser diode, demonstrating an alternative materials system for Wang’s nanogenerator family.  “This allows us the flexibility of choosing the best material and process for the given need, although the performance of PZT is not as good as zinc oxide for power generation,” he explained.
And in another paper published in Nano Letters, Wang and group members Guang Zhu, Rusen Yang and Sihong Wang reported on yet another advance boosting nanogenerator output.  Their approach, called “scalable sweeping printing,” includes a two-step process of (1) transferring vertically-aligned zinc oxide nanowires to a polymer receiving substrate to form horizontal arrays and (2) applying parallel strip electrodes to connect all of the nanowires together.
Using a single layer of this structure, the researchers produced an open-circuit voltage of 2.03 volts and a peak output power density of approximately 11 milliwatts per cubic centimeter.
“From when we got started in 2005 until today, we have dramatically improved the output of our nanogenerators,” Wang noted.  “We are within the range of what’s needed.  If we can drive these small components, I believe we will be able to power small systems in the near future.  In the next five years, I hope to see this move into application.”

Threshold Sea Surface Temperature for Hurricanes and Tropical Thunderstorms Is Rising

Scientists have long known that atmospheric convection in the form of hurricanes and tropical ocean thunderstorms tends to occur when sea surface temperature rises above a threshold. So how do rising ocean temperatures with global warming affect this threshold?  If the threshold does not rise, it could mean more frequent hurricanes. A new study by researchers at the International Pacific Research Center (IPRC) of the University of Hawaiʻi at Mānoa shows this threshold sea surface temperature for convection is rising under global warming at the same rate as that of the tropical oceans. Their paper appears in the Advance Online Publications of Nature Geoscience.
Tropical ocean thunderstorms. Image courtesy NASA

Average observed tropical SST (black) and estimated convective threshold SST (blue) rose together over the last 30 years.
In order to detect the annual changes in the threshold sea surface temperature (SST) for convection,  Nat Johnson, a postdoctoral fellow at IPRC, and Shang-Ping Xie, a professor of meteorology at IPRC and UH Mānoa, analyzed satellite estimates of tropical ocean rainfall spanning 30 years. They find that changes in the threshold temperature for convection closely follow the changes in average tropical sea surface temperature, which have both been rising approximately 0.1°C per decade.   
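As an illustration of the kind of trend comparison involved, here is a minimal sketch; the data below are synthetic placeholders, not the study's satellite record, and the study's actual method first infers the convective threshold from rainfall statistics:

    import numpy as np

    # Synthetic 30-year annual series (placeholders, not the study's data).
    years = np.arange(1980, 2010)
    rng = np.random.default_rng(0)
    mean_sst = 27.0 + 0.01 * (years - 1980) + rng.normal(0, 0.05, years.size)
    threshold_sst = 26.5 + 0.01 * (years - 1980) + rng.normal(0, 0.05, years.size)

    # Least-squares linear trends, converted from per year to per decade.
    trend_mean = 10 * np.polyfit(years, mean_sst, 1)[0]
    trend_thresh = 10 * np.polyfit(years, threshold_sst, 1)[0]
    print(trend_mean, trend_thresh)   # both near the ~0.1 degrees C per decade reported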
“The correspondence between the two time series is rather remarkable,” says lead author Johnson. “The convective threshold and average sea surface temperatures are so closely linked because of their relation with temperatures in the atmosphere extending several miles above the surface.”
The change in tropical upper atmospheric temperatures has been a controversial topic in recent years because of discrepancies between reported temperature trends from instruments and the expected trends under global warming according to global climate models. The measurements from instruments have shown less warming than expected in the upper atmosphere. The findings of Johnson and Xie, however, provide strong support that the tropical atmosphere is warming at a rate that is consistent with climate model simulations.
“This study is an exciting example of how applying our knowledge of physical processes in the tropical atmosphere can give us important information when direct measurements may have failed us,” Johnson notes.
The study notes further that global climate models project that the sea surface temperature threshold for convection will continue to rise in tandem with the tropical average sea surface temperature. If true, hurricanes and other forms of tropical convection will require warmer ocean surfaces for initiation over the next century.

Saturday, 23 October 2010

Popular Mechanics Breakthrough Awardees Announced

Artificial retina technology, seismic fuses and cell phone microscopes among the winners
A novel mixed-signal system-on-a-chip was developed as a platform for implantable prosthetics.
Popular Mechanics has recognized three NSF-funded projects with innovation Breakthrough Awards: an artificial retina returning sight to those who have lost it; a system that uses "controlled rocking" and energy-dissipating fuses to help buildings withstand earthquakes; and an inexpensive medical microscope built for cell-phones that allows doctors in rural villages to identify malaria-infected blood cells.
Those projects, along with 16 others, are featured in the November 2010 issue of Popular Mechanics. The awardees share the issue with the 2010 Leadership Award winner, J. Craig Venter, who is recognized for his breakthroughs in genomics over the last decade.
The artificial retina technology, funded for decades by several NSF biotechnology and transformational research programs, is an experimental system that helps individuals suffering from either macular degeneration or retinitis pigmentosa. Led by University of Southern California engineers Mark Humayun and Wentai Liu, the collaborative team--involving academia, government and industry--has been testing the system with more than 25 individuals, enabling them to progress from total blindness to being able to see shapes and navigate their local surroundings.
The controlled-rocking frame, developed as part of NSF's George E. Brown, Jr. Network for Earthquake Engineering Simulation program, uses replaceable, structural fuses that sacrifice themselves when an earthquake strikes, preserving the buildings they protect. Developed by a team led by Gregory Deierlein of Stanford University and Jerome Hajjar of Northeastern University, the system's self-centering frames and fuses help prevent post-earthquake displacement and are designed for fast and easy repair following a major earthquake, ensuring that an affected building can be reoccupied quickly.
The cellular-phone microscope, also funded by NSF's biotechnology programs, uses no lenses, lowering bulk and cost. The device--developed by NSF CAREER awardee Aydogan Ozcan of the University of California, Los Angeles--focuses LED light onto a slide positioned directly over a cell phone's camera, and after interpretation by software, can differentiate details so clearly that malaria-infected blood cells stand out from healthy ones.
NSF award abstracts contain summaries of Mark Humayun's NSF-funded projects, and the technology is described in a Science Nation video segment. There are also abstracts related to Wentai Liu's NSF-funded projects as well as of NSF NEES projects, and of Aydogan Ozcan's NSF-funded projects.

Exploring Sustainability for Energy and Buildings

The National Science Foundation (NSF) Office of Emerging Frontiers in Research and Innovation (EFRI) has announced 14 grants for fiscal year (FY) 2010, awarding nearly $28 million to 62 investigators at 24 institutions.
Over the next four years, teams of researchers will pursue transformative, fundamental research in two areas of great national need: storing energy from renewable sources; and engineering sustainable buildings.
Energy generated from renewable sources has long promised to satisfy demands for more and cleaner electricity. Because renewable sources, such as sunlight and wind, can produce greatly fluctuating amounts of energy, they are most effective when excess energy can be stored until it's needed.
EFRI research teams will pursue creative new approaches to making large-scale energy storage efficient and economical. They aim to construct capacitors and regenerative fuel cells with unprecedented capabilities to harness the sun's thermal energy, to produce chemical fuel on demand, and to trap off-shore wind as compressed air.
"These four projects take radically different approaches to storing excess energy from intermittent sources," said Geoffrey Prentice, lead EFRI program officer, "and success in any one of them could guide the development of new processes for large-scale energy storage."
A second set of EFRI research teams will investigate the critical flows and fluxes of buildings--power, heat, light, water, air and occupants--to create new paradigms for the design, construction, and operation of our homes and workplaces.
These researchers aim to improve the ability to predict and control building energy performance and environmental impacts, and to design systems that respond intelligently, in real-time, to changing conditions and to occupant input and needs. The investigations will pursue methods for reducing water consumption; for distributed, integrated approaches to renewable energy production, storage, and use; and for moderating temperature shifts through passive building technologies and systems.
"These awards are significant in the extent to which the research teams are multidisciplinary," said lead EFRI program officer Richard Fragaszy. Engineers, architects, and physical and social scientists are pooling their expertise to conduct the basic research needed to design and construct future homes and offices that will greatly reduce reliance on fossil fuels and demand for potable water, while improving the health and productivity of their occupants."
"These researchers are undertaking bold investigations in order to achieve major leaps in knowledge," said Sohi Rastegar, director of EFRI. "If they are successful, their findings have the potential to significantly impact global warming and promote U.S. energy independence."
The FY 2010 EFRI topics were developed in close collaboration with the NSF Directorates for Computer and Information Science and Engineering (CISE), Mathematical and Physical Sciences (MPS), and Social, Behavioral, and Economic Sciences (SBE), as well as with the U.S. Department of Energy (DOE) and U.S. Environment Protection Agency (EPA). DOE and EPA also contributed financial support to the EFRI SEED projects.
EFRI, established by the NSF Directorate for Engineering in 2007, seeks high-risk interdisciplinary research that has the potential to transform engineering and other fields.  The grants demonstrate the EFRI goal to inspire and enable researchers to expand the limits of our knowledge.

Decontaminating Dangerous Drywall

Nanomaterial in novel home-air treatment counters hazards from toxic drywall
Artist's interpretation of FAST-ACT absorbing and destroying toxins.
A nanomaterial originally developed to fight toxic waste is now helping reduce debilitating fumes in homes with corrosive drywall.
Developed by Kenneth Klabunde of Kansas State University, and improved over three decades with support from the National Science Foundation, the FAST-ACT material has been a tool of first responders since 2003.
Now, NanoScale Corporation of Manhattan, Kansas--the company Klabunde co-founded to market the technology--has incorporated FAST-ACT into a cartridge that breaks down the corrosive drywall chemicals.
Homeowners have reported that the chemicals--particularly sulfur compounds such as hydrogen sulfide and sulfur dioxide--have caused respiratory illnesses, wiring corrosion and pipe damage in thousands of U.S. homes with sulfur-rich, imported drywall.
"It is devastating to see what has happened to so many homeowners because of the corrosive drywall problem, but I am glad the technology is available to help," said Klabunde. "We've now adapted the technology we developed through years of research for FAST-ACT for new uses by homeowners, contractors and remediators."
The new cartridge, called OdorKlenz®, takes the place of the existing air filter in a home. The technology is similar to one that NanoScale adapted in 2008 for use by a major national disaster restoration service company for odors caused by fire and water damage.
In homes with corrosive drywall, the cartridge is used in combination with related FAST-ACT-based, OdorKlenz® surface treatments (and even laundry additives) to remove the sulfur-bearing compounds causing the corrosion issues.
Developers at NanoScale tested their new air cartridge in affected homes that were awaiting drywall removal; in every case, odor dropped to nearly imperceptible levels within 10 days or less, and corrosion was reduced.
The FAST-ACT material is a non-toxic mineral powder composed of the common elements magnesium, titanium and oxygen. While metal oxides similar to FAST-ACT have an established history of tackling dangerous compounds, none has been as effective.
NanoScale's breakthrough was a new method to manufacture the compound as a nanocrystalline powder with extremely high surface area--only a few tablespoons have as much surface area as a football field.
The surface area allows more interactions between the metal oxides and the toxic molecules, enabling the powder to capture and destroy a large quantity of hazardous chemicals ranging from sulfuric acid to VX gas--and their hazardous byproducts--in minutes.
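To make that comparison concrete, here is a quick back-of-envelope check in Python. The specific surface area and bulk density below are illustrative values typical of nanocrystalline metal-oxide powders, not published NanoScale specifications, and the field area is for a regulation American football field including end zones.

```python
# A back-of-envelope check of the football-field comparison. The specific
# surface area and bulk density are illustrative values typical of
# nanocrystalline metal-oxide powders, not NanoScale specifications.
SPECIFIC_SURFACE_AREA = 600.0    # m^2 per gram of powder (assumed)
BULK_DENSITY = 0.3               # grams per mL of loose powder (assumed)
TABLESPOON_ML = 14.8             # volume of one US tablespoon, mL

FOOTBALL_FIELD_M2 = 109.7 * 48.8   # ~5,350 m^2, including end zones

grams_per_tbsp = BULK_DENSITY * TABLESPOON_ML
area_per_tbsp_m2 = grams_per_tbsp * SPECIFIC_SURFACE_AREA

print(f"One tablespoon exposes ~{area_per_tbsp_m2:,.0f} m^2 of surface")
print(f"Football field equivalent: ~{FOOTBALL_FIELD_M2 / area_per_tbsp_m2:.1f} tablespoons")
# With these assumptions, about two tablespoons match a football field --
# consistent with the "few tablespoons" figure quoted above.
```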
"The concept of nano-sized adsorbents as both a cost-efficient, useful product for first responders and an effective product for in-home use illustrates the wide spectrum of possibilities for this technology," said NSF program director Rosemarie Wesson, who oversaw NanoScale's NSF Small Business Innovation Resarch grants.  "It is great to see the original work we supported to help reduce the toxic effects of hazardous spills now expand into other applications."
In coming months, the company is proposing its technology for use in Gulf Coast residences affected by the recent oil spill and other hazardous situations where airborne toxins are causing harm.
In addition to extensive support from NSF, the development of FAST-ACT and NanoScale's technology has been supported by grants from the U.S. Army, DTRA, the Air Force, DARPA, JPEO, MARCORSYSCOM, the CTTSO, USSOCOM, NIOSH, DOE, NIH and EPA.

Transformation Optics Make a U-turn for the Better

Powerful new microscopes able to resolve DNA molecules with visible light, superfast computers that use light rather than electronic signals to process information, and Harry Potter-esque invisibility cloaks are just some of the many thrilling promises of transformation optics. In this burgeoning field of science, light waves can be controlled at all length scales through the unique structuring of metamaterials, composites typically made from metals and dielectrics – insulators that become polarized in the presence of an electromagnetic field. The idea is to transform the physical space through which light travels, sometimes referred to as “optical space,” in a manner similar to the way in which outer space is transformed by the presence of a massive object under Einstein’s relativity theory.
Schematic on the left shows the scattering of surface plasmon polaritons (SPPs) on a metal-dielectric interface with a single protrusion. Schematic on right shows how SPP scattering is dramatically suppressed when the optical space around the protrusion is transformed. (Image courtesy of Zhang group)
So far transformation optics has delivered only hints of what the future might hold, a major roadblock being the difficulty of modifying the physical properties of metamaterials at the nano or subwavelength scale, mainly because of the metals. Now, a team of researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley has shown it might be possible to get around that metal roadblock. Using sophisticated computer simulations, they have demonstrated that with only moderate modifications of the dielectric component of a metamaterial, it should be possible to achieve practical transformation optics results. The key to success is the combination of transformation optics with another promising new field of science known as plasmonics.
A plasmon is an electronic surface wave that rolls through the sea of conduction electrons on a metal. Just as the energy in waves of light is carried in quantized particle-like units called photons, so, too, is plasmonic energy carried in quasi-particles called plasmons. Plasmons will interact strongly with photons at the interface of a metamaterial’s metal and dielectric to form yet another quasi-particle called a surface plasmon polariton (SPP). Manipulation of these SPPs is at the heart of the astonishing optical properties of metamaterials.
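How strongly such a bound wave hugs the interface follows from a standard textbook relation rather than anything specific to this work: the SPP wavevector at a flat metal-dielectric interface is k_spp = k0·sqrt(εm·εd/(εm + εd)). The sketch below evaluates it for an illustrative silver-air interface at a 633 nm free-space wavelength; the permittivity is a commonly quoted approximate value, not a number from the Berkeley study.

```python
# A minimal sketch, not from the paper: the textbook dispersion relation for
# a surface plasmon polariton bound to a flat metal-dielectric interface,
#     k_spp = k0 * sqrt(eps_m * eps_d / (eps_m + eps_d)),
# evaluated with an illustrative permittivity for silver near 633 nm.
import cmath
import math

wavelength = 633e-9           # free-space wavelength (m)
eps_metal = -18.3 + 0.55j     # silver near 633 nm (assumed textbook-style value)
eps_dielectric = 1.0          # air

k0 = 2 * math.pi / wavelength
k_spp = k0 * cmath.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))

spp_wavelength = 2 * math.pi / k_spp.real    # wavelength of the bound surface wave
propagation_length = 1 / (2 * k_spp.imag)    # 1/e intensity decay along the surface

print(f"SPP wavelength: {spp_wavelength * 1e9:.0f} nm "
      f"(vs {wavelength * 1e9:.0f} nm in free space)")
print(f"Propagation length: {propagation_length * 1e6:.0f} um")
```

With these numbers the SPP is slightly compressed relative to free-space light (~615 nm) and survives for tens of micrometers along the surface, which is why scattering losses at surface bumps matter so much.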
Yongmin Liu (left), Xiang Zhang and Thomas Zentgraf used sophisticated computer modeling to develop a “transformational plasmon optics” technique that may open the door to practical integrated, compact optical data-processing chips. (Photo by Roy Kaltschmidt, Berkeley Lab Public Affairs)
The Berkeley Lab-UC Berkeley team, led by Xiang Zhang, a principal investigator with Berkeley Lab’s Materials Sciences Division and director of UC Berkeley’s Nano-scale Science and Engineering Center (SINAM), modeled what they have dubbed a “transformational plasmon optics” approach that involved manipulation of the dielectric material adjacent to a metal but not the metal itself. This novel approach was shown to make it possible for SPPs to travel across uneven and curved surfaces over a broad range of wavelengths without suffering significant scattering losses. Using this model, Zhang and his team then designed a plasmonic waveguide with a 180 degree bend that won’t alter the energy or properties of a light beam as it makes the U-turn. They also designed a plasmonic version of a Luneburg lens, the ball-shaped lenses that can receive and resolve optical waves from multiple directions at once.
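For reference, a classical Luneburg lens realizes this behavior with a graded refractive index, n(r) = sqrt(2 − (r/R)²), running from √2 at the center to 1 at the rim; the plasmonic version reproduces an equivalent grading in the effective SPP mode index by shaping the dielectric. A minimal sketch of the classical profile, with the device-specific mapping from index to dielectric thickness left out:

```python
# A minimal sketch of the classical Luneburg index profile,
#     n(r) = sqrt(2 - (r/R)^2),
# which focuses parallel rays arriving from any direction onto the opposite
# rim. The plasmonic lens described above approximates this grading in the
# *effective* SPP mode index via dielectric thickness; that device-specific
# mapping is not reproduced here.
import numpy as np

R = 1.0    # lens radius (normalized)
for r in np.linspace(0.0, R, 6):
    n = np.sqrt(2.0 - (r / R) ** 2)
    print(f"r/R = {r:.1f}  ->  n = {n:.3f}")
# n falls from sqrt(2) ~ 1.414 at the center to exactly 1.0 at the rim, so
# the lens is index-matched to its surroundings and essentially reflectionless.
```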
“Since the metal properties in our metamaterials are completely unaltered, our transformational plasmon optics methodology provides a practical way for routing light at very small scales,” Zhang says. “Our findings reveal the power of the transformation optics technique to manipulate near-field optical waves, and we expect that many other intriguing plasmonic devices will be realized based on the methodology we have introduced.”
Zhang is the corresponding author of a paper describing this research that appeared in the journal Nano Letters, titled “Transformational Plasmon Optics.” Co-authoring the paper with Zhang were Yongmin Liu, Thomas Zentgraf and Guy Bartal.
Field distribution after the transformation of a dielectric material shows the nearly perfect transmission of a light beam around a 180 degree bend. (Image courtesy of Zhang group)
Says Liu, who was the lead author of the paper and is a post-doctoral researcher in Zhang’s UC Berkeley group, “In addition to the 180 degree plasmonic bend and the plasmonic Luneburg lens, our approach should also enable the design and production of beam splitters and shifters, and directional light emitters. The technique should also be applicable to the construction of integrated, compact optical data-processing chips.”
Zhang and his research group have been at the forefront of transformation optics research since 2008 when they became the first group to fashion metamaterials that were able to bend light backwards, a property known as “negative refraction,” which is unprecedented in nature. In 2009, he and his group created a “carpet cloak” from nanostructured silicon that concealed the presence of objects placed under it from optical detection.
For this latest work, Zhang and Liu, with Zentgraf and Bartal, departed from the traditional transformation optics focus on propagating waves and instead focused on the SPPs carried in the near-field (subwavelength) region.
“The intensity of SPPs is maximal at the interface between a metal and a dielectric medium and exponentially decays away from the interface,” says Zhang. “Since a significant portion of SPP energy is carried in the evanescent field outside the metal, that is, in the adjacent dielectric medium, we proposed to control SPPs by keeping the metal property fixed and only modifying the dielectric material based on the transformation optics technique.”
In this schematic of a plasmonic Luneburg lens, a dielectric cone is placed on a metal to focus surface plasmon polaritons. (Image courtesy of Zhang group)
Full-wave simulations of several transformed designs confirmed the methodology proposed by Zhang and his colleagues. The simulations further demonstrated that, with a prudent transformational plasmon optics scheme, the transformed dielectric materials can be isotropic and nonmagnetic, which boosts the practicality of this approach. The demonstration of a 180-degree plasmonic bend with almost perfect transmission was especially significant.
“Plasmonic waveguides are one of the most important components/elements in integrated plasmonic devices,” says Liu. “However, curvatures often lead to strong radiation loss that reduces the length for transferring an optical signal. Our 180 degree bend plasmonic bend is definitely important and will be useful in the future design of integrated plasmonic devices.”
Compared with silicon-based photonic devices, the use of plasmonics could help further scale down the total size of photonic devices and increase the interaction of light with certain materials, which should improve performance.
“We envision that the unique design flexibility of the transformational plasmon optics approach may open a new door to nano optics and photonic circuit design,” Zhang says.

Molecular Robots On the Rise

Researchers announce new breakthrough in developing molecules that behave like robots
Artist's conception of the molecular robot moving on a track.
As the robot walks on the substrate, it changes each piece by cleaving off a part.
Researchers from Columbia University, Arizona State University, the University of Michigan and the California Institute of Technology (Caltech) have created and programmed robots the size of a single molecule that can move independently across a nano-scale track. This development, outlined in the May 13 edition of the journal Nature, marks an important advance in the nascent fields of molecular computing and robotics, and could someday lead to molecular robots that can fix individual cells or assemble nanotechnology products.
The project was led by Milan N. Stojanovic, a faculty member in the division of experimental therapeutics at Columbia University, who partnered with Erik Winfree, associate professor of computer science at Caltech, Hao Yan, professor of chemistry and biochemistry at Arizona State University and an expert in DNA nanotechnology, and with Nils G. Walter, professor of chemistry and director of the Single Molecule Analysis in Real-Time (SMART) Center at the University of Michigan in Ann Arbor. Their work was supported in part by the National Science Foundation.
The word 'robot' makes most people think of solid machines that use computer circuitry to perform defined jobs, such as vacuuming a carpet or welding together automobiles. In recent years, scientists have worked to create robots that could also reliably perform useful tasks, but at a molecular level. This is, needless to say, not a simple endeavor, and it involves reprogramming DNA molecules to perform in specific ways. "Can you instruct a biomolecule to move and function in a certain way? Researchers at the interface of computer science, chemistry, biology and engineering are attempting to do just that," says Mitra Basu, a program director at NSF responsible for the agency's support for this research.
Recent molecular robotics work has produced so-called DNA walkers, strings of reprogrammed DNA with 'legs' that enabled them to briefly walk. Now this research team has shown that these molecular robotic 'spiders' can in fact move autonomously through a specially created, two-dimensional landscape. The spiders acted in rudimentary robotic ways, showing they are capable of starting motion, walking for a while, turning, and stopping.
In addition to being incredibly small--about 4 nanometers in diameter--the walkers also move slowly, covering 100 nanometers in 30 minutes to a full hour by taking approximately 100 steps. This is a significant improvement over previous DNA walkers, which were capable of only about three steps.
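Taking those reported figures at face value, the implied pace is easy to work out; the short sketch below is simple arithmetic on the article's numbers, not data from the Nature paper.

```python
# Simple arithmetic on the figures reported above: ~100 steps covering
# ~100 nm of track in roughly 30 to 60 minutes.
distance_nm = 100.0
steps = 100
times_min = (30.0, 60.0)

print(f"Implied net advance per step: ~{distance_nm / steps:.1f} nm")
for t in times_min:
    speed = distance_nm / t    # nm per minute
    print(f"Over {t:.0f} min: ~{speed:.1f} nm/min ({speed / 60:.3f} nm/s)")
# At roughly 2-3 nm per minute, a 4-nanometer spider needs a minute or two
# just to travel its own body length.
```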
While the field of molecular robotics is still emerging, it is possible that these tiny creations may someday have important medical applications. "This work one day may lead to effective control of chronic diseases such as diabetes or cancer," Basu says.
According to Stojanovic, these practical applications are still many years off, but he and his colleagues hope to continue their work into the foundations of this young field. Stojanovic believes that their future work will also require extensive collaborations, with each researcher bringing a specific expertise to the table, as was the case in the research published today. "If you take any one of us with our disciplinary expertise out of this," Stojanovic said in an interview, "this paper would have collapsed and never be what it is now."

National Science Foundation Launches Green Revolution Video Series

A fresh take on cutting edge research to develop and improve the use of clean energy sources
The Green Revolution video series features cutting edge research on clean energy technologies.

Today, the National Science Foundation released its "Green Revolution" video series online. These educational videos, each about five minutes long, feature scientists and engineers who are working to develop and improve the use of clean energy sources, new fuels and other energy-related technologies. Each segment explores the research carried out by men and women at the forefront of discovery and innovation related to clean energy, as well as some of the basic science behind their work.
During a speech at the National Academy of Sciences last year, President Obama spoke of the need to "spark a sense of wonder and excitement" in the nation's young people to pursue careers in science and engineering. As today's researchers develop new ways to convert sunlight to electricity, distribute energy with a smart grid and store clean energy with advanced batteries, they blaze the trail for future explorers and inventors. The Administration's "New Energy for America" plan will provide the opportunity for thousands of American students to pursue careers in science, engineering, and entrepreneurship related to clean energy. These young men and women will invent and help commercialize advanced energy technologies of the future to capture, share and store energy obtained from clean energy sources.
As part of a science and engineering initiative to educate students in fields contributing to energy science and engineering systems, the "Green Revolution" series aims to encourage people to ask questions and look beyond fossil fuels for innovative solutions to our ever-growing energy needs. Each episode is accompanied by supplemental materials for educators, including brief descriptions of the scientific concepts relevant to the technology. Additional videos are scheduled for release this summer.

Researchers Demonstrate New Understanding of Nanotube Growth

Scientists take first step toward controlling the growth of nanomaterials without catalysts
A schematic illustration showing the formation of nanotubes driven by screw dislocations.

Researchers at the University of Wisconsin-Madison recently made a significant first step toward understanding how to control the growth of the nanotubes, nanowires and nanorods needed for renewable energy and other technology applications.
These nanocrystalline materials, or nanomaterials, possess unique chemical and physical properties that can be used in solar energy panels, high energy density batteries, or better electronics. But, writing in the April 23 edition of the journal Science, a UW-Madison research team notes that the formation of these materials is often not well understood.
In particular, the question of how one-dimensional (1D) crystals sometimes grow without catalysts has been troublesome for scientists and engineers who need to produce large amounts of nanomaterials for specific applications. Working with zinc oxide, a common semiconductor widely used as a nanomaterial, assistant professor of chemistry Song Jin and his students demonstrated a new understanding of the subject by showing that nanotubes can form solely through the strain energy and screw dislocations that drive their growth.
Screw dislocations are frequently observed defects in crystalline materials that can be thought of as a screw or a helical staircase that can drive fast 1D crystal growth. But these defects produce strain and stress during nanotube formation.
"The strain energy within dislocation-driven nanomaterials dictates if the material will be hollow or solid," explained Jin. "Tubes are formed when strain energy gets large enough and the center of the nanostructure hollows out to relieve the stress and strain."
Jin and his students investigated the possibility of dislocation-driven growth by carefully regulating the amount of available nanotube building blocks in a solution. Essentially, the team controllably supersaturated a vat of water with zinc salts to favor dislocation-driven growth and observed the formation of solid nanowires and hollow nanotubes.
This mechanism differs from previous growth strategies in that it doesn't require a catalyst or a template to produce nanotubes, but relies solely on a dislocation and the strain energy associated with it. A catalyst is usually another metal nanoparticle such as gold added to the growth process, which in turn drives 1D growth.
"Once we understand that the growth of these 1D nanomaterials can be driven by screw dislocations, we can see nanotubes and nanowires are related." said Jin. "Furthermore, we've shown that growth of nanotubes or nanowires without the use of a catalyst in solutions can be rationally designed by following a fundamental understanding of crystal growth theories and the concept of dislocation-driven nanomaterial growth.
"For more practical purposes, we think that this work provides a general theoretical framework for controlling solution nanowire/nanotube growth that can be applicable to many other materials," Jin said.
Growing large amounts of nanotubes or nanowires from water-based solutions without a catalyst would be much more cost-effective. "This could open up the exploitation of large scale/low cost solution growth for rational catalyst-free synthesis of 1D nanomaterials," said Jin.

A Tiny Defect That May Create Smaller, Faster Electronics

Researchers at the University of South Florida have developed a technique to turn defects in graphene into tiny metallic wires
An artist's conception of a row of intentional molecular defects in a sheet of graphene.
When most of us hear the word 'defect', we think of a problem that has to be solved. But a team of researchers at the University of South Florida (USF) created a new defect that just might be a solution to a growing challenge in the development of future electronic devices.
The team, led by USF professors Matthias Batzill and Ivan Oleynik, whose discovery was published yesterday in the journal Nature Nanotechnology, has developed a new method for adding an extended defect to graphene, a one-atom-thick planar sheet of carbon atoms that many believe could replace silicon as the material for building virtually all electronics.
Working with graphene is not simple, however. For graphene to be useful in electronic applications like integrated circuits, small defects must be introduced into the material. Previous attempts at making the necessary defects have either proved inconsistent or produced samples in which only the edges of thin strips of graphene, known as graphene nanoribbons, possessed a useful defect structure. However, atomically sharp edges are difficult to create because of natural roughness and the uncontrolled chemistry of dangling bonds at the edges of the samples.
The USF team has now found a way to create a well-defined, extended defect several atoms across, containing octagonal and pentagonal carbon rings embedded in a perfect graphene sheet. This defect acts as a quasi-one-dimensional metallic wire that easily conducts electric current. Such defects could be used as metallic interconnects or elements of device structures of all-carbon, atomic-scale electronics.
So how did the team do it? Guided by theory, the experimental group exploited the self-organizing properties of a single-crystal nickel substrate, using the metallic surface as a scaffold to synthesize two graphene half-sheets translated relative to each other with atomic precision. When the two halves merged at the boundary, they naturally formed an extended line defect. Both scanning tunneling microscopy and electronic structure calculations confirmed that this novel one-dimensional carbon defect possessed a well-defined, periodic atomic structure, as well as metallic properties within the narrow strip along the defect.
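As a toy illustration of why a metallic line stands out in graphene (this is textbook physics, not the team's calculation), the standard nearest-neighbor tight-binding model gives pristine graphene bands that reach zero energy only at isolated Dirac points, so a defect contributing states at zero energy behaves as a conspicuous one-dimensional metal:

```python
# A toy check, not the team's calculation: the standard nearest-neighbor
# tight-binding bands of pristine graphene,
#     E(k) = +/- t * |1 + exp(i k.a1) + exp(i k.a2)|,
# vanish only at the Dirac points, so defect-bound states at zero energy
# stand out as a one-dimensional metal against a semimetallic background.
import numpy as np

t = 2.7    # nearest-neighbor hopping, eV (commonly quoted value)
a = 2.46   # graphene lattice constant, Angstrom

a1 = a * np.array([1.0, 0.0])
a2 = a * np.array([0.5, np.sqrt(3) / 2])
K = (2 * np.pi / a) * np.array([2.0 / 3.0, 0.0])   # a Dirac point

def band_energy(k):
    """Magnitude of the tight-binding band energy at wavevector k."""
    f = 1 + np.exp(1j * (k @ a1)) + np.exp(1j * (k @ a2))
    return t * abs(f)

for s in np.linspace(0.0, 1.0, 6):
    print(f"k = {s:.1f} * K  ->  E = +/-{band_energy(s * K):.2f} eV")
# The gap between the +/- bands closes to exactly zero only at K itself.
```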
This tiny wire could have a big impact on the future of computer chips and the myriad devices that use them. In 1965, Gordon Moore made the observation now called Moore's Law, which holds that the number of transistors that can be affordably built into a computer processor doubles roughly every two years. This law has proven remarkably durable, and society has reaped the benefits as computers become faster, smaller and cheaper. In recent years, however, some physicists and engineers have come to believe that without breakthroughs in new materials, we may soon reach the end of Moore's Law. As silicon-based transistors are brought down to their smallest possible scale, finding ways to pack more of them onto a single processor becomes increasingly difficult.
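The arithmetic behind that doubling compounds quickly, as a short projection shows; the 2010 baseline of one billion transistors is a round illustrative figure, not a claim about any particular chip:

```python
# The arithmetic behind the statement above: a doubling every two years
# compounds fast. The 2010 figure of one billion transistors is a round,
# illustrative baseline, not a claim about any particular processor.
def transistors(start_count: float, years_elapsed: float,
                doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward under Moore's Law."""
    return start_count * 2 ** (years_elapsed / doubling_period_years)

base_year, base_count = 2010, 1e9
for year in (2010, 2014, 2020, 2030):
    print(f"{year}: ~{transistors(base_count, year - base_year):.2e} transistors")
# One decade of doublings is a 32x increase; two decades is ~1,000x -- which
# is why hitting silicon's size limits matters so much.
```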
Metallic wires in graphene may help to sustain the rate of microprocessor improvement predicted by Moore's Law well into the future. The discovery by the USF team, made with support from the National Science Foundation, may open the door to the creation of the next generation of electronic devices using novel materials. Will this new discovery be available immediately in new nano-devices? Perhaps not right away, but it may provide a crucial step in the development of smaller, yet more powerful, electronic devices in the not-too-distant future.