Thursday, 02 December 2010

Discovery at Young Star Hints Magnetism Common to All Cosmic Jets

Astronomers have found the first evidence of a magnetic field in a jet of material ejected from a young star, a discovery that points toward future breakthroughs in understanding the nature of all types of cosmic jets and of the role of magnetic fields in star formation.
Radio-Infrared Image of IRAS 18162-2048

Radio jets emitted by the young star are shown in yellow on a background infrared image from the Spitzer Space Telescope. Yellow bars show the orientation of the magnetic field in the jet as measured by the VLA; green bars show the magnetic-field orientation in the dusty envelope surrounding the young star. Two other young stars are seen at the sides of the jet.
CREDIT: Carrasco-Gonzalez et al., Curran et al., Bill Saxton, NRAO/AUI/NSF, NASA


Throughout the Universe, jets of subatomic particles are ejected by three phenomena: the supermassive black holes at the cores of galaxies, smaller black holes or neutron stars consuming material from companion stars, and young stars still in the process of gathering mass from their surroundings. Magnetic fields had previously been detected in the jets produced by the first two, but until now they had not been confirmed in the jets from young stars.
"Our discovery gives a strong hint that all three types of jets originate through a common process," said Carlos Carrasco-Gonzalez, of the Astrophysical Institute of Andalucia Spanish National Research Council (IAA-CSIC) and the National Autonomous University of Mexico (UNAM).
The astronomers used the National Science Foundation's Very Large Array (VLA) radio telescope to study a young star some 5,500 light-years from Earth, called IRAS 18162-2048. This star, possibly as massive as 10 Suns, is ejecting a jet 17 light-years long. Observing this object for 12 hours with the VLA, the scientists found that radio waves from the jet have a characteristic indicating they arose when fast-moving electrons interacted with magnetic fields. This characteristic, called polarization, gives a preferential alignment to the electric and magnetic fields of the radio waves.
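For textbook context (this formula is standard synchrotron theory, not a number quoted by Carrasco-Gonzalez et al.): electrons with a power-law energy distribution N(E) ∝ E^(-p) radiating in a perfectly ordered magnetic field produce linearly polarized emission with a maximum fractional polarization

    \Pi_{\max} = \frac{p+1}{p+7/3},

which is roughly 70 percent for p between 2 and 3. Real jets show much lower polarization because the field is only partially ordered, so even a modest measured polarization is a robust signature of an organized magnetic field.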
"We see for the first time that a jet from a young star shares this common characteristic with the other types of cosmic jets," said Luis Rodriguez, of UNAM.
The discovery, the astronomers say, may allow them to gain an improved understanding of the physics of the jets as well as of the role magnetic fields play in forming new stars. The jets from young stars, unlike the other types, emit radiation that provides information on the temperatures, speeds, and densities within the jets. This information, combined with the data on magnetic fields, can improve scientists' understanding of how such jets work.
"In the future, combining several types of observations could give us an overall picture of how magnetic fields affect the young star and all its surroundings. This would be a big advance in understanding the process of star formation," Rodriguez said.
Carrasco-Gonzalez and Rodriguez worked with Guillem Anglada and Mayra Osorio of the Astrophysical Institute of Andalucia, Josep Marti of the University of Jaen in Spain, and Jose Torrelles of the University of Barcelona. The scientists reported their findings in the November 26 edition of Science.

Wednesday, 01 December 2010

Making Stars: Studies Show How Cosmic Dust and Gas Shape Galaxy Evolution

This series of images shows a simulation of galaxy formation occurring early in the history of the universe. The simulation was performed by Fermilab’s Nickolay Gnedin and the University of Chicago’s Andrey Kravtsov at the National Center for Supercomputing Applications in Urbana–Champaign. Yellow dots are young stars. Blue fog shows the neutral gas. Red surface indicates molecular gas. The starry background has been added for aesthetic effect.
(Nick Gnedin)

Astronomers find cosmic dust annoying when it blocks their view of the heavens, but without it the universe would be devoid of stars. Cosmic dust is an indispensable ingredient for making stars and a key to understanding how primordial diffuse gas clouds assemble themselves into full–blown galaxies.
“Formation of galaxies is one of the biggest remaining questions in astrophysics,” said Andrey Kravtsov, associate professor in astronomy & astrophysics at the University of Chicago.
Astrophysicists are moving closer to answering that question, thanks to a combination of new observations and supercomputer simulations, including those conducted by Kravtsov and Nick Gnedin, a physicist at Fermi National Accelerator Laboratory.
Gnedin and Kravtsov published new results based on their simulations in the May 1, 2010 issue of The Astrophysical Journal, explaining why stars formed more slowly in the early history of the universe than they did much later. The paper quickly came to the attention of Robert C. Kennicutt Jr., director of the University of Cambridge’s Institute of Astronomy and co–discoverer of one of the key observational findings about star formation in galaxies, known as the Kennicutt–Schmidt relation.
In the June 3, 2010 issue of Nature, Kennicutt noted that the recent spate of observations and theoretical simulations bodes well for the future of astrophysics. Of Gnedin and Kravtsov’s Astrophysical Journal paper, he wrote, “Gnedin and Kravtsov take a significant step in unifying these observations and simulations, and provide a prime illustration of the recent progress in the subject as a whole.”
Star–formation law
Kennicutt’s star–formation law relates the amount of gas in galaxies in a given area to the rate at which it turns into stars over the same area. The relation has been quite useful when applied to galaxies observed late in the history of the universe, but recent observations by Arthur Wolfe of the University of California, San Diego, and Hsiao–Wen Chen, assistant professor in astronomy and astrophysics at UChicago, indicate that the relation fails for galaxies observed during the first two billion years following the big bang.
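In its most commonly quoted form (a textbook statement of the relation, not taken verbatim from the papers discussed here), the Kennicutt–Schmidt law is a power law between surface densities:

    \Sigma_{\mathrm{SFR}} \propto \Sigma_{\mathrm{gas}}^{N}, \qquad N \approx 1.4,

where \Sigma_{\mathrm{SFR}} is the star-formation rate per unit area of a galaxy's disk and \Sigma_{\mathrm{gas}} is the gas surface density over the same area. The early galaxies observed by Wolfe and Chen form stars well below the rate this relation would predict for their gas content, which is the sense in which the relation "fails" for them.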
Gnedin and Kravtsov’s work successfully explains why. “What it shows is that at early stages of evolution, galaxies were much less efficient in converting their gas into stars,” Kravtsov said.
Stellar evolution leads to an increasing abundance of dust, as stars produce elements heavier than helium, including carbon, oxygen, and iron, which are key elements in dust particles.
“Early on, galaxies didn’t have enough time to produce a lot of dust, and without dust it’s very difficult to form these stellar nurseries,” Kravtsov said. “They don’t convert the gas as efficiently as galaxies today, which are already quite dusty.”
The star–formation process begins when interstellar gas clouds become increasingly dense. At some point the hydrogen atoms start combining to form molecules in certain cold regions of these clouds. A hydrogen molecule forms when two hydrogen atoms join. They do so inefficiently in empty space, but find each other more readily on the surface of a cosmic dust particle.
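Schematically, in standard interstellar-chemistry shorthand (not spelled out in the press release itself), the grain acts as both meeting place and heat sink:

    \mathrm{H} + \mathrm{H} \xrightarrow{\ \text{grain surface}\ } \mathrm{H}_2,

with the dust grain absorbing the roughly 4.5 eV of binding energy released. Two hydrogen atoms colliding in empty space have no efficient way to shed that energy and usually just fly apart, which is why molecule formation is so slow without dust.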
“The biggest particles of cosmic dust are like the smallest particles of sand on good beaches in Hawaii,” Gnedin said.
These hydrogen molecules are fragile and easily destroyed by the intense ultraviolet light emitted from massive young stars. But in some galactic regions dark clouds, so–called because of the dust they contain, form a layer that shields the hydrogen molecules from the destructive light of other stars.
Stellar nurseries
“I like to think about stars as being very bad parents, because they provide a bad environment for the next generation,” Gnedin joked. The dust therefore provides a protective environment for stellar nurseries, Kravtsov noted.
“There is a simple connection between the presence of dust in this diffuse gas and its ability to form stars, and that’s something that we modeled for the first time in these galaxy–formation simulations,” Kravtsov said. “It’s very plausible, but we don’t know for sure that that’s exactly what’s happening.”
The Gnedin–Kravtsov model also provides a natural explanation for why spiral galaxies predominantly fill the sky today, and why small galaxies form stars slowly and inefficiently.
“We usually see very thin disks, and those types of systems are very difficult to form in galaxy–formation simulations,” Kravtsov said.
That’s because astrophysicists have assumed that galaxies formed gradually through a series of collisions. The problem: simulations show that when galaxies merge, they form spheroidal structures that look more elliptical than spiral.
But early in the history of the universe, cosmic gas clouds were inefficient at making stars, so they collided before star formation occurred. “Those types of mergers can create a thin disk,” Kravtsov said.
As for small galaxies, their lack of dust production could account for their inefficient star formation. “All of these separate pieces of evidence that existed somehow all fell into one place,” Gnedin observed. “That’s what I like as a physicist because physics, in general, is an attempt to understand unifying principles behind different phenomena.”
More work remains to be done, however, with input from newly arrived postdoctoral fellows at UChicago and more simulations to be performed on even more powerful supercomputers. “That’s the next step,” Gnedin said.

UH Physicists Study Behavior of Enzyme Linked to Alzheimer's, Cancer

Margaret Cheung, assistant professor of physics at UH, and Antonios Samiotakis, a physics Ph.D. student, described their findings in a paper titled “Structure, function, and folding of phosphoglycerate kinase (PGK) are strongly perturbed by macromolecular crowding,” published in a recent issue of the journal Proceedings of the National Academy of Sciences, one of the world’s most-cited multidisciplinary scientific serials. The research was funded by a nearly $224,000 National Science Foundation grant in support of Samiotakis’ dissertation.

“Imagine you’re walking down the aisle toward an exit after a movie in a crowded theatre. The pace of your motion would be slowed down by the moving crowd and narrow space between the aisles. However, you can still maneuver your arm, stretch out and pat your friend on the shoulder who slept through the movie,” Cheung said. “This can be the same environment inside a crowded cell from the viewpoint of a protein, the workhorse of all living systems. Proteins always ‘talk’ to each other inside cells, and they pass information about what happens to the cell and how to respond promptly. Failure to do so may cause uncontrollable cell growth that leads to cancer or cause malfunction of a cell that leads to Alzheimer’s disease. Understanding a protein inside cells – in terms of structures and enzymatic activity – is important to shed light on preventing, managing or curing these diseases at a molecular level.”

Cheung, a theoretical physicist, and Martin Gruebele, her experimental collaborator at the University of Illinois at Urbana-Champaign, led a team that unlocked this mystery. Studying the PGK enzyme, Cheung used computer models that simulate the environment inside a cell. Biochemists typically study proteins in water, but such test tube research is limited because it cannot gauge how a protein actually functions inside a crowded cell, where it can interact with DNA, ribosomes and other molecules.

The PGK enzyme plays a key role in the process of glycolysis, which is the metabolic breakdown of glucose and other sugars that releases energy in the form of ATP. ATP molecules are basically like packets of fuel that power biological molecular motors. This conversion of food to energy is present in every organism, from yeast to humans. Malfunction of the glycolytic pathway has been linked to Alzheimer’s disease and cancer. Patients with reduced metabolic rates in the brain have been found to be at risk for Alzheimer’s disease, while out-of-control metabolic rates are believed to fuel the growth of malignant tumor cells.

Scientists had previously believed that a PGK enzyme shaped like Pac-Man had to undergo a dynamic hinge motion to perform its metabolic function. However, in the computer models mimicking the cell interior, Cheung found that the enzyme was already functioning in its closed Pac-Man state in the jam-packed surroundings. In fact, the enzyme was 15 times more active in the tight spaces of a crowded cell. This shows that in cell-like conditions a protein can be more active and efficient than in a dilute environment, such as a test tube. This finding can drastically transform how scientists view proteins and their behavior when the environment of a cell is taken into account.

“This work deepens researchers’ understanding of how proteins function, or don’t function, in real cell conditions,” Samiotakis said. “Understanding the impact of a crowded cell on the structure and dynamics of proteins can help researchers design efficient therapeutic means that will work better inside cells, with the goal to prevent diseases and improve human health.”

Cheung and Samiotakis’ computer simulations – performed using the supercomputers at the Texas Learning and Computation Center (TLC2) – were coupled with in vitro experiments by Gruebele and his team. Access to the high-performance computing resources of TLC2 factored significantly in the success of their work.

“Picture having a type of medicine that can precisely recognize and target a key that causes Alzheimer’s or cancer inside a crowded cell. Envision, then, the ability to switch a sick cell like this back to its healthy form of interaction at a molecular level,” Cheung said. “This may become a reality in the near future. Our lab at UH is working toward that vision.”

Bacteria Use ‘Toxic Darts' to Disable Each Other, According to UCSB Scientists

(Santa Barbara, Calif.) –– In nature, it's a dog-eat-dog world, even in the realm of bacteria. Competing bacteria use "toxic darts" to disable each other, according to a new study by UC Santa Barbara biologists. Their research is published in the journal Nature.

Stephanie K. Aoki (front); Elie J. Diner, David Low, Christopher Hayes (back, left to right). Credit: George Foulsham, Office of Public Affairs, UCSB

Illustration of contact dependent growth inhibition (CDI). Credit: Stephanie K. Aoki

CDI+ E. coli bacteria (green) interacting with target bacteria lacking a CDI system (red). Credit: Stephanie K. Aoki
"The discovery of toxic darts could eventually lead to new ways to control disease-causing pathogens," said Stephanie K. Aoki, first author and postdoctoral fellow in UCSB's Department of Molecular, Cellular, and Developmental Biology (MCDB). "This is important because resistance to antibiotics is on the rise."
Second author Elie J. Diner, a graduate student in biomolecular sciences and engineering, said: "First we need to learn the rules of this bacterial combat. It turns out that there are many ways to kill your neighbors; bacteria carry a wide range of toxic darts."
The scientists studied many bacterial species, including some important pathogens. They found that bacterial cells have stick-like proteins on their surfaces, with toxic dart tips. These darts are delivered to competing neighbor cells when the bacteria touch. This process of touching and injecting a toxic dart is called "contact dependent growth inhibition," or CDI.
Some targets have a biological shield. Bacteria protected by an immunity protein can resist the enemy's disabling toxic darts. This immunity protein is called "contact dependent growth inhibition immunity." The protein inactivates the toxic dart.
The UCSB team discovered a wide variety of potential toxic-tip proteins carried by bacterial cells –– nearly 50 distinct types have been identified so far, according to Christopher Hayes, co-author and associate professor in MCDB. Each bacterial cell must also have immunity to its own toxic dart. Otherwise, carrying the ammunition would cause cell suicide.
Surprisingly, when a bacterial cell is attacked –– and has no immunity protein –– it may not die. However, it often ceases to grow. The cell is inactivated, inhibited from growth. Similarly, many antibiotics do not kill bacteria; they only prevent the bacteria from growing. Then the body flushes out the dormant cells.
Some toxic tips appear to function inside the targeted bacteria by cutting up enemy RNA so the cell can no longer synthesize protein and grow. Other toxic tips operate by cutting up enemy DNA, which prevents replication of the cell.
"Our data indicate that CDI systems are also present in a broad range of bacteria, including important plant and animal pathogens such as E. coli which causes urinary tract infections, and Yersinia species, including the causative agent of plague," said senior author David Low, professor of MCDB. "Bacteria may be using these systems to compete with one another in the soil, on plants, and in animals. It's an amazingly diverse world."
The team studied the bacterium responsible for soft rot in potatoes, called Dickeya dadantii. This bacterium also invades chicory leaves, chrysanthemums, and other vegetables and plants.
Funding for this research came from the National Science Foundation and the National Institutes of Health. The TriCounty Blood Bank also provided funding.
The research was performed in the Low and Hayes lab in MCDB. Important contributions were made by Stephen J. Poole, associate professor in MCDB, and by Peggy Cotter's lab when she was with MCDB. Cotter has since moved to the University of North Carolina School of Medicine. Other co-authors include Claire t'Kint de Roodenbeke, research associate; Brandt R. Burgess, postdoctoral fellow; Bruce A. Braaten, research scientist; Alison M. Jones, technician; and Julia S. Webb, graduate student.

Antihydrogen Trapped for First Time

Physicists working at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland, have succeeded in trapping antihydrogen — the antimatter equivalent of the hydrogen atom — a milestone that could soon lead to experiments on a form of matter that disappeared mysteriously shortly after the birth of the universe 14 billion years ago.
An octupole magnet was critical to trapping antihydrogen atoms. A simple octupole magnetic field is produced by eight bar magnets in a plane with their north and south poles arrayed radially to create a magnetic minimum at the center. The antihydrogen atom is trapped in the center because of its magnetic moment, which itself is equivalent to a tiny bar magnet. The bar magnets above and below the octupole plane in this artist's rendition represent the mirror magnets that keep the atoms from squirting out the ends of the trap. (Katie Bertsche)
The first artificially produced low energy antihydrogen atoms — consisting of a positron, or antimatter electron, orbiting an antiproton nucleus — were created at CERN in 2002, but until now the atoms have struck normal matter and annihilated in a flash of gamma-rays within microseconds of creation.
The ALPHA (Antihydrogen Laser PHysics Apparatus) experiment, an international collaboration that includes physicists from the University of California, Berkeley, and Lawrence Berkeley National Laboratory (LBNL), has now trapped 38 antihydrogen atoms, each for more than one-tenth of a second.
While the number and lifetime are insufficient to threaten the Vatican — in the 2000 novel and 2009 movie "Angels & Demons," a hidden vat of potentially explosive antihydrogen was buried under St. Peter's Basilica in Rome — it is a starting point for learning new physics, the researchers said.
"We are getting close to the point at which we can do some classes of experiments on the properties of antihydrogen," said Joel Fajans, UC Berkeley professor of physics, LBNL faculty scientist and ALPHA team member. "Initially, these will be crude experiments to test CPT symmetry, but since no one has been able to make these types of measurements on antimatter atoms at all, it's a good start."
CPT (charge-parity-time) symmetry is the hypothesis that physical interactions look the same if you flip the charge of all particles, change their parity — that is, invert their coordinates in space — and reverse time. Any differences between antihydrogen and hydrogen, such as differences in their atomic spectrum, automatically violate CPT, overthrow today's "standard model" of particles and their interactions, and may explain why antimatter, created in equal amounts during the universe's birth, is largely absent today.
The team's results were published online Nov. 17 in advance of print publication in the British journal Nature.
Antimatter, first predicted by physicist Paul Dirac in 1931, has the opposite charge of normal matter and annihilates completely in a flash of energy upon interaction with normal matter. While astronomers see no evidence of significant antimatter annihilation in space, antimatter is produced during high-energy particle interactions on earth and in some decays of radioactive elements. UC Berkeley physicists Emilio Segre and Owen Chamberlain created antiprotons in the Bevatron accelerator at the Lawrence Radiation Laboratory, now LBNL, in 1955, confirming their existence and earning the scientists the 1959 Nobel Prize in physics.
Slow antihydrogen was produced at CERN in 2002 thanks to an antiproton decelerator that slowed antiprotons enough for them to be used in experiments that combined them with a cloud of positrons. The ATHENA experiment, a broad international collaboration, reported the first detection of cold antihydrogen, with the rival ATRAP experiment close behind.
The ATHENA experiment closed down in 2004, to be superseded by ALPHA, coordinated by Jeffrey Hangst of the University of Aarhus in Denmark. Since then, the ALPHA and ATRAP teams have competed to trap antihydrogen for experiments, in particular laser experiments to measure the antihydrogen spectrum (the color with which it glows), as well as gravity measurements. Before the recent results, the CERN experiments had produced, only fleetingly, tens of millions of antihydrogen atoms, Fajans said.
ALPHA's approach was to cool antiprotons and compress them into a matchstick-size cloud (20 millimeters long and 1.4 millimeters in diameter). Then, using autoresonance, a technique developed by UC Berkeley visiting professor Lazar Friedland and first explored in plasmas by Fajans and former UC Berkeley graduate student Erik Gilson, the cloud of cold, compressed antiprotons is nudged to overlap a like-size positron cloud, where the two particles mate to form antihydrogen.
Joel Fajans, professor of physics (Photo by Niels Madsen)
All this happens inside a magnetic bottle that traps the antihydrogen atoms. The magnetic trap is a specially configured magnetic field that Fajans and then-UC Berkeley undergraduate Andrea Schmidt first proposed, using an unusual and expensive octupole superconducting magnet to create a more stable plasma.
"For the moment, we keep antihydrogen atoms around for at least 172 milliseconds — about a sixth of a second — long enough to make sure we have trapped them," said colleague Jonathan Wurtele, UC Berkeley professor of physics and LBNL faculty scientist. Wurtele collaborated with LBNL visitor Katia Gomberoff, staff members Alex Friedman, David Grote and Jean-Luc Vay and with Fajans to simulate the new and original magnetic configurations.
Trapping antihydrogen isn't easy, Fajans said, because it is a neutral, or chargeless, particle. Magnetic bottles are generally used to trap charged particles, such as ionized atoms. These charged particles spiral along magnetic field lines until they encounter an electric field that bounces them back towards the center of the bottle.
Neutral antihydrogen, however, would normally be unaffected by these fields. But the team takes advantage of the tiny magnetic moment of the antihydrogen atom to trap it using a steeply increasing field — a so-called magnetic mirror — that reflects them backward toward the center. Because the magnetic moment is so small, the antihydrogen has to be very cold: less than about one-half degree above absolute zero (0.5 Kelvin). That means the team had to slow down the antiprotons by a factor of one hundred billion from their initial energy emerging from the antiproton decelerator.
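A rough order-of-magnitude estimate (ours, not a number quoted by the ALPHA collaboration) shows why the temperature requirement is so severe. The depth of a magnetic trap for a ground-state (anti)hydrogen atom is set by its magnetic moment, approximately one Bohr magneton, times the field variation across the trap:

    \frac{U}{k_B} = \frac{\mu_B \, \Delta B}{k_B} \approx \frac{(9.3\times10^{-24}\ \mathrm{J/T})(1\ \mathrm{T})}{1.38\times10^{-23}\ \mathrm{J/K}} \approx 0.7\ \mathrm{K},

so even a field well of order one tesla confines only atoms moving slowly enough to correspond to a fraction of a kelvin.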
Once trapped, the experimenters sweep out the lingering antiprotons with an electric field, then shut off the mirror fields and let the trapped antihydrogen atoms annihilate with normal matter. Surrounding detectors are sensitive to the charged pions that result from the proton-antiproton annihilation. Cosmic rays can also set off the detector, but their straight-line tracks can be easily distinguished, Fajans said. A few antiprotons could potentially remain in the trap, and their annihilations would look similar to those of antihydrogen, but the physicists' simulations show that such events can also be successfully distinguished from antihydrogen annihilations.
During August and September of 2010, the team detected an antihydrogen atom in 38 of the 335 cycles of antiproton injection. Given that their detector efficiency is about 50 percent, the team calculated that it captured approximately 80 of the several million antihydrogen atoms produced during these cycles. Experiments in 2009 turned up six candidate antihydrogen atoms, but they have not been confirmed.
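The arithmetic behind that estimate is simple: with a detector that registers roughly half of all annihilations,

    N_{\mathrm{trapped}} \approx \frac{N_{\mathrm{detected}}}{\epsilon} \approx \frac{38}{0.5} \approx 76,

consistent with the quoted figure of approximately 80 trapped atoms over the 335 cycles.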
ALPHA continues to detect antihydrogen atoms at an increasing rate as the experimenters learn how to better tune their experiment, Fajans said.
Of the 42 co-authors of the new paper, 10 are or were affiliated with UC Berkeley: Fajans; Wurtele; current graduate students Marcelo Baquero-Ruiz, Steve Chapman, Alex Povilus and Chukman So; former graduate student Will Bertsche; former sabbatical visitor Eli Sarid; and past visitors Daniel Silveira and Dirk van der Werf. Other UC Berkeley contributors to the research are former undergraduates Crystal Bray, Patrick Ko and Korana Burke, and former graduate student Erik Gilson. Other LBNL contributors include Alex Friedman, David Grote, Jean-Luc Vay and former visiting scientists Katia Gomberoff and Alon Deutsch.

Physicists Demonstrate a Four-Fold Quantum Memory

PASADENA, Calif. — Researchers at the California Institute of Technology (Caltech) have demonstrated quantum entanglement for a quantum state stored in four spatially distinct atomic memories.
Their work, described in the November 18 issue of the journal Nature, also demonstrated a quantum interface between the atomic memories—which represent something akin to a computer "hard drive" for entanglement—and four beams of light, thereby enabling the four-fold entanglement to be distributed by photons across quantum networks. The research represents an important achievement in quantum information science by extending the coherent control of entanglement from two to multiple (four) spatially separated physical systems of matter and light.
The proof-of-principle experiment, led by William L. Valentine Professor and professor of physics H. Jeff Kimble, helps to pave the way toward quantum networks. Similar to the Internet in our daily life, a quantum network is a quantum "web" composed of many interconnected quantum nodes, each of which is capable of rudimentary quantum logic operations (similar to the "AND" and "OR" gates in computers) utilizing "quantum transistors" and of storing the resulting quantum states in quantum memories. The quantum nodes are "wired" together by quantum channels that carry, for example, beams of photons to deliver quantum information from node to node. Such an interconnected quantum system could function as a quantum computer, or, as proposed by the late Caltech physicist Richard Feynman in the 1980s, as a "quantum simulator" for studying complex problems in physics.
Quantum entanglement is a quintessential feature of the quantum realm and involves correlations among components of the overall physical system that cannot be described by classical physics. Strangely, for an entangled quantum system, there exists no objective physical reality for the system's properties. Instead, an entangled system contains simultaneously multiple possibilities for its properties. Such an entangled system has been created and stored by the Caltech researchers.
Previously, Kimble's group entangled a pair of atomic quantum memories and coherently transferred the entangled photons into and out of the quantum memories (http://media.caltech.edu/press_releases/13115). For such two-component—or bipartite—entanglement, the subsystems are either entangled or not. But for multi-component entanglement with more than two subsystems—or multipartite entanglement—there are many possible ways to entangle the subsystems. For example, with four subsystems, all of the possible pair combinations could be bipartite entangled but not be entangled over all four components; alternatively, they could share a "global" quadripartite (four-part) entanglement.
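To make the combinatorics concrete (an illustrative aside, not a description of the specific state prepared in the experiment): four subsystems admit \binom{4}{2} = 6 distinct pairs that could each be bipartite entangled, while a canonical example of a state entangled globally across all four components is the four-mode W state

    |W\rangle = \tfrac{1}{2}\left( |1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle \right),

in which a single excitation is shared coherently among all four memories; no description of any one memory on its own captures the state.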
Hence, multipartite entanglement is accompanied by increased complexity in the system. While this makes the creation and characterization of these quantum states substantially more difficult, it also makes the entangled states more valuable for tasks in quantum information science.
The fluorescence from the four atomic ensembles. These ensembles are the four quantum memories that store an entangled quantum state.
[Credit: Nature/Caltech/Akihisa Goban]
To achieve multipartite entanglement, the Caltech team used lasers to cool four collections (or ensembles) of about one million Cesium atoms, separated by 1 millimeter and trapped in a magnetic field, to within a few hundred millionths of a degree above absolute zero. Each ensemble can have atoms with internal spins that are "up" or "down" (analogous to spinning tops) and that are collectively described by a "spin wave" for the respective ensemble. It is these spin waves that the Caltech researchers succeeded in entangling among the four atomic ensembles.
The technique employed by the Caltech team for creating quadripartite entanglement is an extension of the theoretical work of Luming Duan, Mikhail Lukin, Ignacio Cirac, and Peter Zoller in 2001 for the generation of bipartite entanglement by the act of quantum measurement. This kind of "measurement-induced" entanglement for two atomic ensembles was first achieved by the Caltech group in 2005 (http://media.caltech.edu/press_releases/12776).
In the current experiment, entanglement was "stored" in the four atomic ensembles for a variable time, and then "read out"—essentially, transferred—to four beams of light. To do this, the researchers shot four "read" lasers into the four, now-entangled, ensembles. The coherent arrangement of excitation amplitudes for the atoms in the ensembles, described by spin waves, enhances the matter–light interaction through a phenomenon known as superradiant emission.
"The emitted light from each atom in an ensemble constructively interferes with the light from other atoms in the forward direction, allowing us to transfer the spin wave excitations of the ensembles to single photons," says Akihisa Goban, a Caltech graduate student and coauthor of the paper. The researchers were therefore able to coherently move the quantum information from the individual sets of multipartite entangled atoms to four entangled beams of light, forming the bridge between matter and light that is necessary for quantum networks.
The Caltech team investigated the dynamics by which the multipartite entanglement decayed while stored in the atomic memories. "In the zoology of entangled states, our experiment illustrates how multipartite entangled spin waves can evolve into various subsets of the entangled systems over time, and sheds light on the intricacy and fragility of quantum entanglement in open quantum systems," says Caltech graduate student Kyung Soo Choi, the lead author of the Nature paper. The researchers suggest that the theoretical tools developed for their studies of the dynamics of entanglement decay could be applied for studying the entangled spin waves in quantum magnets.
Further possibilities of their experiment include the expansion of multipartite entanglement across quantum networks and quantum metrology. "Our work introduces new sets of experimental capabilities to generate, store, and transfer multipartite entanglement from matter to light in quantum networks," Choi explains. "It signifies the ever-increasing degree of exquisite quantum control to study and manipulate entangled states of matter and light."
In addition to Kimble, Choi, and Goban, the other authors of the paper, "Entanglement of spin waves among four quantum memories," are Scott Papp, a former postdoctoral scholar in the Caltech Center for the Physics of Information now at the National Institute of Standards and Technology in Boulder, Colorado, and Steven van Enk, a theoretical collaborator and professor of physics at the University of Oregon, and an associate of the Institute for Quantum Information at Caltech.
This research was funded by the National Science Foundation, the National Security Science and Engineering Faculty Fellowship program at the U.S. Department of Defense (DOD), the Northrop Grumman Corporation, and the Intelligence Advanced Research Projects Activity.

Pushing Black-hole Mergers to the Extreme: RIT Scientists Achieve 100:1 Mass Ratio

‘David and Goliath’ scenario explores extreme mass ratios (Goliath wins)

Scientists have simulated, for the first time, the merger of two black holes of vastly different sizes, with one mass 100 times larger than the other. This extreme mass ratio of 100:1 breaks a barrier in the fields of numerical relativity and gravitational wave astronomy.
Until now, the problem of simulating the merger of binary black holes with extreme size differences had remained an unexplored region of black-hole physics.
“Nature doesn’t collide black holes of equal masses,” says Carlos Lousto, associate professor of mathematical sciences at Rochester Institute of Technology and a member of the Center for Computational Relativity and Gravitation. “They have mass ratios of 1:3, 1:10, 1:100 or even 1:1 million. This puts us in a better situation for simulating realistic astrophysical scenarios and for predicting what observers should see and for telling them what to look for.
“Leaders in the field believed solving the 100:1 mass ratio problem would take five to 10 more years and significant advances in computational power. It was thought to be technically impossible.”
“These simulations were made possible by advances both in the scaling and performance of relativity computer codes on thousands of processors, and advances in our understanding of how gauge conditions can be modified to self-adapt to the vastly different scales in the problem,” adds Yosef Zlochower, assistant professor of mathematical sciences and a member of the center.
A paper announcing Lousto and Zlochower’s findings was submitted for publication in Physical Review Letters.
The only prior simulation describing an extreme merger of black holes focused on a scenario involving a 1:10 mass ratio. Those techniques could not be expanded to a bigger scale, Lousto explained. To handle the larger mass ratios, he and Zlochower developed numerical and analytical techniques based on the moving puncture approach—a breakthrough, created with Manuela Campanelli, director of the Center for Computational Relativity and Gravitation, that led to one of the first simulations of black holes on supercomputers in 2005.
The flexible techniques Lousto and Zlochower advanced for this scenario also translate to spinning binary black holes and to cases involving smaller mass ratios. These methods give the scientists ways to explore mass-ratio limits and to model observational effects.
Lousto and Zlochower used resources at the Texas Advanced Computing Center, home to the Ranger supercomputer, to process the massive computations. The computer, which has 70,000 processors, took nearly three months to complete the simulation describing the most extreme-mass-ratio merger of black holes to date.
“Their work is pushing the limit of what we can do today,” Campanelli says. “Now we have the tools to deal with a new system.”
Simulations like Lousto and Zlochower’s will help observational astronomers detect mergers of black holes with large size differentials using the future Advanced LIGO (Laser Interferometer Gravitational-wave Observatory) and the space probe LISA (Laser Interferometer Space Antenna). Simulations of black-hole mergers provide blueprints or templates for observational scientists attempting to discern signatures of massive collisions. Observing and measuring gravitational waves created when black holes coalesce could confirm a key prediction of Einstein’s general theory of relativity.