Astronomers have found the first evidence of a magnetic field in a jet of material ejected from a young star, a discovery that points toward future breakthroughs in understanding the nature of all types of cosmic jets and of the role of magnetic fields in star formation.
Such jets are launched by three types of objects: black holes at the cores of galaxies, smaller black holes or neutron stars consuming material from companion stars, and young stars still in the process of gathering mass from their surroundings. Magnetic fields had previously been detected in the jets of the first two types, but until now they had not been confirmed in the jets from young stars.
"Our discovery gives a strong hint that all three types of jets originate through a common process," said Carlos Carrasco-Gonzalez, of the Astrophysical Institute of Andalucia Spanish National Research Council (IAA-CSIC) and the National Autonomous University of Mexico (UNAM).
The astronomers used the National Science Foundation's Very Large Array (VLA) radio telescope to study a young star some 5,500 light-years from Earth, called IRAS 18162-2048. This star, possibly as massive as 10 Suns, is ejecting a jet 17 light-years long. Observing this object for 12 hours with the VLA, the scientists found that radio waves from the jet have a characteristic indicating they arose when fast-moving electrons interacted with magnetic fields. This characteristic, called polarization, gives a preferential alignment to the electric and magnetic fields of the radio waves.
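For reference, the polarization signature described here is characteristic of synchrotron radiation. In the textbook result (a general formula, not a number reported by this study), optically thin synchrotron emission from electrons with a power-law energy index p has a maximum fractional linear polarization of

\Pi_{\max} = \frac{p+1}{p+7/3},

which works out to roughly 70 percent for typical values of p near 2.5. Detecting even a fraction of that, after averaging along the line of sight, is enough to reveal an ordered magnetic field threading the jet.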
"We see for the first time that a jet from a young star shares this common characteristic with the other types of cosmic jets," said Luis Rodriguez, of UNAM.
The discovery, the astronomers say, may allow them to gain an improved understanding of the physics of the jets as well as of the role magnetic fields play in forming new stars. The jets from young stars, unlike the other types, emit radiation that provides information on the temperatures, speeds, and densities within the jets. This information, combined with the data on magnetic fields, can improve scientists' understanding of how such jets work.
"In the future, combining several types of observations could give us an overall picture of how magnetic fields affect the young star and all its surroundings. This would be a big advance in understanding the process of star formation," Rodriguez said.
Carrasco-Gonzalez and Rodriguez worked with Guillem Anglada and Mayra Osorio of the Astrophysical Institute of Andalucia, Josep Marti of the University of Jaen in Spain, and Jose Torrelles of the University of Barcelona. The scientists reported their findings in the November 26 edition of Science.
Science & Technology
Thursday, December 2, 2010
Wednesday, December 1, 2010
Making Stars: Studies Show How Cosmic Dust and Gas Shape Galaxy Evolution
This series of images shows a simulation of galaxy formation occurring early in the history of the universe. The simulation was performed by Fermilab’s Nickolay Gnedin and the University of Chicago’s Andrey Kravtsov at the National Center for Supercomputing Applications in Urbana–Champaign. Yellow dots are young stars. Blue fog shows the neutral gas. Red surface indicates molecular gas. The starry background has been added for aesthetic effect.
(Nick Gnedin)
Astronomers find cosmic dust annoying when it blocks their view of the heavens, but without it the universe would be devoid of stars. Cosmic dust is the indispensable ingredient for making stars and for understanding how primordial diffuse gas clouds assemble themselves into full–blown galaxies.
“Formation of galaxies is one of the biggest remaining questions in astrophysics,” said Andrey Kravtsov, associate professor in astronomy & astrophysics at the University of Chicago.
Astrophysicists are moving closer to answering that question, thanks to a combination of new observations and supercomputer simulations, including those conducted by Kravtsov and Nick Gnedin, a physicist at Fermi National Accelerator Laboratory.
Gnedin and Kravtsov published new results based on their simulations in the May 1, 2010 issue of The Astrophysical Journal, explaining why stars formed more slowly in the early history of the universe than they did much later. The paper quickly came to the attention of Robert C. Kennicutt Jr., director of the University of Cambridge’s Institute of Astronomy and co–discoverer of one of the key observational findings about star formation in galaxies, known as the Kennicutt–Schmidt relation.
In the June 3, 2010 issue of Nature, Kennicutt noted that the recent spate of observations and theoretical simulations bodes well for the future of astrophysics. Of their Astrophysical Journal paper, Kennicutt wrote, “Gnedin and Kravtsov take a significant step in unifying these observations and simulations, and provide a prime illustration of the recent progress in the subject as a whole.”
Star–formation law
Kennicutt’s star–formation law relates the amount of gas in galaxies in a given area to the rate at which it turns into stars over the same area. The relation has been quite useful when applied to galaxies observed late in the history of the universe, but recent observations by Arthur Wolfe of the University of California, San Diego, and Hsiao–Wen Chen, assistant professor in astronomy and astrophysics at UChicago, indicate that the relation fails for galaxies observed during the first two billion years following the big bang.
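In its commonly quoted form (the article does not state the relation explicitly, so this is the standard textbook expression), the Kennicutt–Schmidt relation is a power law linking the surface density of star formation to the surface density of gas:

\Sigma_{\mathrm{SFR}} \;\propto\; \Sigma_{\mathrm{gas}}^{N}, \qquad N \approx 1.4

The observations by Wolfe and Chen indicate that the earliest galaxies fall below this relation, forming far fewer stars than their gas content would predict.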
Gnedin and Kravtsov’s work successfully explains why. “What it shows is that at early stages of evolution, galaxies were much less efficient in converting their gas into stars,” Kravtsov said.
Stellar evolution leads to increasing abundance of dust, as stars produce elements heavier than helium, including carbon, oxygen, and iron, which are key elements in dust particles.
“Early on, galaxies didn’t have enough time to produce a lot of dust, and without dust it’s very difficult to form these stellar nurseries,” Kravtsov said. “They don’t convert the gas as efficiently as galaxies today, which are already quite dusty.”
The star-formation process begins when interstellar gas clouds become increasingly dense. At some point, in certain cold regions of these clouds, hydrogen atoms start combining to form molecules. A hydrogen molecule forms when two hydrogen atoms join. They do so inefficiently in empty space, but find each other far more readily on the surface of a cosmic dust particle.
“The biggest particles of cosmic dust are like the smallest particles of sand on good beaches in Hawaii,” Gnedin said.
These hydrogen molecules are fragile and easily destroyed by the intense ultraviolet light emitted from massive young stars. But in some galactic regions, dark clouds, so-called because of the dust they contain, form a layer that shields the hydrogen molecules from the destructive light of other stars.
Stellar nurseries
“I like to think about stars as being very bad parents, because they provide a bad environment for the next generation,” Gnedin joked. The dust therefore provides a protective environment for stellar nurseries, Kravtsov noted.
“There is a simple connection between the presence of dust in this diffuse gas and its ability to form stars, and that’s something that we modeled for the first time in these galaxy–formation simulations,” Kravtsov said. “It’s very plausible, but we don’t know for sure that that’s exactly what’s happening.”
The Gnedin–Kravtsov model also provides a natural explanation for why spiral galaxies predominantly fill the sky today, and why small galaxies form stars slowly and inefficiently.
“We usually see very thin disks, and those types of systems are very difficult to form in galaxy–formation simulations,” Kravtsov said.
That’s because astrophysicists have assumed that galaxies formed gradually through a series of collisions. The problem: simulations show that when galaxies merge, they form spheroidal structures that look more elliptical than spiral.
But early in the history of the universe, cosmic gas clouds were inefficient at making stars, so they collided before star formation occurred. “Those types of mergers can create a thin disk,” Kravtsov said.
As for small galaxies, their lack of dust production could account for their inefficient star formation. “All of these separate pieces of evidence that existed somehow all fell into one place,” Gnedin observed. “That’s what I like as a physicist because physics, in general, is an attempt to understand unifying principles behind different phenomena.”
More work remains to be done, however, with input from newly arrived postdoctoral fellows at UChicago and more simulations to be performed on even more powerful supercomputers. “That’s the next step,” Gnedin said.
UH Physicists Study Behavior of Enzyme Linked to Alzheimer's, Cancer
Margaret Cheung, assistant professor of physics at UH, and Antonios Samiotakis, a physics Ph.D. student, described their findings in a paper titled “Structure, function, and folding of phosphoglycerate kinase (PGK) are strongly perturbed by macromolecular crowding,” published in a recent issue of the journal Proceedings of the National Academy of Sciences, one of the world’s most-cited multidisciplinary scientific serials. The research was funded by a nearly $224,000 National Science Foundation grant in support of Samiotakis’ dissertation.
“Imagine you’re walking down the aisle toward an exit after a movie in a crowded theatre. The pace of your motion would be slowed down by the moving crowd and narrow space between the aisles. However, you can still maneuver your arm, stretch out and pat your friend on the shoulder who slept through the movie,” Cheung said. “This can be the same environment inside a crowded cell from the viewpoint of a protein, the workhorse of all living systems. Proteins always ‘talk’ to each other inside cells, and they pass information about what happens to the cell and how to respond promptly. Failure to do so may cause uncontrollable cell growth that leads to cancer or cause malfunction of a cell that leads to Alzheimer’s disease. Understanding a protein inside cells – in terms of structures and enzymatic activity – is important to shed light on preventing, managing or curing these diseases at a molecular level.”
Cheung, a theoretical physicist, and Martin Gruebele, her experimental collaborator at the University of Illinois at Urbana-Champaign, led a team that unlocked this mystery. Studying the PGK enzyme, Cheung used computer models that simulate the environment inside a cell. Biochemists typically study proteins in water, but such test tube research is limited because it cannot gauge how a protein actually functions inside a crowded cell, where it can interact with DNA, ribosomes and other molecules.
The PGK enzyme plays a key role in the process of glycolysis, which is the metabolic breakdown of glucose and other sugars that releases energy in the form of ATP. ATP molecules are basically like packets of fuel that power biological molecular motors. This conversion of food to energy is present in every organism, from yeast to humans. Malfunction of the glycolytic pathway has been linked to Alzheimer’s disease and cancer. Patients with reduced metabolic rates in the brain have been found to be at risk for Alzheimer’s disease, while out-of-control metabolic rates are believed to fuel the growth of malignant tumor cells.
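For context (this is standard biochemistry rather than a result of the UH study), PGK catalyzes the first ATP-generating step of glycolysis, transferring a phosphate group from 1,3-bisphosphoglycerate to ADP:

\text{1,3-bisphosphoglycerate} + \text{ADP} \;\rightleftharpoons\; \text{3-phosphoglycerate} + \text{ATP}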
Scientists had previously believed that a PGK enzyme shaped like Pac-Man had to undergo a dynamic hinge motion to perform its metabolic function. However, in the computer models mimicking the cell interior, Cheung found that the enzyme was already functioning in its closed Pac-Man state in its jam-packed surroundings. In fact, the enzyme was 15 times more active in the tight spaces of a crowded cell. This shows that in cell-like conditions a protein is more active and efficient than in a dilute environment, such as a test tube. This finding can drastically transform how scientists view proteins and their behavior when the environment of a cell is taken into account.
“This work deepens researchers’ understanding of how proteins function, or don’t function, in real cell conditions,” Samiotakis said. “By understanding the impact of a crowded cell on the structure and dynamics of proteins, researchers can design efficient therapeutic means that will work better inside cells, with the goal of preventing diseases and improving human health.”
Cheung and Samiotakis’ computer simulations – performed using the supercomputers at the Texas Learning and Computation Center (TLC2) – were coupled with in vitro experiments by Gruebele and his team. Access to TLC2’s high-performance computing resources factored significantly in the success of the work.
“Picture having a type of medicine that can precisely recognize and target a key that causes Alzheimer’s or cancer inside a crowded cell. Envision, then, the ability to switch a sick cell like this back to its healthy form of interaction at a molecular level,” Cheung said. “This may become a reality in the near future. Our lab at UH is working toward that vision.”
Bacteria Use ‘Toxic Darts' to Disable Each Other, According to UCSB Scientists
(Santa Barbara, Calif.) –– In nature, it's a dog-eat-dog world, even in the realm of bacteria. Competing bacteria use "toxic darts" to disable each other, according to a new study by UC Santa Barbara biologists. Their research is published in the journal Nature.
[Photo: Stephanie K. Aoki (front); Elie J. Diner, David Low, and Christopher Hayes (back, left to right). Credit: George Foulsham, Office of Public Affairs, UCSB]
[Illustration of contact dependent growth inhibition (CDI). Credit: Stephanie K. Aoki]
[Image: CDI+ E. coli bacteria (green) interacting with target bacteria lacking a CDI system (red). Credit: Stephanie K. Aoki]
"The discovery of toxic darts could eventually lead to new ways to control disease-causing pathogens," said Stephanie K. Aoki, first author and postdoctoral fellow in UCSB's Department of Molecular, Cellular, and Developmental Biology (MCDB). "This is important because resistance to antibiotics is on the rise."
Second author Elie J. Diner, a graduate student in biomolecular sciences and engineering, said: "First we need to learn the rules of this bacterial combat. It turns out that there are many ways to kill your neighbors; bacteria carry a wide range of toxic darts."
The scientists studied many bacterial species, including some important pathogens. They found that bacterial cells have stick-like proteins on their surfaces, with toxic dart tips. These darts are delivered to competing neighbor cells when the bacteria touch. This process of touching and injecting a toxic dart is called "contact dependent growth inhibition," or CDI.
Some targets have a biological shield. Bacteria protected by an immunity protein can resist the enemy's disabling toxic darts. This immunity protein is called "contact dependent growth inhibition immunity." The protein inactivates the toxic dart.
The UCSB team discovered a wide variety of potential toxic-tip proteins carried by bacterial cells –– nearly 50 distinct types have been identified so far, according to Christopher Hayes, co-author and associate professor in MCDB. Each bacterial cell must also have immunity to its own toxic dart; otherwise, carrying the ammunition would cause cell suicide.
Surprisingly, when a bacterial cell is attacked –– and has no immunity protein –– it may not die. However, it often ceases to grow. The cell is inactivated, inhibited from growth. Similarly, many antibiotics do not kill bacteria; they only prevent the bacteria from growing. Then the body flushes out the dormant cells.
Some toxic tips appear to function inside the targeted bacteria by cutting up enemy RNA so the cell can no longer synthesize protein and grow. Other toxic tips operate by cutting up enemy DNA, which prevents replication of the cell.
"Our data indicate that CDI systems are also present in a broad range of bacteria, including important plant and animal pathogens such as E. coli which causes urinary tract infections, and Yersinia species, including the causative agent of plague," said senior author David Low, professor of MCDB. "Bacteria may be using these systems to compete with one another in the soil, on plants, and in animals. It's an amazingly diverse world."
The team studied the bacterium responsible for soft rot in potatoes, called Dickeya dadantii. This bacterium also invades chicory leaves, chrysanthemums, and other vegetables and plants.
Funding for this research came from the National Science Foundation and the National Institutes of Health. The TriCounty Blood Bank also provided funding.
The research was performed in the Low and Hayes lab in MCDB. Important contributions were made by Stephen J. Poole, associate professor in MCDB, and by Peggy Cotter's lab when she was with MCDB. Cotter has since moved to the University of North Carolina School of Medicine. Other co-authors include Claire t'Kint de Roodenbeke, research associate; Brandt R. Burgess, postdoctoral fellow; Bruce A. Braaten, research scientist; Alison M. Jones, technician; and Julia S. Webb, graduate student.
Antihydrogen Trapped for First Time
BERKELEY — Physicists working at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland, have succeeded in trapping antihydrogen — the antimatter equivalent of the hydrogen atom — a milestone that could soon lead to experiments on a form of matter that disappeared mysteriously shortly after the birth of the universe 14 billion years ago.
The first artificially produced low energy antihydrogen atoms — consisting of a positron, or antimatter electron, orbiting an antiproton nucleus — were created at CERN in 2002, but until now the atoms have struck normal matter and annihilated in a flash of gamma-rays within microseconds of creation. The ALPHA (Antihydrogen Laser PHysics Apparatus) experiment, an international collaboration that includes physicists from the University of California, Berkeley, and Lawrence Berkeley National Laboratory (LBNL), has now trapped 38 antihydrogen atoms, each for more than one-tenth of a second.
While the number and lifetime are insufficient to threaten the Vatican — in the 2000 novel and 2009 movie "Angels & Demons," a hidden vat of potentially explosive antihydrogen was buried under St. Peter's Basilica in Rome — it is a starting point for learning new physics, the researchers said.
"We are getting close to the point at which we can do some classes of experiments on the properties of antihydrogen," said Joel Fajans, UC Berkeley professor of physics, LBNL faculty scientist and ALPHA team member. "Initially, these will be crude experiments to test CPT symmetry, but since no one has been able to make these types of measurements on antimatter atoms at all, it's a good start."
CPT (charge-parity-time) symmetry is the hypothesis that physical interactions look the same if you flip the charge of all particles, change their parity — that is, invert their coordinates in space — and reverse time. Any differences between antihydrogen and hydrogen, such as differences in their atomic spectrum, automatically violate CPT, overthrow today's "standard model" of particles and their interactions, and may explain why antimatter, created in equal amounts during the universe's birth, is largely absent today.
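Schematically (a generic statement of the symmetry, not a formula taken from the paper), the three operations act as follows, and exact CPT invariance requires that hydrogen and antihydrogen have identical masses and spectra:

C:\ \text{particle} \leftrightarrow \text{antiparticle}, \qquad P:\ \vec{x} \to -\vec{x}, \qquad T:\ t \to -t

\text{CPT invariance} \;\Rightarrow\; m_{\bar{\mathrm{H}}} = m_{\mathrm{H}} \quad \text{and} \quad \nu_{1S\text{-}2S}(\bar{\mathrm{H}}) = \nu_{1S\text{-}2S}(\mathrm{H})

Any measured difference in, for example, the 1S-2S transition frequency of trapped antihydrogen compared with ordinary hydrogen would therefore signal a violation of CPT.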
The team's results were published online Nov. 17 in advance of print publication in the British journal Nature.
Antimatter, first predicted by physicist Paul Dirac in 1931, has the opposite charge of normal matter and annihilates completely in a flash of energy upon interaction with normal matter. While astronomers see no evidence of significant antimatter annihilation in space, antimatter is produced during high-energy particle interactions on earth and in some decays of radioactive elements. UC Berkeley physicists Emilio Segre and Owen Chamberlain created antiprotons in the Bevatron accelerator at the Lawrence Radiation Laboratory, now LBNL, in 1955, confirming their existence and earning the scientists the 1959 Nobel Prize in physics.
Slow antihydrogen was produced at CERN in 2002 thanks to an antiproton decelerator that slowed antiprotons enough for them to be used in experiments that combined them with a cloud of positrons. The ATHENA experiment, a broad international collaboration, reported the first detection of cold antihydrogen, with the rival ATRAP experiment close behind.
The ATHENA experiment closed down in 2004, to be superseded by ALPHA, coordinated by Jeffrey Hangst of the University of Aarhus in Denmark. Since then, the ALPHA and ATRAP teams have competed to trap antihydrogen for experiments, in particular, laser experiments to measure the antihydrogen spectrum (the color with which it glows) — and gravity measurements. Before the recent results, the CERN experiments have produced — only fleetingly — tens of millions of antihydrogen atoms, Fajans said.
ALPHA's approach was to cool antiprotons and compress them into a matchstick-size cloud (20 millimeters long and 1.4 millimeters in diameter). Then, using autoresonance, a technique developed by UC Berkeley visiting professor Lazar Friedland and first explored in plasmas by Fajans and former UC Berkeley graduate student Erik Gilson, the cloud of cold, compressed antiprotons is nudged to overlap a like-size positron cloud, where the two particles mate to form antihydrogen.
"For the moment, we keep antihydrogen atoms around for at least 172 milliseconds — about a sixth of a second — long enough to make sure we have trapped them," said colleague Jonathan Wurtele, UC Berkeley professor of physics and LBNL faculty scientist. Wurtele collaborated with LBNL visitor Katia Gomberoff, staff members Alex Friedman, David Grote and Jean-Luc Vay and with Fajans to simulate the new and original magnetic configurations.
Trapping antihydrogen isn't easy, Fajans said, because it is a neutral, or chargeless, particle. Magnetic bottles are generally used to trap charged particles, such as ionized atoms. These charged particles spiral along magnetic field lines until they encounter an electric field that bounces them back towards the center of the bottle.
Neutral antihydrogen, however, would normally be unaffected by these fields. But the team takes advantage of the tiny magnetic moment of the antihydrogen atom, using a steeply increasing field — a so-called magnetic mirror — that reflects the atoms back toward the center of the trap. Because the magnetic moment is so small, the antihydrogen has to be very cold: less than about one-half degree above absolute zero (0.5 Kelvin). That means the team had to slow down the antiprotons by a factor of one hundred billion from their initial energy emerging from the antiproton decelerator.
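A back-of-the-envelope estimate shows why roughly half a kelvin is the relevant scale. The sketch below (illustrative only; the 0.75 tesla well depth is an assumed round number, not a figure quoted in this article) converts a magnetic well depth into an equivalent temperature for an atom whose magnetic moment is about one Bohr magneton:

# Illustrative estimate of the magnetic trap depth for ground-state
# (anti)hydrogen, whose magnetic moment is roughly one Bohr magneton.
MU_B = 9.274e-24   # Bohr magneton, in joules per tesla
K_B = 1.381e-23    # Boltzmann constant, in joules per kelvin

delta_B = 0.75     # assumed well depth of the magnetic trap, in tesla
trap_depth_K = MU_B * delta_B / K_B
print(f"Equivalent trap depth: {trap_depth_K:.2f} K")   # about 0.50 K

Only atoms with kinetic energy below this depth stay in the trap, which is why the antiprotons and positrons must be cooled so aggressively before antihydrogen is formed.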
Once trapped, the experimenters sweep out the lingering antiprotons with an electric field, then shut off the mirror fields and let the trapped antihydrogen atoms annihilate with normal matter. Surrounding detectors are sensitive to the charged pions that result from the proton-antiproton annihilation. Cosmic rays can also set off the detector, but their straight-line tracks can be easily distinguished, Fajans said. A few antiprotons could potentially remain in the trap, and their annihilations would look similar to those of antihydrogen, but the physicists' simulations show that such events can also be successfully distinguished from antihydrogen annihilations.
During August and September of 2010, the team detected an antihydrogen atom in 38 of the 335 cycles of antiproton injection. Given that their detector efficiency is about 50 percent, the team calculated that it captured approximately 80 of the several million antihydrogen atoms produced during these cycles. Experiments in 2009 turned up six candidate antihydrogen atoms, but they have not been confirmed.
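The arithmetic behind the quoted estimate is simple; the short check below just divides the detected events by the stated detector efficiency:

# Rough check of the number quoted in the text: 38 detections at ~50%
# detector efficiency imply roughly 80 trapped antihydrogen atoms.
detections = 38
detector_efficiency = 0.50
estimated_trapped = detections / detector_efficiency
print(estimated_trapped)   # 76.0, i.e. "approximately 80"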
ALPHA continues to detect antihydrogen atoms at an increasing rate as the experimenters learn how to better tune their experiment, Fajans said.
Of the 42 co-authors of the new paper, 10 are or were affiliated with UC Berkeley: Fajans; Wurtele; current graduate students Marcelo Baquero-Ruiz, Steve Chapman, Alex Povilus and Chukman So; former graduate student Will Bertsche; former sabbatical visitor Eli Sarid; and past visitors Daniel Silveira and Dirk van der Werf. Other UC Berkeley contributors to the research are former undergraduates Crystal Bray, Patrick Ko and Korana Burke, and former graduate student Erik Gilson. Other LBNL contributors include Alex Friedman, David Grote, Jean-Luc Vay and former visiting scientists Katia Gomberoff and Alon Deutsch.
Caltech Physicists Demonstrate a Four-Fold Quantum Memory
Caltech researchers have entangled the quantum states of four spatially separated atomic memories, creating a four-fold entangled state of matter. Their work, described in the November 18 issue of the journal Nature, also demonstrated a quantum interface between the atomic memories—which represent something akin to a computer "hard drive" for entanglement—and four beams of light, thereby enabling the four-fold entanglement to be distributed by photons across quantum networks. The research represents an important achievement in quantum information science by extending the coherent control of entanglement from two to multiple (four) spatially separated physical systems of matter and light.
The proof-of-principle experiment, led by William L. Valentine Professor and professor of physics H. Jeff Kimble, helps to pave the way toward quantum networks. Similar to the Internet in our daily life, a quantum network is a quantum "web" composed of many interconnected quantum nodes, each of which is capable of rudimentary quantum logic operations (similar to the "AND" and "OR" gates in computers) utilizing "quantum transistors" and of storing the resulting quantum states in quantum memories. The quantum nodes are "wired" together by quantum channels that carry, for example, beams of photons to deliver quantum information from node to node. Such an interconnected quantum system could function as a quantum computer, or, as proposed by the late Caltech physicist Richard Feynman in the 1980s, as a "quantum simulator" for studying complex problems in physics.
Quantum entanglement is a quintessential feature of the quantum realm and involves correlations among components of the overall physical system that cannot be described by classical physics. Strangely, for an entangled quantum system, there exists no objective physical reality for the system's properties. Instead, an entangled system contains simultaneously multiple possibilities for its properties. Such an entangled system has been created and stored by the Caltech researchers.
Previously, Kimble's group entangled a pair of atomic quantum memories and coherently transferred the entangled photons into and out of the quantum memories (http://media.caltech.edu/press_releases/13115). For such two-component—or bipartite—entanglement, the subsystems are either entangled or not. But for multi-component entanglement with more than two subsystems—or multipartite entanglement—there are many possible ways to entangle the subsystems. For example, with four subsystems, all of the possible pair combinations could be bipartite entangled but not be entangled over all four components; alternatively, they could share a "global" quadripartite (four-part) entanglement.
Hence, multipartite entanglement is accompanied by increased complexity in the system. While this makes the creation and characterization of these quantum states substantially more difficult, it also makes the entangled states more valuable for tasks in quantum information science.
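As a concrete illustration of the distinction (a standard textbook example, not the exact state reported in the paper), one form of genuine quadripartite entanglement is a four-mode W state, in which a single excitation is shared coherently among all four memories:

|W\rangle = \tfrac{1}{2}\bigl(|1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle\bigr)

Here |1000> denotes one spin-wave excitation in the first ensemble and none in the other three; no grouping of the four memories into independent pairs can reproduce such a state.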
The technique employed by the Caltech team for creating quadripartite entanglement is an extension of the theoretical work of Luming Duan, Mikhail Lukin, Ignacio Cirac, and Peter Zoller in 2001 for the generation of bipartite entanglement by the act of quantum measurement. This kind of "measurement-induced" entanglement for two atomic ensembles was first achieved by the Caltech group in 2005 (http://media.caltech.edu/press_releases/12776).
In the current experiment, entanglement was "stored" in the four atomic ensembles for a variable time, and then "read out"—essentially, transferred—to four beams of light. To do this, the researchers shot four "read" lasers into the four, now-entangled, ensembles. The coherent arrangement of excitation amplitudes for the atoms in the ensembles, described by spin waves, enhances the matter–light interaction through a phenomenon known as superradiant emission.
"The emitted light from each atom in an ensemble constructively interferes with the light from other atoms in the forward direction, allowing us to transfer the spin wave excitations of the ensembles to single photons," says Akihisa Goban, a Caltech graduate student and coauthor of the paper. The researchers were therefore able to coherently move the quantum information from the individual sets of multipartite entangled atoms to four entangled beams of light, forming the bridge between matter and light that is necessary for quantum networks.
The Caltech team investigated the dynamics by which the multipartite entanglement decayed while stored in the atomic memories. "In the zoology of entangled states, our experiment illustrates how multipartite entangled spin waves can evolve into various subsets of the entangled systems over time, and sheds light on the intricacy and fragility of quantum entanglement in open quantum systems," says Caltech graduate student Kyung Soo Choi, the lead author of the Nature paper. The researchers suggest that the theoretical tools developed for their studies of the dynamics of entanglement decay could be applied for studying the entangled spin waves in quantum magnets.
Further possibilities of their experiment include the expansion of multipartite entanglement across quantum networks and quantum metrology. "Our work introduces new sets of experimental capabilities to generate, store, and transfer multipartite entanglement from matter to light in quantum networks," Choi explains. "It signifies the ever-increasing degree of exquisite quantum control to study and manipulate entangled states of matter and light."
In addition to Kimble, Choi, and Goban, the other authors of the paper, "Entanglement of spin waves among four quantum memories," are Scott Papp, a former postdoctoral scholar in the Caltech Center for the Physics of Information now at the National Institute of Standards and Technology in Boulder, Colorado, and Steven van Enk, a theoretical collaborator and professor of physics at the University of Oregon, and an associate of the Institute for Quantum Information at Caltech.
This research was funded by the National Science Foundation, the National Security Science and Engineering Faculty Fellowship program at the U.S. Department of Defense (DOD), the Northrop Grumman Corporation, and the Intelligence Advanced Research Projects Activity.
Pushing Black-hole Mergers to the Extreme: RIT Scientists Achieve 100:1 Mass Ratio
‘David and Goliath’ scenario explores extreme mass ratios (Goliath wins)
Scientists at Rochester Institute of Technology have simulated the merger of two black holes whose masses differ by a factor of 100. Until now, the problem of simulating the merger of binary black holes with extreme size differences had remained an unexplored region of black-hole physics.
“Nature doesn’t collide black holes of equal masses,” says Carlos Lousto, associate professor of mathematical sciences at Rochester Institute of Technology and a member of the Center for Computational Relativity and Gravitation. “They have mass ratios of 1:3, 1:10, 1:100 or even 1:1 million. This puts us in a better situation for simulating realistic astrophysical scenarios and for predicting what observers should see and for telling them what to look for.
“Leaders in the field believed solving the 100:1 mass ratio problem would take five to 10 more years and significant advances in computational power. It was thought to be technically impossible.”
“These simulations were made possible by advances both in the scaling and performance of relativity computer codes on thousands of processors, and advances in our understanding of how gauge conditions can be modified to self-adapt to the vastly different scales in the problem,” adds Yosef Zlochower, assistant professor of mathematical sciences and a member of the center.
A paper announcing Lousto and Zlochower’s findings was submitted for publication in Physical Review Letters.
The only prior simulation describing an extreme merger of black holes focused on a scenario involving a 1:10 mass ratio. Those techniques could not be expanded to a bigger scale, Lousto explained. To handle the larger mass ratios, he and Zlochower developed numerical and analytical techniques based on the moving puncture approach—a breakthrough, created with Manuela Campanelli, director of the Center for Computational Relativity and Gravitation, that led to one of the first simulations of black holes on supercomputers in 2005.
The flexible techniques Lousto and Zlochower developed for this scenario also apply to spinning binary black holes and to cases involving smaller mass ratios. These methods give the scientists ways to explore mass-ratio limits and to model observational effects.
Lousto and Zlochower used resources at the Texas Advanced Computing Center, home to the Ranger supercomputer, to process the massive computations. The computer, which has 70,000 processors, took nearly three months to complete the simulation describing the most extreme-mass-ratio merger of black holes to date.
“Their work is pushing the limit of what we can do today,” Campanelli says. “Now we have the tools to deal with a new system.”
Simulations like Lousto and Zlochower’s will help observational astronomers detect mergers of black holes with large size differentials using the future Advanced LIGO (Laser Interferometer Gravitational-wave Observatory) and the space probe LISA (Laser Interferometer Space Antenna). Simulations of black-hole mergers provide blueprints or templates for observational scientists attempting to discern signatures of massive collisions. Observing and measuring gravitational waves created when black holes coalesce could confirm a key prediction of Einstein’s general theory of relativity.
Saturday, November 20, 2010
Nanogenerators Grow Strong Enough to Power Small Conventional Electronics
Blinking numbers on a liquid-crystal display (LCD) often indicate that a device’s clock needs resetting. But in the laboratory of Zhong Lin Wang at Georgia Tech, the blinking number on a small LCD signals the success of a five-year effort to power conventional electronic devices with nanoscale generators that harvest mechanical energy from the environment using an array of tiny nanowires.
In this case, the mechanical energy comes from compressing a nanogenerator between two fingers, but it could also come from a heartbeat, the pounding of a hiker’s shoe on a trail, the rustling of a shirt, or the vibration of a heavy machine. While these nanogenerators will never produce large amounts of electricity for conventional purposes, they could be used to power nanoscale and microscale devices – and even to recharge pacemakers or iPods.
Wang’s nanogenerators rely on the piezoelectric effect seen in crystalline materials such as zinc oxide, in which an electric charge potential is created when structures made from the material are flexed or compressed. By capturing and combining the charges from millions of these nanoscale zinc oxide wires, Wang and his research team can produce as much as three volts – and up to 300 nanoamps.
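To put those numbers in perspective, the sketch below simply multiplies them together, treating roughly 3 volts and 300 nanoamps as a simultaneous operating point (an assumption made here for illustration, not a measured figure from the papers):

# Illustrative peak power if the nanogenerator delivered ~3 V at ~300 nA
# at the same time (an assumed operating point, for scale only).
voltage_v = 3.0        # volts
current_a = 300e-9     # 300 nanoamps, in amperes
power_w = voltage_v * current_a
print(f"Peak power: {power_w * 1e6:.2f} microwatts")   # about 0.90 microwatts

That is far below what an iPod or pacemaker needs continuously, which is why the self-powered systems described later in the article rely on capacitors to accumulate charge between bursts of activity.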
“By simplifying our design, making it more robust and integrating the contributions from many more nanowires, we have successfully boosted the output of our nanogenerator enough to drive devices such as commercial liquid-crystal displays, light-emitting diodes and laser diodes,” said Wang, a Regents’ professor in Georgia Tech’s School of Materials Science and Engineering. “If we can sustain this rate of improvement, we will reach some true applications in healthcare devices, personal electronics, or environmental monitoring.”
Recent improvements in the nanogenerators, including a simpler fabrication technique, were reported online last week in the journal Nano Letters. Earlier papers in the same journal and in Nature Communications reported other advances for the work, which has been supported by the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Energy, the U.S. Air Force, and the National Science Foundation.
“We are interested in very small devices that can be used in applications such as health care, environmental monitoring and personal electronics,” said Wang. “How to power these devices is a critical issue.”
The earliest zinc oxide nanogenerators used arrays of nanowires grown on a rigid substrate and topped with a metal electrode. Later versions embedded both ends of the nanowires in polymer and produced power by simple flexing. Regardless of the configuration, the devices required careful growth of the nanowire arrays and painstaking assembly.
In the latest paper, Wang and his group members Youfan Hu, Yan Zhang, Chen Xu, Guang Zhu and Zetang Li reported on much simpler fabrication techniques. First, they grew arrays of a new type of nanowire that has a conical shape. These wires were cut from their growth substrate and placed into an alcohol solution.
The solution containing the nanowires was then dripped onto a thin metal electrode and a sheet of flexible polymer film. After the alcohol was allowed to dry, another layer was created. Multiple nanowire/polymer layers were built up into a kind of composite, using a process that Wang believes could be scaled up to industrial production.
When flexed, these nanowire sandwiches – which are about two centimeters by 1.5 centimeters – generated enough power to drive a commercial display borrowed from a pocket calculator.
Wang says the nanogenerators are now close to producing enough current for a self-powered system that might monitor the environment for a toxic gas, for instance, then broadcast a warning. The system would include capacitors able to store up the small charges until enough power was available to send out a burst of data.
While even the current nanogenerator output remains below the level required for such devices as iPods or cardiac pacemakers, Wang believes those levels will be reached within three to five years. The current nanogenerator, he notes, is nearly 100 times more powerful than what his group had developed just a year ago.
Writing in a separate paper published in October in the journal Nature Communications, group members Sheng Xu, Benjamin J. Hansen and Wang reported on a new technique for fabricating piezoelectric nanowires from lead zirconate titanate – also known as PZT. The material is already used industrially, but is difficult to grow because it requires temperatures of 650 degrees Celsius.
In the paper, Wang’s team reported the first chemical epitaxial growth of vertically-aligned single-crystal nanowire arrays of PZT on a variety of conductive and non-conductive substrates. They used a process known as hydrothermal decomposition, which took place at just 230 degrees Celsius.
With a rectifying circuit to convert alternating current to direct current, the researchers used the PZT nanogenerators to power a commercial laser diode, demonstrating an alternative materials system for Wang’s nanogenerator family. “This allows us the flexibility of choosing the best material and process for the given need, although the performance of PZT is not as good as zinc oxide for power generation,” he explained.
And in another paper published in Nano Letters, Wang and group members Guang Zhu, Rusen Yang and Sihong Wang reported on yet another advance boosting nanogenerator output. Their approach, called “scalable sweeping printing,” includes a two-step process of (1) transferring vertically-aligned zinc oxide nanowires to a polymer receiving substrate to form horizontal arrays and (2) applying parallel strip electrodes to connect all of the nanowires together.
Using a single layer of this structure, the researchers produced an open-circuit voltage of 2.03 volts and a peak output power density of approximately 11 milliwatts per cubic centimeter.
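For a sense of scale, the sketch below applies the reported power density to a hypothetical device volume (the 0.1 cubic-centimeter figure is an assumption for illustration, not a dimension from the paper):

# Scale the reported peak power density (~11 mW per cubic centimeter)
# to a hypothetical active volume of 0.1 cubic centimeters.
power_density_mw_per_cm3 = 11.0
device_volume_cm3 = 0.1            # assumed, for illustration only
peak_power_mw = power_density_mw_per_cm3 * device_volume_cm3
print(f"Peak power: {peak_power_mw:.1f} mW")   # about 1.1 mW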
“From when we got started in 2005 until today, we have dramatically improved the output of our nanogenerators,” Wang noted. “We are within the range of what’s needed. If we can drive these small components, I believe we will be able to power small systems in the near future. In the next five years, I hope to see this move into application.”
In this case, the mechanical energy comes from compressing a nanogenerator between two fingers, but it could also come from a heartbeat, the pounding of a hiker’s shoe on a trail, the rustling of a shirt, or the vibration of a heavy machine. While these nanogenerators will never produce large amounts of electricity for conventional purposes, they could be used to power nanoscale and microscale devices – and even to recharge pacemakers or iPods.
Wang’s nanogenerators rely on the piezoelectric effect seen in crystalline materials such as zinc oxide, in which an electric charge potential is created when structures made from the material are flexed or compressed. By capturing and combining the charges from millions of these nanoscale zinc oxide wires, Wang and his research team can produce as much as three volts – and up to 300 nanoamps.
“By simplifying our design, making it more robust and integrating the contributions from many more nanowires, we have successfully boosted the output of our nanogenerator enough to drive devices such as commercial liquid-crystal displays, light-emitting diodes and laser diodes,” said Wang, a Regents’ professor in Georgia Tech’s School of Materials Science and Engineering. “If we can sustain this rate of improvement, we will reach some true applications in healthcare devices, personal electronics, or environmental monitoring.”
Recent improvements in the nanogenerators, including a simpler fabrication technique, were reported online last week in the journal Nano Letters. Earlier papers in the same journal and in Nature Communications reported other advances for the work, which has been supported by the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Energy, the U.S. Air Force, and the National Science Foundation.
“We are interested in very small devices that can be used in applications such as health care, environmental monitoring and personal electronics,” said Wang. “How to power these devices is a critical issue.”
The earliest zinc oxide nanogenerators used arrays of nanowires grown on a rigid substrate and topped with a metal electrode. Later versions embedded both ends of the nanowires in polymer and produced power by simple flexing. Regardless of the configuration, the devices required careful growth of the nanowire arrays and painstaking assembly.
In the latest paper, Wang and his group members Youfan Hu, Yan Zhang, Chen Xu, Guang Zhu and Zetang Li reported on much simpler fabrication techniques. First, they grew arrays of a new type of nanowire that has a conical shape. These wires were cut from their growth substrate and placed into an alcohol solution.
The solution containing the nanowires was then dripped onto a thin metal electrode and a sheet of flexible polymer film. After the alcohol was allowed to dry, another layer was created. Multiple nanowire/polymer layers were built up into a kind of composite, using a process that Wang believes could be scaled up to industrial production.
When flexed, these nanowire sandwiches – which measure about 2 centimeters by 1.5 centimeters – generated enough power to drive a commercial display borrowed from a pocket calculator.
Wang says the nanogenerators are now close to producing enough current for a self-powered system that might monitor the environment for a toxic gas, for instance, then broadcast a warning. The system would include capacitors able to store up the small charges until enough power was available to send out a burst of data.
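To see why a capacitor-buffered design like this can work even at very low harvesting levels, consider a simple energy budget: the energy needed for one short radio burst, divided by the average harvested power, gives the waiting time between bursts. The sketch below works through that arithmetic with entirely hypothetical numbers for the harvested power, radio current, and burst length; none of these are figures from the Georgia Tech work.

```python
# Illustrative duty-cycle estimate for a capacitor-buffered, self-powered
# sensor of the kind described above. All values are hypothetical assumptions
# chosen to show the arithmetic, not measurements from the research.
harvested_power = 1e-6    # watts of average harvested power (assumed ~1 microwatt)
burst_voltage = 3.0       # volts required by a hypothetical low-power radio
burst_current = 10e-3     # amps drawn during transmission (assumed)
burst_duration = 0.005    # seconds per data burst (assumed)

burst_energy = burst_voltage * burst_current * burst_duration   # joules per burst
charge_time = burst_energy / harvested_power                    # seconds between bursts, ideal storage

print(f"Energy per burst: {burst_energy * 1e6:.0f} microjoules")
print(f"Time to accumulate that energy: {charge_time / 60:.1f} minutes")
```

Under these assumed numbers the system could send a warning burst every couple of minutes, which is why buffering tiny, intermittent charges in a capacitor makes such a sensor plausible.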
While even the current nanogenerator output remains below the level required for such devices as iPods or cardiac pacemakers, Wang believes those levels will be reached within three to five years. The current nanogenerator, he notes, is nearly 100 times more powerful than what his group had developed just a year ago.
Writing in a separate paper published in October in the journal Nature Communications, group members Sheng Xu, Benjamin J. Hansen and Wang reported on a new technique for fabricating piezoelectric nanowires from lead zirconate titanate – also known as PZT. The material is already used industrially, but is difficult to grow because it requires temperatures of 650 degrees Celsius.
In the paper, Wang’s team reported the first chemical epitaxial growth of vertically-aligned single-crystal nanowire arrays of PZT on a variety of conductive and non-conductive substrates. They used a process known as hydrothermal decomposition, which took place at just 230 degrees Celsius.
With a rectifying circuit to convert alternating current to direct current, the researchers used the PZT nanogenerators to power a commercial laser diode, demonstrating an alternative materials system for Wang’s nanogenerator family. “This allows us the flexibility of choosing the best material and process for the given need, although the performance of PZT is not as good as zinc oxide for power generation,” he explained.
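The role of the rectifying circuit can be illustrated with a small numerical sketch: a flexed piezoelectric generator produces an alternating output, an idealized full-wave bridge flips the negative half-cycles, and a simple RC stage smooths the result toward a usable DC level. The waveform, frequency, and time constant below are assumptions chosen for illustration, not the researchers' actual circuit values.

```python
import numpy as np

# Conceptual sketch of AC-to-DC conversion: the nanogenerator's alternating
# output is full-wave rectified (idealized, lossless bridge) and then smoothed
# with a first-order RC filter. All values are illustrative assumptions.
t = np.linspace(0.0, 1.0, 10_000)                # seconds
ac_output = 2.0 * np.sin(2 * np.pi * 10 * t)     # assumed 10 Hz alternating output, 2 V amplitude

rectified = np.abs(ac_output)                    # ideal full-wave (bridge) rectification

dt = t[1] - t[0]
tau = 0.05                                       # assumed RC time constant, seconds
smoothed = np.zeros_like(rectified)
for i in range(1, len(t)):
    smoothed[i] = smoothed[i - 1] + (rectified[i] - smoothed[i - 1]) * dt / tau

print(f"Mean DC level after smoothing: {smoothed[len(t)//2:].mean():.2f} V")
```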
And in another paper published in Nano Letters, Wang and group members Guang Zhu, Rusen Yang and Sihong Wang reported on yet another advance boosting nanogenerator output. Their approach, called “scalable sweeping printing,” includes a two-step process of (1) transferring vertically-aligned zinc oxide nanowires to a polymer receiving substrate to form horizontal arrays and (2) applying parallel strip electrodes to connect all of the nanowires together.
Using a single layer of this structure, the researchers produced an open-circuit voltage of 2.03 volts and a peak output power density of approximately 11 milliwatts per cubic centimeter.
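For a sense of what that power density implies, the available output scales with the device's active volume. The sketch below multiplies the reported peak density by a hypothetical device volume; the volume is an assumption, not a figure from the paper.

```python
# Simple scaling estimate from the reported peak power density of roughly
# 11 milliwatts per cubic centimeter. The device volume is hypothetical.
power_density = 11e-3    # watts per cubic centimeter (reported peak value)
device_volume = 0.1      # cubic centimeters (assumed thin, stamp-sized device)

peak_power = power_density * device_volume
print(f"Peak power from a {device_volume} cm^3 device: {peak_power * 1e3:.1f} mW")  # ~1.1 mW
```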
“From when we got started in 2005 until today, we have dramatically improved the output of our nanogenerators,” Wang noted. “We are within the range of what’s needed. If we can drive these small components, I believe we will be able to power small systems in the near future. In the next five years, I hope to see this move into application.”
Threshold Sea Surface Temperature for Hurricanes and Tropical Thunderstorms Is Rising
Scientists have long known that atmospheric convection in the form of hurricanes and tropical ocean thunderstorms tends to occur when sea surface temperature rises above a threshold. So how does ocean warming under global warming affect this threshold? If the threshold does not rise along with ocean temperatures, more frequent hurricanes could result. A new study by researchers at the International Pacific Research Center (IPRC) of the University of Hawaiʻi at Mānoa shows that this threshold sea surface temperature for convection is rising under global warming at the same rate as the average temperature of the tropical oceans. Their paper appears in the Advance Online Publication of Nature Geoscience.
To detect annual changes in the threshold sea surface temperature (SST) for convection, Nat Johnson, a postdoctoral fellow at IPRC, and Shang-Ping Xie, a professor of meteorology at IPRC and UH Mānoa, analyzed satellite estimates of tropical ocean rainfall spanning 30 years. They found that changes in the threshold temperature for convection closely follow changes in the average tropical sea surface temperature, both of which have been rising by approximately 0.1°C per decade.
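One common way to estimate such a threshold, sketched below, is to bin rainfall observations by the underlying SST and find the temperature at which mean rainfall first rises sharply. The synthetic data, bin width, and 2 mm/day rainfall cutoff in the sketch are assumptions for illustration only, and this is not necessarily the exact procedure used by Johnson and Xie.

```python
import numpy as np

# Minimal sketch of estimating a convective SST threshold from collocated
# rainfall and SST samples: bin rainfall by SST and locate the first bin whose
# mean rainfall exceeds a chosen cutoff. Data and cutoff are synthetic and
# purely illustrative.
rng = np.random.default_rng(0)
sst = rng.uniform(24.0, 32.0, 50_000)   # degrees C, synthetic samples
# Synthetic rainfall: heavy above an assumed 27.5 C threshold, light below it.
rain = np.where(sst > 27.5, rng.gamma(2.0, 3.0, sst.size), rng.gamma(1.1, 0.3, sst.size))

bins = np.arange(24.0, 32.5, 0.25)      # 0.25 C SST bins (assumed width)
bin_index = np.digitize(sst, bins)
mean_rain = np.array([rain[bin_index == i].mean() if np.any(bin_index == i) else 0.0
                      for i in range(1, len(bins))])

threshold_bin = np.argmax(mean_rain > 2.0)   # first bin exceeding 2 mm/day (assumed cutoff)
print(f"Estimated convective threshold: about {bins[threshold_bin]:.2f} C")
```

Repeating an analysis of this kind year by year would yield a time series of threshold temperatures that can be compared with the trend in tropical-mean SST, which is the comparison at the heart of the study.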
“The correspondence between the two time series is rather remarkable,” says lead author Johnson. “The convective threshold and average sea surface temperatures are so closely linked because of their relation with temperatures in the atmosphere extending several miles above the surface.”
The change in tropical upper atmospheric temperatures has been a controversial topic in recent years because of discrepancies between reported temperature trends from instruments and the expected trends under global warming according to global climate models. The measurements from instruments have shown less warming than expected in the upper atmosphere. The findings of Johnson and Xie, however, provide strong support that the tropical atmosphere is warming at a rate that is consistent with climate model simulations.
“This study is an exciting example of how applying our knowledge of physical processes in the tropical atmosphere can give us important information when direct measurements may have failed us,” Johnson notes.
The study notes further that global climate models project that the sea surface temperature threshold for convection will continue to rise in tandem with the tropical average sea surface temperature. If so, hurricanes and other forms of tropical convection will require warmer ocean surfaces for initiation over the next century.
Saturday, 23 October 2010
Popular Mechanics Breakthrough Awardees Announced
Artificial retina technology, seismic fuses and cell phone microscopes among the winners
Popular Mechanics has recognized three NSF-funded projects with innovation Breakthrough Awards: an artificial retina returning sight to those who have lost it; a system that uses "controlled rocking" and energy-dissipating fuses to help buildings withstand earthquakes; and an inexpensive medical microscope built for cell phones that allows doctors in rural villages to identify malaria-infected blood cells.
Those projects, along with 16 others, are featured in the November 2010 issue of Popular Mechanics. The awardees share the issue with the 2010 Leadership Award winner, J. Craig Venter, who is recognized for his breakthroughs in genomics over the last decade.
The artificial retina technology, funded for decades by several NSF biotechnology and transformational research programs, is an experimental system that helps individuals suffering from either macular degeneration or retinitis pigmentosa. Led by University of Southern California engineers Mark Humayun and Wentai Liu, the collaborative team--involving academia, government and industry--has been testing the system with more than 25 individuals, enabling them to progress from total blindness to being able to see shapes and navigate their local surroundings.
The controlled-rocking frame, developed as part of NSF's George E. Brown, Jr. Network for Earthquake Engineering Simulation program, uses replaceable, structural fuses that sacrifice themselves when an earthquake strikes, preserving the buildings they protect. Developed by a team led by Gregory Deierlein of Stanford University and Jerome Hajjar of Northeastern University, the system's self-centering frames and fuses help prevent post-earthquake displacement and are designed for fast and easy repair following a major earthquake, ensuring that an affected building can be reoccupied quickly.
The cellular-phone microscope, also funded by NSF's biotechnology programs, uses no lenses, lowering bulk and cost. The device--developed by NSF CAREER awardee Aydogan Ozcan of the University of California, Los Angeles--focuses LED light onto a slide positioned directly over a cell phone's camera, and after interpretation by software, can differentiate details so clearly that malaria-infected blood cells stand out from healthy ones.
NSF award abstracts contain summaries of Mark Humayun's NSF-funded projects, and the technology is described in a Science Nation video segment. There are also abstracts related to Wentai Liu's NSF-funded projects, to NSF NEES projects, and to Aydogan Ozcan's NSF-funded projects.