Tuesday, June 8, 2010

Recycling Without Sorting

Engineers Create Recycling Plant That Removes The Need To Sort

---------------------------------------------------------------------------------
Engineers use the term single-stream recycling for their plant that takes the sorting out of the public’s hands. Trucks dump an unsorted mess of paper, plastic, and metal onto a conveyor belt. Magnets, air blowers, and optical scanners separate the items, making it possible to recycle the different products.

Recycling programs have been underway for years, but Americans still lag behind on recycling efforts. The biggest reason -- it's inconvenient.

If you recycle, you know the drill ... separate ... separate ... separate ...

"In the early years, we've had to separate things fairly significantly," recycler Steve Snowden says.

Now, Snowden's separating days are over. A new program called "Single Stream Recycling" allows you to put all recycle items into one container.

"We like it quite a bit because it is so easy," Snowden says.

Leaving the rest of the work up to someone else!

"We do the separation to mechanically separate the materials here at the recycling facility," says Michael Taylor, environmental scientist from Waste Management Recycle America, who developed the system.

Fast, rotating devices separate newspaper and cardboard from cans and glass that tumble to another level. Magnets grab metal cans and optical scanners recognize plastic from other items and trigger blasts of air to blow plastic into another bin.

"Highly-engineered, highly complex mechanical systems do the work in a much more efficient, much more cost effective and much more significantly faster-paced environment," Taylor explains.

Environmental scientists have seen an increase in recycling of almost 30 percent among homeowners who use the system.

"We're much more liable to do something the easier it is to do it," Snowden says.

There are 27 Waste Management Recycle America "Single Stream Recycling" facilities in the country. There are also other recycling organizations that use Single Stream.

The Materials Research Society and the Optical Society of America contributed to the information contained in the TV portion of this report.

PROS AND CONS: Supporters of the plant say that because residents don't have to maintain separate containers for their glass, bottles, paper and plastic, more people are encouraged to participate in recycling. Residents can simply load all recyclables into a single container to be sorted at the plant. It also reduces costs for local governments, because less expensive trucks can be used if the waste material isn't sorted beforehand; trucks fitted with extra equipment to keep paper and other materials separate can cost $50,000 more each, for example. Critics say such a single-stream plant is inefficient and diminishes the usefulness of the materials collected, because it opts for speed to process the vast quantities of mixed recyclable waste it receives. There is more contamination as a result, which degrades the quality of what is sorted.

HOW IT WORKS: The plant uses a variety of sorting devices, including screens, magnets and ultraviolet optical scanners that trigger blasts of air to separate plastic bottles from the rest of the items, as well as spinning, star-shaped plastic devices that separate newspaper from cans and bottles by pushing the paper higher up an inclined screen so the heavier, smaller cans and bottles tumble down to a lower level. Glass is sorted by color and crushed, while plastic is shredded into small chips.
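The sorting sequence described above is essentially a cascade of physical classifiers, each diverting one material type before the rest of the stream moves on. The short Python sketch below is only an illustration of that decision logic; the item attributes and function names are hypothetical and do not describe any real plant control system.

# Illustrative sketch of single-stream sorting logic (hypothetical, simplified).
def sort_item(item):
    """Route one item to a bin, mimicking the stages described above."""
    if item["flat_and_light"]:              # spinning star screens lift paper and cardboard
        return "paper/cardboard"
    if item["magnetic"]:                     # magnets grab metal cans
        return "metal"
    if item["optical_scan"] == "plastic":    # optical scanner triggers a blast of air
        return "plastic"
    return "glass/other"                     # heavier items tumble to a lower level

stream = [
    {"flat_and_light": True,  "magnetic": False, "optical_scan": "paper"},
    {"flat_and_light": False, "magnetic": True,  "optical_scan": "metal"},
    {"flat_and_light": False, "magnetic": False, "optical_scan": "plastic"},
    {"flat_and_light": False, "magnetic": False, "optical_scan": "glass"},
]

for item in stream:
    print(sort_item(item))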

RECYCLING TIPS:

* Recycle all paper (junk mail, boxes, magazines, envelopes), bottles and cans (aluminum, glass, metal, and plastic).

* Buy products with little or no packaging, and buy the largest size you can use.

* Buy reusable products such as non-disposable cameras, electric razors, reusable lunch boxes, etc.

* Bring your own mug to the office or local coffee house for coffee; paper cups waste both money and landfill space.

* Buy products made with recycled materials.

* Reduce your junk mail by canceling unwanted catalogs.

* Bring your own reusable grocery sacks when shopping at the local supermarket.

Revolutionary New Desalination Membrane


Salt stacks at a desalination plant in Trapani, Sicily (Italy). (Credit: iStockphoto/Beat Bieler)

---------------------------------------------------

Hold the Salt: Engineers Develop Revolutionary New Desalination Membrane.

-----------------------------------------------------------------------------------------------------------
Researchers from the UCLA Henry Samueli School of Engineering and Applied Science have unveiled a new class of reverse-osmosis membranes for desalination that resist the clogging which typically occurs when seawater, brackish water and waste water are purified.

The highly permeable, surface-structured membrane can easily be incorporated into today's commercial production system, the researchers say, and could help to significantly reduce desalination operating costs. Their findings appear in the current issue of the Journal of Materials Chemistry.

Reverse-osmosis (RO) desalination uses high pressure to force polluted water through the pores of a membrane. While water molecules pass through the pores, mineral salt ions, bacteria and other impurities cannot. Over time, these particles build up on the membrane's surface, leading to clogging and membrane damage. This scaling and fouling places higher energy demands on the pumping system and necessitates costly cleanup and membrane replacement.
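The pressure an RO system must supply is set, at a minimum, by the osmotic pressure of the feed water. The short calculation below uses the standard van't Hoff relation with typical textbook values for seawater; these numbers are illustrative and are not taken from the UCLA study.

# Rough osmotic-pressure estimate for seawater (van't Hoff: pi = i * M * R * T).
i = 2          # dissociation factor for NaCl (Na+ and Cl-)
M = 0.6        # approximate molarity of NaCl in seawater, mol/L (assumed)
R = 0.08314    # gas constant, L*bar/(mol*K)
T = 298.0      # temperature, K

pi_bar = i * M * R * T
print(f"Osmotic pressure ~ {pi_bar:.0f} bar")   # ~30 bar; RO plants must pump well above this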

The new UCLA membrane's novel surface topography and chemistry allow it to avoid such drawbacks.

"Besides possessing high water permeability, the new membrane also shows high rejection characteristics and long-term stability," said Nancy H. Lin, a UCLA Engineering senior researcher and the study's lead author. "Structuring the membrane surface does not require a long reaction time, high reaction temperature or the use of a vacuum chamber. The anti-scaling property, which can increase membrane life and decrease operational costs, is superior to existing commercial membranes."

The new membrane was synthesized through a three-step process. First, researchers synthesized a polyamide thin-film composite membrane using conventional interfacial polymerization. Next, they activated the polyamide surface with atmospheric pressure plasma to create active sites on the surface. Finally, these active sites were used to initiate a graft polymerization reaction with a monomer solution to create a polymer "brush layer" on the polyamide surface. This graft polymerization is carried out for a specific period of time at a specific temperature in order to control the brush layer thickness and topography.

"In the early years, surface plasma treatment could only be accomplished in a vacuum chamber," said Yoram Cohen, UCLA professor of chemical and biomolecular engineering and a corresponding author of the study. "It wasn't practical for large-scale commercialization because thousands of meters of membranes could not be synthesized in a vacuum chamber. It's too costly. But now, with the advent of atmospheric pressure plasma, we don't even need to initiate the reaction chemically. It's as simple as brushing the surface with plasma, and it can be done for almost any surface."

In this new membrane, the polymer chains of the tethered brush layer are in constant motion. The chains are chemically anchored to the surface and are thus more thermally stable, relative to physically coated polymer films. Water flow also adds to the brush layer's movement, making it extremely difficult for bacteria and other colloidal matter to anchor to the surface of the membrane.

"If you've ever snorkeled, you'll know that sea kelp move back and forth with the current or water flow," Cohen said. "So imagine that you have this varied structure with continuous movement. Protein or bacteria need to be able to anchor to multiple spots on the membrane to attach themselves to the surface -- a task which is extremely difficult to attain due to the constant motion of the brush layer. The polymer chains protect and screen the membrane surface underneath."

Another factor in preventing adhesion is the surface charge of the membrane. Cohen's team is able to choose the chemistry of the brush layer to impart the desired surface charge, enabling the membrane to repel molecules of an opposite charge.

The team's next step is to expand the membrane synthesis into a much larger, continuous process and to optimize the new membrane's performance for different water sources.

"We want to be able to narrow down and create a membrane selection system for different water sources that have different fouling tendencies," Lin said. "With such knowledge, one can optimize the membrane surface properties with different polymer brush layers to delay or prevent the onset of membrane fouling and scaling.

"The cost of desalination will therefore decrease when we reduce the cost of chemicals [used for membrane cleaning], as well as process operation [for membrane replacement]. Desalination can become more economical and used as a viable alternate water resource."

Cohen's team, in collaboration with the UCLA Water Technology Research (WaTeR) Center, is currently carrying out specific studies to test the performance of the new membrane's fouling properties under field conditions.

"We work directly with industry and water agencies on everything that we're doing here in water technology," Cohen said. "The reason for this is simple: If we are to accelerate the transfer of knowledge technology from the university to the real world, where those solutions are needed, we have to make sure we address the real issues. This also provides our students with a tremendous opportunity to work with industry, government and local agencies."

A paper providing a preliminary introduction to the new membrane also appeared in the Journal of Membrane Science last month.

The original article was written by Wileen Wong Kromhout.


Journal References:

  1. Nancy H. Lin, Myung-man Kim, Gregory T. Lewis, Yoram Cohen. Polymer surface nano-structuring of reverse osmosis membranes for fouling resistance and improved flux performance. Journal of Materials Chemistry, 2010; DOI: 10.1039/b926918e
  2. Myung-man Kim, Nancy H. Lin, Gregory T. Lewis, Yoram Cohen. Surface nano-structuring of reverse osmosis membranes via atmospheric pressure plasma-induced graft polymerization for reduction of mineral scaling propensity. Journal of Membrane Science, 2010; DOI: 10.1016/j.memsci.2010.02.053

Life on Titan?

Life on Titan? New Clues to What's Consuming Hydrogen, Acetylene on Saturn's Moon.

-----------------------------------------------------------------------------------------------------------------
Two new papers based on data from NASA's Cassini spacecraft scrutinize the complex chemical activity on the surface of Saturn's moon Titan. While non-biological chemistry offers one possible explanation, some scientists believe these chemical signatures bolster the argument for a primitive, exotic form of life or precursor to life on Titan's surface. According to one theory put forth by astrobiologists, the signatures fulfill two important conditions necessary for a hypothesized "methane-based life."

One key finding comes from a paper online now in the journal Icarus that shows hydrogen molecules flowing down through Titan's atmosphere and disappearing at the surface. Another paper online now in the Journal of Geophysical Research maps hydrocarbons on the Titan surface and finds a lack of acetylene.

This lack of acetylene is important because that chemical would likely be the best energy source for a methane-based life on Titan, said Chris McKay, an astrobiologist at NASA Ames Research Center, Moffett Field, Calif., who proposed a set of conditions necessary for this kind of methane-based life on Titan in 2005. One interpretation of the acetylene data is that the hydrocarbon is being consumed as food. But McKay said the flow of hydrogen is even more critical because all of their proposed mechanisms involved the consumption of hydrogen.

"We suggested hydrogen consumption because it's the obvious gas for life to consume on Titan, similar to the way we consume oxygen on Earth," McKay said. "If these signs do turn out to be a sign of life, it would be doubly exciting because it would represent a second form of life independent from water-based life on Earth."

To date, methane-based life forms are only hypothetical. Scientists have not yet detected this form of life anywhere, though there are liquid-water-based microbes on Earth that thrive on methane or produce it as a waste product. On Titan, where temperatures are around 90 Kelvin (minus 290 degrees Fahrenheit), a methane-based organism would have to use a substance that is liquid as its medium for living processes, but not water itself. Water is frozen solid on Titan's surface and much too cold to support life as we know it.

The list of liquid candidates is very short: liquid methane and related molecules like ethane. While liquid water is widely regarded as necessary for life, there has been extensive speculation published in the scientific literature that this is not a strict requirement.

The new hydrogen findings are consistent with conditions that could produce an exotic, methane-based life form, but do not definitively prove its existence, said Darrell Strobel, a Cassini interdisciplinary scientist based at Johns Hopkins University in Baltimore, Md., who authored the paper on hydrogen.

Strobel, who studies the upper atmospheres of Saturn and Titan, analyzed data from Cassini's composite infrared spectrometer and ion and neutral mass spectrometer in his new paper. The paper describes densities of hydrogen in different parts of the atmosphere and the surface. Previous models had predicted that hydrogen molecules, a byproduct of ultraviolet sunlight breaking apart acetylene and methane molecules in the upper atmosphere, should be distributed fairly evenly throughout the atmospheric layers.

Strobel found a disparity in the hydrogen densities that leads to a flow down to the surface at a rate of about 10,000 trillion trillion hydrogen molecules per second. This is about the same rate at which the molecules escape out of the upper atmosphere.
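For scale, the quoted rate of 10,000 trillion trillion molecules per second (1e28 per second) can be converted into a mass flux. The conversion below is plain unit arithmetic, not an additional result from the paper.

# Convert the quoted hydrogen flux into mass per second (unit arithmetic only).
AVOGADRO = 6.022e23          # molecules per mole
MOLAR_MASS_H2 = 2.016e-3     # kg per mole of molecular hydrogen

flux_molecules = 1e28        # "10,000 trillion trillion" molecules per second
flux_mol = flux_molecules / AVOGADRO
flux_kg = flux_mol * MOLAR_MASS_H2

print(f"{flux_mol:.2e} mol/s ~ {flux_kg:.0f} kg of H2 per second")  # roughly 33 kg/s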

"It's as if you have a hose and you're squirting hydrogen onto the ground, but it's disappearing," Strobel said. "I didn't expect this result, because molecular hydrogen is extremely chemically inert in the atmosphere, very light and buoyant. It should 'float' to the top of the atmosphere and escape."

Strobel said it is not likely that hydrogen is being stored in a cave or underground space on Titan. The Titan surface is also so cold that a chemical process that involved a catalyst would be needed to convert hydrogen molecules and acetylene back to methane, even though overall there would be a net release of energy. The energy barrier could be overcome if there were an unknown mineral acting as the catalyst on Titan's surface.

The hydrocarbon mapping research, led by Roger Clark, a Cassini team scientist based at the U.S. Geological Survey in Denver, examines data from Cassini's visual and infrared mapping spectrometer. Scientists had expected the sun's interactions with chemicals in the atmosphere to produce acetylene that falls down to coat the Titan surface. But Cassini detected no acetylene on the surface.

In addition Cassini's spectrometer detected an absence of water ice on the Titan surface, but loads of benzene and another material, which appears to be an organic compound that scientists have not yet been able to identify. The findings lead scientists to believe that the organic compounds are shellacking over the water ice that makes up Titan's bedrock with a film of hydrocarbons at least a few millimeters to centimeters thick, but possibly much deeper in some places. The ice remains covered up even as liquid methane and ethane flow all over Titan's surface and fill up lakes and seas much as liquid water does on Earth.

"Titan's atmospheric chemistry is cranking out organic compounds that rain down on the surface so fast that even as streams of liquid methane and ethane at the surface wash the organics off, the ice gets quickly covered again," Clark said. "All that implies Titan is a dynamic place where organic chemistry is happening now."

The absence of detectable acetylene on the Titan surface can very well have a non-biological explanation, said Mark Allen, principal investigator with the NASA Astrobiology Institute Titan team. Allen is based at NASA's Jet Propulsion Laboratory in Pasadena, Calif. Allen said one possibility is that sunlight or cosmic rays are transforming the acetylene in icy aerosols in the atmosphere into more complex molecules that would fall to the ground with no acetylene signature.

"Scientific conservatism suggests that a biological explanation should be the last choice after all non-biological explanations are addressed," Allen said. "We have a lot of work to do to rule out possible non-biological explanations. It is more likely that a chemical process, without biology, can explain these results -- for example, reactions involving mineral catalysts."

"These new results are surprising and exciting," said Linda Spilker, Cassini project scientist at JPL. "Cassini has many more flybys of Titan that might help us sort out just what is happening at the surface."

The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. JPL, a division of the California Institute of Technology, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter was designed, developed and assembled at JPL.

For more information about the Cassini-Huygens mission visit http://www.nasa.gov/cassini and http://saturn.jpl.nasa.gov



Journal References:

  1. Darrell F. Strobel. Molecular hydrogen in Titan's atmosphere: Implications of the measured tropospheric and thermospheric mole fractions. Icarus, 2010; DOI: 10.1016/j.icarus.2010.03.003
  2. Clark, R. N., J. M. Curchin, J. W. Barnes, R. Jaumann, L. Soderblom, D. P. Cruikshank, R. H. Brown, S. Rodriguez, J. Lunine, K. Stephan, T. M. Hoefen, S. Le Mouelic, C. Sotin, K. H. Baines, B. J. Buratti, and P. D. Nicholson. Detection and Mapping of Hydrocarbon Deposits on Titan. Journal of Geophysical Research, 2010; (in press) DOI: 10.1029/2009JE003369


NASA Rover Finds Clue to Mars' Past and Environment for Life


Lengthy detective work with data NASA's Mars Exploration Rover Spirit collected in late 2005 has confirmed that an outcrop called "Comanche" contains a mineral indicating that a past environment was wet and non-acidic, possibly favorable to life. (Credit: NASA/JPL-Caltech/Cornell University)
------------------------------------------------------------------------------------------------------------------
Rocks examined by NASA's Spirit Mars Rover hold evidence of a wet, non-acidic ancient environment that may have been favorable for life. Confirming this mineral clue took four years of analysis by several scientists.

An outcrop that Spirit examined in late 2005 revealed high concentrations of carbonate, which originates in wet, near-neutral conditions, but dissolves in acid. The ancient water indicated by this find was not acidic.

NASA's rovers have found other evidence of formerly wet Martian environments. However the data for those environments indicate conditions that may have been acidic. In other cases, the conditions were definitely acidic, and therefore less favorable as habitats for life.

Laboratory tests helped confirm the carbonate identification. The findings were published June 3 by the journal Science.

"This is one of the most significant findings by the rovers," said Steve Squyres of Cornell University in Ithaca, N.Y. Squyres is principal investigator for the Mars twin rovers, Spirit and Opportunity, and a co-author of the new report. "A substantial carbonate deposit in a Mars outcrop tells us that conditions that could have been quite favorable for life were present at one time in that place. "

Spirit inspected rock outcrops, including one scientists called Comanche, along the rover's route from the top of Husband Hill to the vicinity of the Home Plate plateau, which Spirit has studied since 2006. Magnesium iron carbonate makes up about one-fourth of the measured volume in Comanche. That is a tenfold higher concentration than any previously identified for carbonate in a Martian rock.

"We used detective work combining results from three spectrometers to lock this down," said Dick Morris, lead author of the report and a member of a rover science team at NASA's Johnson Space Center in Houston."The instruments gave us multiple, interlocking ways of confirming the magnesium iron carbonate, with a good handle on how much there is."

Massive carbonate deposits on Mars have been sought for years without much success. Numerous channels apparently carved by flows of liquid water on ancient Mars suggest the planet was formerly warmer, thanks to greenhouse warming from a thicker atmosphere than exists now. The ancient, dense Martian atmosphere was probably rich in carbon dioxide, because that gas makes up nearly all the modern, very thin atmosphere.

It is important to determine where most of the carbon dioxide went. Some theorize it departed to space. Others hypothesize that it left the atmosphere by the mixing of carbon dioxide with water under conditions that led to forming carbonate minerals. That possibility, plus finding small amounts of carbonate in meteorites that originated from Mars, led to expectations in the 1990s that carbonate would be abundant on Mars. However, mineral-mapping spectrometers on orbiters since then have found evidence of localized carbonate deposits in only one area, plus small amounts distributed globally in Martian dust.

Morris suspected iron-bearing carbonate at Comanche years ago from inspection of the rock with Spirit's Moessbauer spectrometer, which provides information about iron-containing minerals. Confirming evidence from other instruments emerged slowly. The instrument with the best capability for detecting carbonates, the Miniature Thermal Emission Spectrometer, had its mirror contaminated with dust earlier in 2005, during a wind event that also cleaned Spirit's solar panels.

"It was like looking through dirty glasses," said Steve Ruff of Arizona State University in Tempe, Ariz., another co-author of the report. "We could tell there was something very different about Comanche compared with other outcrops we had seen, but we couldn't tell what it was until we developed a correction method to account for the dust on the mirror."

Spirit's Alpha Particle X-ray Spectrometer instrument detected a high concentration of light elements, a group including carbon and oxygen, that helped quantify the carbonate content.

The rovers landed on Mars in January 2004 for missions originally planned to last three months. Spirit has been out of communication since March 22 and is in a low-power hibernation status during Martian winter. Opportunity is making steady progress toward a large crater, Endeavour, which is about seven miles away.

NASA's Jet Propulsion Laboratory, Pasadena, manages the Mars Exploration Rovers for the agency's Science Mission Directorate in Washington. For more information about the rovers, visit: http://www.nasa.gov/

------------------------------------------------------------------------------------------------------------------

Journal Reference:

  1. R. V. Morris, S. W. Ruff, R. Gellert, D. W. Ming, R. E. Arvidson, B. C. Clark, D. C. Golden, K. Siebach, G. Klingelhofer, C. Schroder, I. Fleischer, A. S. Yen, S. W. Squyres. Identification of Carbonate-Rich Outcrops on Mars by the Spirit Rover. Science, 2010; DOI: 10.1126/science.1189667

Earth and Moon Formed Later Than Previously Thought?

Earth and Moon Formed Later Than Previously Thought, New Research Suggests

Astronomers have theorized that the planet Earth and the Moon were created as the result of a giant collision between two planets the size of Mars and Venus. Until now, the collision was thought to have happened when the solar system was 30 million years old, or approximately 4,537 million years ago. But new research shows that Earth and the Moon must have formed much later -- perhaps up to 150 million years after the formation of the solar system.

The research results have been published in the scientific journal Earth and Planetary Science Letters.

"We have determined the ages of the Earth and the Moon using tungsten isotopes, which can reveal whether the iron cores and their stone surfaces have been mixed together during the collision," explains Tais W. Dahl, who did the research as his thesis project in geophysics at the Niels Bohr Institute at the University of Copenhagen in collaboration with professor David J. Stevenson from the California Institute of Technology (Caltech).

Turbulent collisions

The planets in the solar system are thought to have been created by collisions between small dwarf planets orbiting the newborn Sun. In the collisions, the small planets melted together and formed larger and larger planets. Earth and the Moon are believed to be the result of a gigantic collision between two planets the size of Mars and Venus. The two planets collided at a time when both had a core of metal (iron) and a surrounding mantle of silicates (rock). But when did it happen, and how? The collision took place in less than 24 hours, and the temperature of the Earth was so high (7,000 °C) that both rock and metal must have melted in the turbulent collision. But were the rock mass and iron mass also mixed together?

Until recently it was believed that the rock and iron mixed completely during the planet formation and so the conclusion was that the Moon was formed when the solar system was 30 million years old or approximately 4,537 million years ago. But new research shows something completely different.

Dating with radioactive elements

The age of Earth and the Moon can be dated by examining the presence of certain elements in Earth's mantle. Hafnium-182 is a radioactive substance, which decays and is converted into the isotope tungsten-182. The two elements have markedly different chemical properties and while the tungsten isotopes prefer to bond with metal, hafnium prefers to bond to silicates, i.e. rock.

It takes 50-60 million years for all hafnium to decay and be converted into tungsten, and during the Moon forming collision nearly all the metal sank into Earth's core. But did all the tungsten go into the core?
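The 50-60 million year figure follows from the half-life of hafnium-182, which is roughly 8.9 million years (a standard value, not stated in the article): after five to seven half-lives only a percent or so of the original isotope is left. A quick check, assuming that half-life:

# Fraction of hafnium-182 remaining after a given time,
# assuming a half-life of ~8.9 million years (standard value, not from the article).
half_life_myr = 8.9

def fraction_remaining(t_myr):
    return 0.5 ** (t_myr / half_life_myr)

for t in (30, 50, 60):
    print(f"after {t} Myr: {fraction_remaining(t) * 100:.1f}% of the Hf-182 is left")
# ~9.7% after 30 Myr, ~2.0% after 50 Myr, ~0.9% after 60 Myr --
# so by 50-60 Myr the Hf-182 clock has effectively stopped ticking.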

"We have studied to what degree metal and rock mix together during the planet forming collisions. Using dynamic model calculations of the turbulent mixing of the liquid rock and iron masses we have found that tungsten isotopes from the Earth's early formation remain in the rocky mantle," explains Dahl.

The new studies imply that the moon forming collision occurred after all of the hafnium had decayed completely into tungsten.

"Our results show that metal core and rock are unable to emulsify in these collisions between planets that are greater than 10 kilometres in diameter and therefore that most of the Earth's iron core (80-99 %) did not remove tungsten from the rocky material in the mantle during formation," explains Dahl.

The result of the research means that Earth and the Moon must have been formed much later than previously thought -- that is to say not 30 million years after the formation of the solar system 4,567 million years ago but perhaps up to 150 million years after the formation of the solar system.



Journal Reference:

  1. Tais W. Dahl, David J. Stevenson. Turbulent mixing of metal and silicate during planet accretion -- And interpretation of the Hf-W chronometer. Earth and Planetary Science Letters, 2010; 295 (1-2): 177 DOI: 10.1016/j.epsl.2010.03.038


Monday, June 7, 2010

Early Earth Haze Likely Provided Ultraviolet Shield for Planet



A new study by CU-Boulder researchers indicates a thick organic haze shrouding Earth several billion years ago was similar to the one now hovering over Saturn's largest moon, Titan (above) and may have protected primordial life on the planet from damaging ultraviolet radiation. (Credit: NASA/JPL/Space Science Institute.)
------------------------------------------------------------------------------------------------------------------
A new study shows a thick organic haze that enshrouded early Earth several billion years ago may have been similar to the haze now hovering above Saturn's largest moon, Titan, and would have protected primordial life on the planet from the damaging effects of ultraviolet radiation.

The University of Colorado at Boulder scientists believe the haze was made up primarily of methane and nitrogen chemical byproducts created by reactions with light, said CU-Boulder doctoral student Eric Wolf, lead study author. Not only would the haze have shielded early Earth from UV light, it would have allowed gases like ammonia to build up, causing greenhouse warming and perhaps helping to prevent the planet from freezing over.

The researchers determined the haze of hydrocarbon aerosols was probably made up of fluffy, microscopic particles shaped somewhat like cottonwood tree seeds that would have blocked UV but allowed visible light through to Earth's surface, Wolf said.

Prior to the new study, the prevailing scientific view was that the atmosphere of Earth some 3 billion years ago was primarily made up of nitrogen gas with lesser amounts of carbon dioxide, methane, hydrogen and water vapor, said Wolf. "Since climate models show early Earth could not have been warmed by atmospheric carbon dioxide alone because of its low levels, other greenhouse gases must have been involved. We think the most logical explanation is methane, which may have been pumped into the atmosphere by early life that was metabolizing it."

A paper on the subject by Wolf and CU-Boulder Professor Brian Toon of the atmospheric and oceanic sciences department is being published in the June 4 issue of Science.

The output of the sun during the Archean period some 3.8 billion to 2.5 billion years ago is thought to have been 20 percent to 30 percent fainter than today, said Wolf. But previous work by other scientists produced geological and biological evidence that indicates Earth's surface temperatures were as warm or warmer than today.
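The scale of this "faint young Sun" problem can be illustrated with a simple radiative-equilibrium estimate. The calculation below takes today's solar constant and albedo and simply dims the Sun by 25 percent; the specific numbers are illustrative assumptions, not values from the Wolf and Toon paper.

# Illustrative equilibrium-temperature estimate for a fainter young Sun.
# T_eq = (S * (1 - A) / (4 * sigma)) ** 0.25, with no greenhouse effect included.
sigma = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
albedo = 0.3         # assumed planetary albedo (modern Earth value)
S_modern = 1361.0    # modern solar constant, W m^-2

def t_eq(solar_constant):
    return (solar_constant * (1 - albedo) / (4 * sigma)) ** 0.25

print(f"modern Sun   : {t_eq(S_modern):.0f} K")           # ~255 K
print(f"75% as bright: {t_eq(0.75 * S_modern):.0f} K")     # ~237 K, well below freezing --
# hence the need for extra greenhouse warming (methane, ammonia) on the early Earth.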

As part of the early Earth study, Wolf and Toon used a climate model from the National Center for Atmospheric Research and concepts from lab studies by another CU group led by chemistry and biochemistry Professor Margaret Tolbert that help explain the odd haze of Titan, the second largest moon in the solar system and the largest moon of Saturn. Titan came under intense study following the arrival of the Cassini spacecraft at Saturn in 2004, allowing scientists to determine it was the only moon in the solar system with both a dense atmosphere and liquid on its surface.

Previous modeling efforts of early Earth haze by other scientists assumed that aerosol particulates making up the haze were spherical, said Wolf. But the spherical shape does not adequately account for the optical properties of the haze that blanketed the planet.

Lab simulations helped researchers conclude that the Earth haze likely was made up of irregular "chains" of aggregate particles with greater geometrical sizes than spheres, similar to the shape of aerosols believed to populate Titan's thick atmosphere. Wolf said the aggregate aerosol particulates are believed to be fragmented geometric shapes known as fractals that can be split into parts.

During the Archean period there was no ozone layer in Earth's atmosphere to protect life on the planet, said Wolf. "The UV shielding methane haze over early Earth we are suggesting not only would have protected Earth's surface, it would have protected the atmospheric gases below it -- including the powerful greenhouse gas, ammonia -- that would have played a significant role in keeping the early Earth warm."

CU-Boulder researchers estimated there were roughly 100 million tons of haze produced annually in the atmosphere of early Earth during the Archean. "If this was the case, an early Earth atmosphere literally would have been dripping organic material into the oceans, providing manna from heaven for the earliest life to sustain itself," Toon said.

"Methane is the key to make this climate model run, so one of our goals now is to pin down where and how it originated," said Toon. If Earth's earliest organisms didn't produce the methane, it may have been generated by the release of gasses during volcanic eruptions either before or after life first arose -- a hypothesis that will requires further study, he said.

The new CU-Boulder study will likely re-ignite interest in a controversial experiment by scientists Stanley Miller and Harold Urey in the 1950s in which methane, ammonia, nitrogen and water were combined in a test tube. After Miller and Urey ran an electrical current through the mixture to simulate the effects of lightning or powerful UV radiation, the result was the creation of a small pool of amino acids -- the building blocks of life.

Toon said the theory of early Earth being shrouded by a gaseous blanket containing methane and ammonia first arose in the 1960s and was subsequently discarded by scientists. In the 1970s and 1980s some scientists suggested the early Earth atmosphere was similar to those on Mars and Venus with lots of carbon dioxide, another theory that eventually went by the wayside. Since CO2-rich atmospheres do not produce organic molecules easily, scientists began looking in deep-sea volcanic vents and at wayward asteroids to explain early Earth life.

A 1997 paper by the late Carl Sagan of Cornell University and Christopher Chyba, then at the University of Arizona, proposed that an organic aerosol shield in early Earth's atmosphere would have protected the ammonia wafting beneath it, allowing heating to occur at Earth's surface. But the authors proposed that the haze particles were spherical, rather than the irregular aggregate particles Wolf and Toon suggest, and they did not consider methane to be the driver of the system, which eventually sank that theory.

"We still have a lot of research to do in order to refine our new view of early Earth," said Wolf. "But we think this paper solves a number of problems associated with the haze that existed over early Earth and likely played a role in triggering or at least supporting the earliest life on the planet."

From space, early Earth probably looked much like Titan looks today, said Toon. "It would have been shrouded by a reddish haze that would have been difficult to see through, and the ocean probably was a greenish color caused by dissolved iron in the oceans. It wasn't a blue planet by any means."

NASA's Planetary Atmosphere Program funded the study.



Journal Reference:

  1. E. T. Wolf and O. B. Toon. Fractal Organic Hazes Provided an Ultraviolet Shield for Early Earth. Science, 4 June 2010: Vol. 328. no. 5983, pp. 1266 - 1268 DOI: 10.1126/science.1183260

First Exploration of a Sub-Glacial Antarctic Lake


Subglacial Lakes, Antarctica. (Credit: NASA map by Robert Simmon, based on data from the Radarsat Antarctic Mapping Project, Ted Scambos, Chris Shuman, and Martin J. Siegert / Courtesy of NASA's Earth Observatory -- http://earthobservatory.nasa.gov)

------------------------------------------------------------------------------------------------------------------

Drilling Into the Unknown: First Exploration of a Sub-Glacial Antarctic Lake Is a Major Step Closer

Scientists have located the ideal drill site for the first ever exploration of an Antarctic sub-glacial lake, a development that is likely to facilitate a revolution in climate-change research and which may lead to the discovery of life-forms cut off from the main line of evolution for millions of years.

In a paper published in Geophysical Research Letters this week, scientists from Northumbria University, the University of Edinburgh and the British Antarctic Survey have revealed the optimal drill site for exploring Lake Ellsworth -- a sub-glacial lake, comparable in size to England's Lake Windermere, that is covered by three kilometers of ice.

No one has yet drilled into an Antarctic sub-glacial lake. But microbiologists believe that such lakes could harbor uniquely adapted life-forms cut off from other lines of evolution. Paleoclimatologists also suggest that sediments on the lake floors could contain records of ice sheets and climate history that would revolutionize research into global warming.

In order to access the lake water and the undisturbed sediment containing the climate record, it is essential to drill in the right place.

The optimal drilling site has to avoid possible areas of in-coming water that would disturb the sediment, as well as areas of so-called basal freezing -- where lake water freezes to the underside of the ice. It also has to avoid any concentrations of trapped gases which could rush up the bore hole to cause a potentially dangerous blowout at the surface.

The Scientific Committee on Antarctic Research identified Lake Ellsworth as an excellent candidate for the first drill site.

Dr John Woodward, from Northumbria University's School of Applied Sciences, commented: "The location provides a deep water column for sampling and reduces the risk from possible basal-freezing mechanisms. It optimizes the chances of recovering an undisturbed, continuous sedimentary sequence from the lake floor, and minimizes the potential for trapped gases to gain entry to the borehole."

Dr Andy Smith of the British Antarctic Survey added: "This is an eagerly anticipated result -- the final piece of the jigsaw that we need to plan the exploration of Lake Ellsworth. That exploration can now go ahead at full speed."

To locate the optimal drill site, the team had to conduct the first detailed characterization of the physiography of a sub-glacial lake. Between 2007 and 2009, the lake was the subject of a ground-based geophysics campaign involving ice-penetrating radar to measure ice thickness, seismic surveys to calculate lake water depths, and flow measurements to determine how the ice sheet moves over the underlying lake.
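Ice-penetrating radar yields thickness from the two-way travel time of an echo off the lake surface, using the radio-wave speed in glacier ice (about 168 m per microsecond, a standard value not quoted in the article). A minimal sketch of that conversion:

# Ice thickness from radar two-way travel time (standard relation; values are illustrative).
WAVE_SPEED_ICE = 168.0e6   # radio-wave speed in ice, m/s (~168 m per microsecond)

def ice_thickness(two_way_time_s):
    """Thickness = one-way travel time * wave speed in ice."""
    return 0.5 * two_way_time_s * WAVE_SPEED_ICE

# For ice roughly 3 km thick, like the cover over Lake Ellsworth,
# the echo returns after about 36 microseconds:
print(f"{ice_thickness(36e-6) / 1000:.1f} km")   # ~3.0 km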

The climactic stage in the project will take place in the 2012-13 Antarctic summer when the Lake Ellsworth Consortium will use the data in this paper to access a sub-glacial lake for the first time.

Professor Martin Siegert, of the University of Edinburgh's School of GeoSciences, said: "Pinpointing the perfect spot from which to access the sub-glacial lake helps us to find out all we can about this interesting and pristine environment, without the risk of contaminating it."

------------------------------------------------------------------------------------------------------------------

Journal Reference:

  1. J. Woodward, A. M. Smith, N. Ross, M. Thoma, H. F. J. Corr, E. C. King, M. A. King, K. Grosfeld, M. Tranter, M. J. Siegert. Location for direct access to subglacial Lake Ellsworth: An assessment of geophysical data and modeling. Geophysical Research Letters, 2010; 37 (11): L11501 DOI: 10.1029/2010GL042884


Could Life Survive on Mars? Yes, Expert Says

Researchers at McGill's department of natural resources, the National Research Council of Canada, the University of Toronto and the SETI Institute have discovered that methane-eating bacteria survive in a highly unusual spring located on Axel Heiberg Island in Canada's extreme North. Dr. Lyle Whyte, a McGill University microbiologist, explains that the Lost Hammer spring supports microbial life, that the spring is similar to possible past or present springs on Mars, and that therefore they too could support life.

The subzero water is so salty that it doesn't freeze despite the cold, and it has no consumable oxygen in it. There are, however, big bubbles of methane that come to the surface, which had provoked the researchers' curiosity as to whether the gas was being produced geologically or biologically and whether anything could survive in this extreme hypersaline subzero environment. "We were surprised that we did not find methanogenic bacteria that produce methane at Lost Hammer," Whyte said, "but we did find other very unique anaerobic organisms -- organisms that survive by essentially eating methane and probably breathing sulfate instead of oxygen."

It has recently been discovered that there are methane and frozen water on Mars. Photos taken by the Mars Orbiter show the formation of new gullies, but no one knows what is forming them. One answer could be that there are springs like Lost Hammer on Mars.

"The point of the research is that it doesn't matter where the methane is coming from," Whyte explained. "If you have a situation where you have very cold salty water, it could potentially support a microbial community, even in that extreme harsh environment." While Axel Heiberg is already an inhospitable place, the Lost Hammer spring is even more so. "There are places on Mars where the temperature reaches relatively warm -10 to 0 degrees and perhaps even above 0ºC," Whyte said, "and on Axel Heiberg it gets down to -50, easy. The Lost Hammer spring is the most extreme subzero and salty environment we've found. This site also provides a model of how a methane seep could form in a frozen world like Mars, providing a potential mechanism for the recently discovered Martian methane plumes."

The research was published in the International Society for Microbial Ecology Journal and received logistical support from McGill University's Arctic Research Station and the Canadian Polar Continental Shelf Project. Funding was received from NASA, the Natural Sciences and Engineering Research Council of Canada, and the Canadian Space Agency. Additional funding for student research was provided by the Department of Indian and Northern Affairs, and the Fonds Québécois de la Recherche sur la Nature et les Technologies.



Journal Reference:

  1. Thomas D Niederberger, Nancy N Perreault, Stephanie Tille, Barbara Sherwood Lollar, Georges Lacrampe-Couloume, Dale Andersen, Charles W Greer, Wayne Pollard, Lyle G Whyte. Microbial characterization of a subzero, hypersaline methane seep in the Canadian High Arctic. The ISME Journal, 2010; DOI: 10.1038/ismej.2010.57

Yangtze River’s Ancient Origins Revealed

The Yangtze River in China is 40 million years older than was previously thought, according to new research.

A study of minerals by a team led by Durham University reveals that the Yangtze River began to cut the Three Gorges area around 45 million years ago, making it much older than previously believed.

The Yangtze River, the third-longest river in the world, has played a central role in the development of Chinese culture, and the Three Gorges, which separate the Sichuan Basin in the west from the lowlands of central and eastern China to the east, have particular historical, cultural, and geomorphological significance.

Without the transport pathway created by the Three Gorges, south-western China -- including the rich agricultural area of Sichuan Province, known as China's 'rice bowl' -- would have remained cut off from the rest of the country by the otherwise inaccessible mountains that surround the region.

The new findings, published in Geology, show that sediments from the Three Gorges, previously analysed by researchers and dated as being only 1-2 million years old, must have been deposited long after the Three Gorges were cut.

The research team, led by Dr Alexander Densmore from the Institute of Hazard, Risk and Resilience, Durham University, determined the onset of incision in the Three Gorges by looking at the cooling history of apatite mineral grains in the granite that underlies the Three Gorges Dam at Sandouping in Hubei Province. The granite containing these apatite grains cooled as the river cut down through it.

Prior work on the origin of the Three Gorges has shown that the Yangtze River most likely began as a set of small, non-descript streams that drained both west and east, out of a range of low mountains in central China.

It was argued that the merger of these streams gave rise to the progressive development of a much larger, east-flowing river system that became the Yangtze River. Many scientists agreed that the most likely point of merger of the streams was in the Three Gorges area.

Dr Alex Densmore said: "The fact that erosion had removed all of the evidence of the old, pre-merger river courses made dating the river particularly difficult.

"Prior attempts to date the Three Gorges placed their age at only 1-2 million years but this was based on sediments found within the gorges. If this were the case, the river would have had to have been carved into the rocks very quickly, and this would have required extremely high incision rates.

"We used the number of damage trails in the mineral apatite to tell us when the rocks were cooled below a particular temperature and thus when gorge incision began."

The research team, which involved scientists from Durham, Chengdu and Victoria universities, and researchers in the UK and Germany, found that samples near the gorges showed that cooling began about 45 million years ago, whereas samples taken farther away from the river show no evidence of that cooling. Thus, the cooling must have been caused by gorge incision, rather than by more regional erosion, according to the scientists.

Dr Densmore added: "The Yangtze River is much older than previously thought and extremely high incision rates were not required to create the distinctive gorges. It formed slowly, over a much longer time-span."

The research, funded by the Swiss Federal Institute of Technology, also helped to explain a mysterious episode of erosion that affected the eastern part of the Tibetan Plateau. Around 45 million years ago, sediment shed from the rising Tibetan Plateau to the west was trapped in a large basin upstream of the future Three Gorges area.

Dr Densmore said: "As the Gorges were cut, they acted as like a plughole in a giant bathtub, allowing that sediment to be eroded and flushed down into the growing Yangtze River and out into the East China Sea, depositing the sediment in the lowland areas of eastern China."



Journal Reference:

  1. N. J. Richardson, A. L. Densmore, D. Seward, M. Wipf, L. Yong. Did incision of the Three Gorges begin in the Eocene? Geology, 2010; 38 (6): 551 DOI: 10.1130/G30527.1

Oil Spill Puts Commercially Significant Cold-Water Reefs in Peril


Thousands of barrels of oil are leaking out of the Deepwater Horizon site each day. The oil ascends from depths of approximately 1,502 m (4,928 ft), but not all of it reaches the sea surface. The stratified seawater of the Gulf of Mexico captures or slows the ascent of the oil, and the addition of dispersants near the oil source produces tiny droplets that float for a considerable time in the water column and may never reach the surface.

According to Drs. Gregor Eberli, Mark Grasmueck, and Ph.D. candidate Thiago Correa of the Marine Geology & Geophysics division of the University of Miami (UM), the oil that remains in suspension in the water column and creates plumes poses a serious risk for the planktonic and benthic (sea floor) life throughout the region, including the deep-sea reefs they study.

"The deep water communities within the Gulf of Mexico and in the Straits of Florida are well hidden from us, but they include many species of cold-water corals that live in water at depths of 600 -- 1500 m. (1969 -4921 ft.) in waters as cold as 3° Celsius (37.4°F)," said Eberli. "Unlike their more familiar shallow-water counterparts, these corals do not live in symbiosis with unicellular algae called zooxanthellae, but are animals that feed on organic matter floating through the water column. We know that most of the food consumed by the cold-water corals is produced in the surface waters and eventually sinks down to the corals."

The large plumes being created by the oil spill, some of which are reported to be several miles long, sit in the water column situated between this source of food and these deep-water corals. As organic material sinks through the water column it passes through the oil plumes and is contaminated by micron-sized oil droplets.

"It is most likely that the delicate cold-water corals are not able to digest these oil-laden food particles and will perish in large numbers," said Eberli. "We are especially concerned because the migrating oil plumes have the potential to destroy or greatly diminish these deep-sea coral communities as they are carried by the currents. These corals are important because they are the foundation of a diverse ecosystem that at last count includes over 1,300 marine species, according to Dr. Thomas Hourigan at NOAA."

There is also a danger that these plumes are carried by the Loop Current from the Gulf of Mexico to the Atlantic Ocean. Deep-sea coral ecosystems are common at numerous sites from the eastern Gulf of Mexico through the Straits of Florida and northward to the Blake Plateau off North Carolina. This distribution matches the path of the Loop Current that forms from the water masses in the Gulf of Mexico, and enters the Straits of Florida to form the Florida Current and further north the Gulf Stream.

Particularly vulnerable to disturbance are deep-sea fish that form part of this ecosystem because of their late maturation, extreme longevity, low fecundity and slow growth. Deep-water coral reefs in Florida waters are the habitat of the economically valuable grouper, snapper and amberjack. These and other species inhabit hundreds of deep-water coral reefs off the coast of Florida at depths of about 300 to 915 m (1,000 to 3,000 ft), which were explored by Dr. John Reed from Harbor Branch Oceanographic Institute some thirty years ago. This includes the 59,500 sq. km (~23,000 sq. mi.) of deep-water reefs off the east coast of Florida, which is now proposed as the Oculina Habitat Area of Particular Concern.

There is no known technique for removing these oil plumes from the water column, and as a consequence the hidden oases of corals in the deep, cold waters of the Gulf of Mexico, the Straits of Florida and the Blake Plateau are in severe danger of being decimated by this oil spill.


Story Source:
The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Rosenstiel School of Marine & Atmospheric Science, University of Miami.

Thursday, June 3, 2010

Jumping Genes


Schematic drawing of a composite (or complex) transposon. It is composed of two insertion sequences, which encode genes for transposition, flanking structural genes that encode various proteins or enzymes, e.g. for antibiotic or viral resistance. (Credit: Jacek FH / Courtesy of Wikipedia)

---------------------------------------------------------------------------------------------------------------------


Jumping Genes Provide Extensive 'Raw Material' for Evolution, Study Finds

Using high-throughput sequencing to map the locations of a common type of jumping gene within a person's entire genome, researchers at the University of Pennsylvania School of Medicine found extensive variation in these locations among the individuals they studied, further underscoring the role of these errant genes in maintaining genetic diversity.

The investigators determined that any two people's genomes differ at roughly 285 of the 1,139 sites studied. These results were found by scanning the genomes of 25 individuals, 15 of whom were unrelated. They report their findings online in Genome Research.

Jumping genes -- also called transposons -- are sequences of DNA that move to different areas of the genome within the same cell.

"The significance of this work is that there is much more diversity in our genome due to insertions by this family of transposons than previously thought," said co-author Haig Kazazian, MD, Seymour Gray Professor of Molecular Medicine, in the Penn Department of Genetics. "This movement of genetic material provides the raw material of genetic evolution, and it doesn't take into account the insertions that we believe occur outside of the sperm and egg cells studied in this project."

Transposons are a source of diversity within a species' gene pool, with implications on many levels. For example, slight changes in genes help organisms adapt and survive in new environments, and populations with genetic diversity are less vulnerable to disease and problems with reproduction.

Insertions into certain spots in the genome can also cause cell function to go awry, so understanding their placement and variation in the human genome is important for a fundamental understanding of disease. Insertions can cause many genetic diseases, such as hemophilia and Duchenne muscular dystrophy, and may play a role in the development of cancer.

Retrotransposons are the major class of jumping genes, with the L1 family the most abundant type of retrotransposon in the human genome. L1s comprise about 17 percent of the human genome and were the subject of the Genome Research paper.

Eventually, continuous jumping by retrotransposons expands the size of the human genome and may cause shuffling of genetic content. For example, when retrotransposons jump, they may take portions of nearby gene sequences with them, inserting these where they land, and thereby allowing for the creation of new genes. Even otherwise unremarkable insertions of L1s may cause significant effects on nearby genes, such as lowering their expression.

Retrotransposons move by having their DNA sequence transcribed or copied to RNA, and then instead of the genetic code being translated directly into a protein sequence, the RNA is copied back to DNA by the retrotransposon's own enzyme called reverse transcriptase. This new DNA is then inserted back into the genome. The process of copying is similar to that of retroviruses, such as HIV, leading scientists to speculate that retroviruses were derived from retrotransposons.
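Mechanistically, this is a "copy and paste": the element is copied via an RNA intermediate and a new DNA copy is inserted elsewhere, so the original stays put and the genome grows with each event. The toy simulation below illustrates that behaviour on a string genome; it is purely schematic and is not a model of real L1 biology.

import random

# Toy "copy and paste" retrotransposition on a string genome (purely schematic).
def retrotranspose(genome, element):
    """Copy one occurrence of `element` and insert the copy at a random position."""
    if element not in genome:
        return genome                      # nothing to copy
    site = random.randrange(len(genome) + 1)
    # The source copy stays in place; only a new copy is inserted, so the genome grows.
    return genome[:site] + element + genome[site:]

random.seed(1)
genome = "AAAA" + "L1ELEMENT" + "TTTT"
for _ in range(3):
    genome = retrotranspose(genome, "L1ELEMENT")
    print(len(genome), genome)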

The team also found that, on average, 1 in 140 individuals has obtained a new L1 insertion from their parents. When all retrotransposon insertions, including L1 and others, are considered, about 1 in 40 individuals has received a new insertion from their parents.

The current study counted insertions in the heritable germ cell line, that is, in egg and sperm cells. "The real elephant in the room is the question of the incidence of somatic insertions -- insertions in cells that aren't eggs or sperm," says Kazazian. "We don't yet know the incidence of those somatic insertions."

Because the insertions detected in this study and others like it are present in some individuals and not others, there is the possibility of association with genetic disease. Future studies in the Kazazian lab funded by an ARRA stimulus grant through the National Institutes of Health will develop techniques to uncover such associations using these retrotransposon insertions as genetic markers.


Adam Ewing, a PhD candidate in the Kazazian lab, is the paper's other co-author.

The work was funded by the National Institute of General Medical Sciences.
Story Source:
The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Pennsylvania School of Medicine.

Journal Reference:
  1. A. D. Ewing, H. H. Kazazian. High-throughput sequencing reveals extensive variation in human-specific L1 content in individual human genomes. Genome Research, 2010; DOI: 10.1101/gr.106419.110

Unique Eclipsing Binary Star System Discovered

Astrophysicists at UC Santa Barbara are the first scientists to identify two white dwarf stars in an eclipsing binary system, allowing for the first direct radius measurement of a rare white dwarf composed of pure helium. The results will be published in the Astrophysical Journal Letters. These observations are the first to confirm a theory about a certain type of white dwarf star.

The story began with observations by Justin Steinfadt, a UCSB physics graduate student who has been monitoring white dwarf stars as part of his Ph.D. thesis with Lars Bildsten, a professor and permanent member of UCSB's Kavli Institute for Theoretical Physics, and Steve Howell, an astronomer at the National Optical Astronomy Observatory (NOAO) in Tucson, Ariz.

Brief eclipses were discovered during observations of the star NLTT 11748 with the Faulkes Telescope North of the Las Cumbres Observatory Global Telescope (LCOGT), a UCSB-affiliated institution. NLTT 11748 is one of the few very low-mass, helium-core white dwarfs that are under careful study for their brightness variations. Rapid snapshots of the star -- about one exposure every minute -- found a few consecutive images where the star was slightly fainter. Steinfadt quickly realized the importance of this unexpected discovery. "We've been looking at a lot of stars, but I still think we got lucky!" he said.

Avi Shporer, a postdoctoral fellow at UCSB and LCOGT, assisted with the observations and quickly brought his expertise to the new discovery. "We knew something was unusual, especially as we confirmed these dips the next night," Shporer said. The scientists observed three-minute eclipses of the binary stars twice during the 5.6-hour orbit.

The excitement of the discovery and the need to confirm it rapidly led to the use of the 10-meter Keck Telescope, located on Mauna Kea in Hawaii, just five weeks after the first observation. The team also brought in David Kaplan, a Hubble Fellow and KITP postdoctoral fellow. Bildsten and Kaplan arranged for use of the Keck by swapping time they had reserved for another project with Geoff Marcy at UC Berkeley.

During that night, the scientists were able to measure the changing Doppler shift of the star NLTT 11748 as it orbited its faint, but more massive, white dwarf companion. "It was amazing to witness the velocity of this star change in just a few minutes," said Kaplan, who was present at the Keck telescope during the observations.

These observations led to the confirmation of an important theory about white dwarf stars. Stars end their lives in many ways. "The formation of such a binary system containing an extremely low mass helium white dwarf has to be the result of interactions and mass loss between the two original stars," said Howell. White dwarf stars are the very dense remnants of stars like the sun, with dimensions comparable to those of Earth. A star becomes a white dwarf when it has exhausted its nuclear fuel and all that remains is the dense inner core, typically made of carbon and oxygen.

One of the stars in the newly discovered binary is a relatively rare helium-core white dwarf with a mass only 10 to 20 percent of that of the sun. The existence of these special stars has been known for more than 20 years. Theoretical work predicted that these stars burn hotter and are larger than ordinary white dwarfs. Until now, their size had never been measured. The observations of the star NLTT 11748 by this research group have yielded the first direct radius measurement of an unusual white dwarf that confirms this theory.
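The basic geometry behind such a measurement can be sketched with a short back-of-envelope calculation. The numbers below use the eclipse duration and orbital period quoted in this article, together with the approximate masses it gives for the two stars, and assume an edge-on circular orbit and a central eclipse; it is an order-of-magnitude illustration, not the team's actual analysis.

# Rough illustration of how an eclipse duration plus orbital parameters
# constrain stellar radii. Assumes an edge-on, circular orbit and a central
# eclipse; masses are the approximate values quoted in the article.
import math

G     = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30             # kg
P     = 5.6 * 3600           # orbital period: 5.6 hours, in seconds
t_ecl = 3 * 60               # eclipse duration: ~3 minutes, in seconds
M_tot = (0.15 + 0.7) * M_sun # ~0.15 + ~0.7 solar masses (from the article)

# Kepler's third law gives the orbital separation a, then the relative speed.
a     = (G * M_tot * P**2 / (4 * math.pi**2)) ** (1 / 3)
v_rel = 2 * math.pi * a / P

# For a central eclipse, the occulted chord is roughly 2 * (R1 + R2).
sum_of_radii_km = v_rel * t_ecl / 2 / 1e3
print(f"separation ~ {a/1e3:.0f} km, relative speed ~ {v_rel/1e3:.0f} km/s")
print(f"R1 + R2 ~ {sum_of_radii_km:.0f} km (a few Earth radii)")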

The other star in the binary is also a white dwarf, albeit a more ordinary one, composed of mostly carbon and oxygen with about 70 percent of the mass of the sun. This star is more massive and also much smaller than the other white dwarf. The light it gives off is 30 times fainter than that of its partner star in the binary.

Bildsten credits the scientific collaborations at UCSB for the success of this work, noting that the original team was expanded to include KITP, the Physics Department, and LCOGT to quickly respond to the new discovery.

"A particularly intriguing possibility to ponder is what will happen in 6 to 10 billion years," said Bildsten. "This binary is emitting gravitational waves at a rate that will force the two white dwarfs to make contact. What happens then is anybody's guess."

The National Science Foundation, LCOGT, and NASA supported this work.

Story Source:
The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Santa Barbara.

Journal Reference:
Justin D. R. Steinfadt, David L. Kaplan, Avi Shporer, Lars Bildsten, Steve B. Howell. Discovery of the Eclipsing Detached Double White Dwarf Binary NLTT 11748. The Astrophysical Journal, 2010; 716 (2): L146 DOI: 10.1088/2041-8205/716/2/L146

Copper Nanowires Enable Bendable Displays and Solar Cells


Tiny copper wires can be built in bulk and then "printed" on a surface to conduct current, transparently. (Credit: Benjamin Wiley, Duke Chemistry)

-------------------------------------------------------------------------------------------------------------------

Copper Nanowires Enable Bendable Displays and Solar Cells; Pin-Like Copper Structures Self-Assemble in Solution

A team of Duke University chemists has perfected a simple way to make tiny copper nanowires in quantity. The cheap conductors are small enough to be transparent, making them ideal for thin-film solar cells, flat-screen TVs and computers, and flexible displays.

"Imagine a foldable iPad," said Benjamin Wiley, an assistant professor of chemistry at Duke. His team reports its findings online in Advanced Materials.

Nanowires made of copper perform better than carbon nanotubes, and are much cheaper than silver nanowires, Wiley said.

The latest flat-panel TVs and computer screens produce images with an array of electronic pixels connected by a transparent conductive layer made from indium tin oxide (ITO). ITO is also used as a transparent electrode in thin-film solar cells.

But ITO has drawbacks: it is brittle, making it unsuitable for flexible screens; its production process is inefficient; and it is expensive and becoming more so because of increasing demand.

"If we are going to have these ubiquitous electronics and solar cells," Wiley said, "we need to use materials that are abundant in the earth's crust and don't take much energy to extract." He points out that there are very few materials that are known to be both transparent and conductive, which is why ITO is still being used despite its drawbacks.

However, Wiley's new work shows that copper, which is a thousand times more abundant than indium, can be used to make a film of nanowires that is both transparent and conductive.

Silver nanowires also perform well as a transparent conductor, and Wiley contributed to a patent on their production as a graduate student. But silver, like indium, is rare and expensive. Other researchers have been trying to improve the performance of carbon nanotubes as a transparent conductor, but without much luck.

"The fact that copper nanowires are cheaper and work better makes them a very promising material to solve this problem," Wiley said.

Wiley and his students, PhD candidate Aaron Rathmell and undergraduate Stephen Bergin, grew the copper nanowires in a water-based solution. "By adding different chemicals to the solution, you can control the assembly of atoms into different nanostructures," Wiley said. In this case, when the copper crystallizes, it first forms tiny "seeds," and then a single nanowire sprouts from each seed. It's a mechanism of crystal growth that has never been observed before.

Because the process is water-based, and because copper nanowires are flexible, Wiley thinks the nanowires could be coated from solution in a roll-to-roll process, like newspaper printing, which would be much more efficient than the ITO production process.

Other researchers have produced copper nanowires before, but on a much smaller scale.

Wiley's lab is also the first to demonstrate that copper nanowires perform well as a transparent conductor. He said the process will need to be scaled up for commercial use, and he's got a couple of other problems to solve as well: preventing the nanowires from clumping, which reduces transparency, and preventing the copper from oxidizing, which decreases conductivity. Once the clumping problem has been worked out, Wiley believes the conductivity of the copper nanowires will match that of silver nanowires and ITO.

Wiley, who has applied for a patent for his process, expects to see copper nanowires in commercial use in the not-too-distant future. He notes that there is already investment financing available for the development of transparent conductors based on silver nanowires.

"We think that using a material that is a hundred times cheaper will be even more attractive to venture capitalists, electronic companies and solar companies who all need these transparent electrodes," he said.
Story Source:
The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Duke University. The original article was written by Mary-Russell Roberson.

Journal Reference:
Aaron R. Rathmell, Stephen M. Bergin, Yi-Lei Hua, Zhi-Yuan Li, Benjamin J. Wiley. The Growth Mechanism of Copper Nanowires and Their Properties in Flexible, Transparent Conducting Films. Advanced Materials, 2010; DOI: 10.1002/adma.201000775

Hubble Catches Stars on the Move

Hubble Catches Stars on the Move: Surprising Signs of Unrest in Massive Star Cluster

With a mass of more than 10,000 suns packed into a volume just three light-years across, the massive young star cluster in the nebula NGC 3603 is one of the most compact stellar clusters in the Milky Way [1] and an ideal place to test theories of how such clusters form.

A team of astronomers from the Max-Planck Institute for Astronomy in Heidelberg and the University of Cologne led by Wolfgang Brandner (MPIA) wanted to track the movement of the cluster's many stars. Such a study could reveal whether the stars were in the process of drifting apart, or about to settle down.

The cluster, formally known as the NGC 3603 Young Cluster, is about 20,000 light-years from the Sun, which makes these measurements extraordinarily difficult. It is necessary to compare images made years or even decades apart, and the telescope and camera used must deliver very sharp images and remain extremely stable over long periods.

Brandner and his colleagues realised that the Hubble Space Telescope was the best tool for the job. They found good data in the archives for the NGC 3603 cluster from a July 1997 observing run with the Wide Field Planetary Camera 2 (WFPC2), and then made their own follow-up observations in September 2007, using the same camera and the same set of filters as in the original observations. It then took the team two years of very careful analysis to extract reliable estimates for the motions of stars in the images.

Boyke Rochau (MPIA), the paper's lead author, who performed this analysis as part of his PhD work, explains: "Our measurements have a precision of 27 millionths of an arcsecond per year. This tiny angle corresponds to the apparent thickness of a human hair seen from a distance of 800 km."
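That comparison is easy to check with the small-angle formula (size is roughly the angle in radians times the distance). The second half of the sketch below, which converts the same angular precision into a velocity at the cluster's quoted distance, is an illustrative extra rather than a figure taken from the paper.

# Quick check of the quoted precision: 27 millionths of an arcsecond viewed
# over 800 km corresponds to roughly the width of a human hair.
import math

arcsec_to_rad = math.pi / (180 * 3600)   # 1 arcsecond in radians
angle = 27e-6 * arcsec_to_rad            # 27 millionths of an arcsecond
distance = 800e3                         # 800 km in metres

print(f"{angle * distance * 1e6:.0f} micrometres")   # ~105 micrometres, hair-width

# The same angle per year at the cluster's ~20,000 light-year distance sets
# the scale of the velocity precision (illustrative, not from the paper).
ly = 9.461e15                            # metres per light-year
d_cluster = 20_000 * ly
year = 3.156e7                           # seconds per year
print(f"~{angle * d_cluster / year / 1e3:.1f} km/s per 27 microarcsec/yr")  # ~0.8 km/s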

In this laborious way, they were able to measure the precise speeds of more than 800 stars. About 50 were identified as foreground stars that are unrelated to the cluster, but more than 700 cluster stars of different masses and surface temperatures remained. The results for the motion of these cluster stars were surprising: this very massive star cluster has not yet settled down. Instead, the stars' velocities were independent of their mass and thus still reflect conditions from the time the cluster was formed, approximately one million years ago.

Stars are born when a gigantic cloud of gas and dust collapses. In cases such as the star forming region NGC 3603, where the cloud is unusually massive and compact, the process is particularly quick and intense. Most of the cloud's matter ends up concentrated inside hot young stars and the cluster keeps much of its initial gravitational attraction [2]. In the long term such massive compact star clusters may lead to the development of the huge balls of stars known as globular clusters, whose tightly packed stars remain held together by gravity for billions of years.

Wolfgang Brandner adds: "This is the first time we have been able to measure precise stellar motions in such a compact young star cluster." Team member Andrea Stolte from the University of Cologne says: "This is key information for astronomers trying to understand how such clusters are formed, and how they evolve."


Notes


[1] For comparison: in our own immediate stellar neighbourhood, the same volume contains no more than a single star, namely the Sun. The NGC 3603 nebula is located in the central plane of our home galaxy's main disc, in a region called the Carina spiral arm.

[2] More usually the gas cloud is bigger and less massive, and only about 10% of its mass ends up inside stars. The remaining gas is then blown away by the fierce ultraviolet light and stellar winds from the hot young stars. Once the interstellar matter is dispersed, the young star cluster has lost nearly 90% of its initial mass and no longer has enough gravitational attraction to hold together. The stars in such typical clusters gradually drift apart.

Story Source:
The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by ESA/Hubble Information Centre, via EurekAlert!, a service of AAAS.

Journal Reference:
Boyke Rochau, Wolfgang Brandner, Andrea Stolte, Mario Gennaro, Dimitrios Gouliermis, Nicola Da Rio, Natalia Dzyurkevich, Thomas Henning. Internal dynamics and membership of the NGC 3603 young cluster from microarcsecond astrometry. The Astrophysical Journal, 2010; 716 (1): L90 DOI: 10.1088/2041-8205/716/1/L90

Who Are We Sharing the Planet With?

Who Are We Sharing the Planet With? Millions Less Species Than Previously Thought, New Calculations Suggest

New calculations reveal that the number of species on Earth is likely to be in the order of several million rather than tens of millions. The findings, from a University of Melbourne-led study, are based on a new method of estimating tropical insect species -- the largest and one of the most difficult groups on the planet to study -- and have significant implications for conservation efforts.

The study's lead author, Dr Andrew Hamilton from The University of Melbourne's School of Land and Environment, said he was driven to calculate species numbers more accurately because humans are more certain of the number of stars in our galaxy than of the number of fellow species on their own planet.

"Our understanding of species numbers has been clouded by one group of organisms, tropical arthropods, which include insects, spiders, mites and similar organisms. Estimates for this group have ranged from a few million up to 100 million," says Dr Hamilton.

Dr Hamilton and a team of international researchers applied probability modelling techniques -- models often used in financial risk estimation -- to data from numerous previous studies. They found a 90% chance that the number of tropical arthropod species lies somewhere between 2 and 7 million, with a best estimate of 3.7 million.

Adding approximately 50,000 vertebrates (birds, mammals, amphibians and reptiles), 400,000 plants and possibly 1.3 million other organisms (mostly microorganisms, but excluding bacteria, about which very little is known) gives a best estimate of around 5.5 million species with which we share planet Earth. Furthermore, the study found that there is less than a 0.001% chance that the often-quoted figure of at least 30 million total species is correct.

"Our study is significant in this the International Year of Biodiversity, giving us a more realistic starting point for estimating extinction rates -- a profound hurdle in conservation biology. Extinction rates are typically estimated through knowing the area of habitat that has been lost, but to know how many species have been lost, we need to know how many were present in the first place. Obviously, if we are starting with less species, we may be worse off than we thought, and also be reducing the complexity of ecosystems even faster," says Dr Hamilton.

"The findings also mean that in spite of 250 years of taxonomic research, around 70% of arthropods await description."

"Many scientists have redone the calculations using different values and arrived at wildly different answers. Our work reran the same calculations, which use various inputs, such as the number of beetle species in the canopy of a typical rainforest tree, but accounted for uncertainty relating to these inputs and, therefore, uncertainty in the final estimation how many species there are."

The study will be published in the current edition of the international journal The American Naturalist.


Story Source:
The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Melbourne.

Journal Reference:
Andrew J. Hamilton, Yves Basset, Kurt K. Benke, Peter S. Grimbacher, Scott E. Miller, Vojtech Novotný, G. Allan Samuelson, Nigel E. Stork, George D. Weiblen, Jian D. L. Yen. Quantifying Uncertainty in Estimation of Tropical Arthropod Species Richness. The American Naturalist, 2010: 100510130432020 DOI: 10.1086/652998