Wednesday, December 16, 2009

Coconut-Carrying Octopus: Tool Use in an Invertebrate


Researchers have found that the veined octopus manages a behavioral trick called stilt walking, in which it can carry a coconut shell under its body while making its eight arms into stilts. (Credit: Roger Steene)

-----------------------------------------------------------------------------------------------------------------------------------------

Scientists once thought of tool use as a defining feature of humans. That was until examples of tool use came in from other primates, along with birds and an array of other mammals. Now, a report in the December 14th issue of Current Biology, a Cell Press publication, adds an octopus to the growing list of tool users.

The veined octopus under study manages a behavioral trick that the researchers call stilt walking. In it, the soft-bodied octopus spreads itself over stacked, upright coconut shell "bowls," makes its eight arms rigid, and raises the whole assembly to amble on eight "stilts" across the seafloor. The only benefit to the octopus's ungainly maneuver is to use the shells later as a shelter or lair, and that's what makes it wholly different from a hermit crab using the discarded shell of a snail.

"There is a fundamental difference between picking up a nearby object and putting it over your head as protection versus collecting, arranging, transporting (awkwardly), and assembling portable armor as required," said Mark Norman of the Museum Victoria in Australia.

Julian Finn, also of the Museum Victoria, said the initial discovery was completely serendipitous.

"While I have observed and videoed octopuses hiding in shells many times, I never expected to find an octopus that stacks multiple coconut shells and jogs across the seafloor carrying them," he said.

In recalling the first time that he saw this behavior, Finn added, "I could tell that the octopus, busy manipulating coconut shells, was up to something, but I never expected it would pick up the stacked shells and run away. It was an extremely comical sight -- I have never laughed so hard underwater."

After 500 diver hours spent "under the sea," the researchers observed the behavior of 20 veined octopuses. On four occasions, individuals traveled over considerable distances -- up to 20 meters -- while carrying stacked coconut shell halves beneath their body.

"Ultimately, the collection and use of objects by animals is likely to form a continuum stretching from insects to primates, with the definition of tools providing a perpetual opportunity for debate," the researchers concluded. "However, the discovery of this octopus tiptoeing across the sea floor with its prized coconut shells suggests that even marine invertebrates engage in behaviors that we once thought the preserve of humans."

The researchers include Julian K. Finn, Museum Victoria, Melbourne, Australia, and Zoology, La Trobe University, Bundoora, Australia; Tom Tregenza, University of Exeter, Cornwall Campus, Penryn, UK; and Mark D. Norman, Museum Victoria, Melbourne, Australia.
Adapted from materials provided by Cell Press, via EurekAlert!, a service of AAAS.

Journal Reference:
Julian K. Finn, Tom Tregenza, Mark D. Norman. Defensive tool use in a coconut-carrying octopus. Current Biology, 2009; DOI: 10.1016/j.cub.2009.10.052

Theorists Propose a New Way to Shine -- And a New Kind of Star: 'Electroweak'

Dying, for stars, has just gotten more complicated.

For some stellar objects, the final phase before or instead of collapsing into a black hole may be what a group of physicists is calling an electroweak star.

Glenn Starkman, a professor of physics at Case Western Reserve University, together with former graduate students and post-docs De-Chang Dai and Dejan Stojkovic, now at the State University of New York in Buffalo, and Arthur Lue, at MIT's Lincoln Lab, offer a description of the structure of an electroweak star in a paper submitted to Physical Review Letters.

Ordinary stars are powered by the fusion of light nuclei into heavier ones -- such as hydrogen into helium in the center of our sun. Electroweak stars, they theorize, would be powered by the total conversion of quarks -- the particles that make up the proton and neutron building blocks of those nuclei -- into much lighter particles called leptons. These leptons include electrons, but especially elusive -- and nearly massless -- neutrinos.

"This is a process predicted by the well-tested Standard Model of particle physics," Starkman said. At ordinary temperatures it is so incredibly rare that it probably hasn't happened within the visible universe anytime in the last 10 billion years, except perhaps in the core of these electroweak stars and in the laboratories of some advanced alien civilizations, he said.

In their dying days, stars less than about 2.1 times our sun's mass collapse into neutron stars -- objects dense enough that the neutrons and protons push against each other. More massive stars are thought to head toward collapse into a black hole. But at the extreme temperatures and densities that might be reached when a star begins to collapse into a black hole, electroweak conversion of quarks into leptons should proceed at a rapid rate, the scientists say.

The energy generated could halt the collapse, much as the energy generated by nuclear fusion prevents ordinary stars like the Sun from collapsing. In other words, an electroweak star is the possible next step before total collapse into a black hole. If the electroweak burning is efficient, it could consume enough mass to prevent what's left from ever becoming a black hole.

Most of the energy eventually emitted from electroweak stars is in the form of neutrinos, which are hard to detect. A small fraction comes out as light, and this is where the electroweak star's signature will likely be found, Starkman said. But, "To understand that small fraction, we have to understand the star better than we do."

And until they do, it's hard to know how we can tell electroweak stars from other stars.

There's time, however, to learn. The theorists have calculated that this phase of a star's life can last more than 10 million years -- a long time for us, though just an instant in the life of a star.

Adapted from materials provided by Case Western Reserve University, via EurekAlert!, a service of AAAS.

Journal Reference:
De-Chang Dai, Arthur Lue, Glenn Starkman, Dejan Stojkovic. Electroweak stars: how nature may capitalize on the standard model's ultimate fuel. Physical Review Letters, 2009 (submitted)

Secrets of Mysterious 'Night-Shining' Clouds Unlocked


This image of polar mesospheric clouds (PMCs) was captured by the Aeronomy of Ice in the Mesosphere Cloud Imaging and Particle Size (AIM-CIPS) instrument on July 14, 2009, over the northern polar region. The North Pole (90N) is in the center. Latitude bands of 80N, 70N, and 60N are also indicated by the light blue circles. (Credit: NASA)
-----------------------------------------------------------------------------------------------------------------------------------------
Secrets of Mysterious 'Night-Shining' Clouds Unlocked by NASA's AIM Satellite and Models

NASA's Aeronomy of Ice in the Mesosphere (AIM) satellite has captured five complete polar seasons of noctilucent (NLC) or "night-shining" clouds with an unprecedented horizontal resolution of 3 miles by 3 miles. Results show that the cloud season turns on and off like a "geophysical light bulb," and they reveal evidence that high-altitude mesospheric "weather" may follow patterns similar to our ever-changing weather near the Earth's surface.

These findings were unveiled today at the Fall Meeting of the American Geophysical Union in San Francisco.

The AIM measurements have provided the first comprehensive, global-scale view of the complex life cycle of these clouds, also called Polar Mesospheric Clouds (PMCs), over three entire Northern Hemisphere and two Southern Hemisphere seasons, revealing more about their formation, frequency and brightness, and why they appear to be occurring at lower latitudes than ever before.

"The AIM findings have altered our previous understanding of why PMCs form and vary," stated AIM principal investigator Dr. James Russell III of Hampton University in Hampton, Va. "We have captured the brightest clouds ever observed and they display large variations in size and structure signifying a great sensitivity to the environment in which the clouds form. The cloud season abruptly turns on and off going from no clouds to near complete coverage in a matter of days with the reverse pattern occurring at the season end."

These bright "night-shining" clouds, which form 50 miles above Earth's surface, are seen by the spacecraft's instruments, starting in late May and lasting until late August in the north and from late November to late February in the south. The AIM satellite reports daily observations of the clouds at all longitudes and over a broad latitude range extending from 60 to 85 degrees in both hemispheres.

The clouds usually form at high latitudes during the summer of each hemisphere. They are made of ice crystals formed when water vapor condenses onto dust particles in the brutal cold of this region, at temperatures around minus 210 to minus 235 degrees Fahrenheit. They are called "night shining" clouds by observers on the ground because their high altitude allows them to continue reflecting sunlight after the sun has set below the horizon. They form a spectacular silvery blue display visible well into the night time.

Sophisticated multidimensional models have also advanced significantly in the last few years and, together with AIM and other space- and ground-based data, have led to important advances in understanding these unusual and provocative clouds. The satellite data have shown that:
Temperature appears to control season onset, variability during the season, and season end. Water vapor is surely important, but the role it plays in NLC variability is only now becoming better understood;
Large-scale planetary waves in the Earth's upper atmosphere cause NLCs to vary globally, while shorter-scale gravity waves cause the clouds to disappear regionally;
There is coupling between the summer and winter hemispheres: when temperature changes in the winter hemisphere, NLCs change correspondingly in the opposite hemisphere.

Computer models that include detailed physics of the clouds and couple the upper atmosphere environment where they occur with the lower regions of the atmosphere are being used to study the reasons the NLCs form and the causes for their variability. These models are able to reproduce many of the features found by AIM. Validation of the results using AIM and other data will help determine the underlying causes of the observed changes in NLCs.

The AIM results were produced by Mr. Larry Gordley and Dr. Mark Hervig and the Solar Occultation for Ice Experiment (SOFIE) team, GATS, Inc., Newport News, Va.; Dr. Cora Randall and the Cloud Imaging and Particle Size (CIPS) experiment team, Laboratory for Atmospheric and Space Physics, University of Colorado, Boulder; and Dr. Scott Bailey, Virginia Tech, Blacksburg, Va. Modeling results were developed by Dr. Daniel Marsh of the National Center for Atmospheric Research in Boulder, Colorado, and Professor Franz-Josef Lübken of the Leibniz-Institute of Atmospheric Physics, Kühlungsborn, Germany.

AIM is a NASA-funded SMall EXplorers (SMEX) mission. NASA's Goddard Space Flight Center manages the program for the agency's Science Mission Directorate at NASA Headquarters in Washington. The mission is led by the Principal Investigator from the Center for Atmospheric Sciences at Hampton University in VA. Instruments were built by the Laboratory for Atmospheric and Space Physics (LASP), University of Colorado, Boulder, and the Space Dynamics Laboratory, Utah State University. LASP also manages the AIM mission and controls the satellite. Orbital Sciences Corporation, Dulles, Va., designed, manufactured, and tested the AIM spacecraft, and provided the Pegasus launch vehicle.

For more information about the AIM mission, visit: http://www.nasa.gov/mission_pages/aim/news/nlc-secrets.html, http://www.nasa.gov/aim, and http://aim.hamptonu.edu

Adapted from materials provided by NASA/Goddard Space Flight Center.

Story of 4.5 Million-Year-Old Whale Found in Huelva


These are vertebra colonized by bivalves. (Credit: Esperante et al.)

----------------------------------------------------------------------------------------------------------------------------------------

In 2006, a team of Spanish and American researchers found the fossil remains of a whale, 4.5 million years old, in Bonares, Huelva. Now they have published, for the first time, the results of the decay and fossilisation process that started with the death of the young cetacean, possibly a baleen whale from the Mysticeti group.

This is not the first discovery of the partial fossil remains of a whale from the Lower Pliocene (five million years ago) in the Huelva Sands sedimentary formation, but it is the first time that the results of the processes of fossilisation and fossil deposition following the death of a whale have been published.

The work of this international group, published in the latest issue of Geologica Acta, is the first taphonomic (fossilisation process) study done on cetacean remains combined with other paleontological disciplines such as ichnology (the study of trace fossils).

"Once the whale was dead, its body was at the mercy of scavengers such as sharks, and we know that one of these voracious attacks resulted in one of its fins being pulled off and moved about ten metres. It remained in this position in the deposit studied," Fernando Muñiz, one of the study's authors and a researcher in the University of Huelva's "Tectonics and Paleontology" research group, currently working as a palaeontologist for the City Council of Lepe, in Huelva, said.

The researchers have described the fossil remains discovered in Bonares, Huelva, at an altitude of 80 metres above sea level and 24 kilometres from the sea, and have studied the main taxonomic characteristics and associated fauna. The team also created a paleoenvironmental model to explain how the skeleton -- which is incomplete apart from some pieces such as its three-metre-long hemimandibular jaw bones -- was deposited.

The results show that these remains came from a "juvenile whale that died and became buried on the sea floor, at a depth of around 30-50 metres, and were subject to intense activity by invertebrate and vertebrate scavengers (as can be seen from the presence of numerous shark teeth associated with the bones)," says Muñiz. Based on the remains studied, it is hard for the researchers to say whether the cause of death was illness, old age, or attack by a larger predator.

In terms of its taxonomic description, the researchers say this is "difficult," although the morphology of the scapula (shoulder blade) suggests it is "from the Balaenopteridae (rorqual) family, belonging to the group of baleen whales from the Mysticeti sub-order," says the paleontologist.

Dead bodies as a source of nutrients

The occasional presence of a cetacean corpse on the sea floor represents an exceptional provision of nutrients for various ecological communities. According to recent studies of these present-day phenomena, four ecological phases associated with whale carcasses have been recognised "that can be partially recognised in the fossil record" -- the presence of mobile scavengers (sharks and bony fish), opportunists (especially polychaetes and crustaceans), sulphophilic extremophiles (microorganisms) and hard coral.

Once the bones, stripped of organic material, lay exposed on the sea floor, bivalve molluscs of the species Neopycnodonte cochlear colonised them. The presence of these bivalves suggests that the post-mortem transformation of the biological remains was "relatively lengthy before it was definitively buried," explains the researcher.

"The fat and other elements resulting from the decomposition of the organic material would have enriched the sediment around and above the body, and this can be seen in the numerous burrowing structures in this sediment, created by endobiotic organisms, such as crustaceans and polychaete annelids," adds Muñíz. The bones were also "used," not only as a base to which these could attach themselves, but also as food.

According to the paleontologists, the presence of bioerosion structures indicates that the contents of the bones were used as an extraordinary source of nutrients, possibly by decapod crustaceans. This would be the first known evidence in the fossil record of a whale bone being consumed by decapod crustaceans with osteophagic feeding habits. The material is currently undergoing in-depth analysis by the authors of the study.

Adapted from materials provided by FECYT - Spanish Foundation for Science and Technology, via EurekAlert!, a service of AAAS.

Journal Reference:
Esperante, R., Muñiz Guinea, F., and Nick, K.E. Taphonomy of a Mysticeti whale in the Lower Pliocene Huelva Sands Formation (Southern Spain). Geologica Acta, 7(4): 489-504, December 2009

Close-Up Photos of Dying Star Show Our Sun's Fate


Chi Cygni, shown in this artist's conception, is a red giant star nearing the end of its life. As it runs out of fuel, it pulses in and out, beating like a giant heart and ejecting shells of material. Observations by the Infrared Optical Telescope Array found that, at minimum radius, Chi Cygni shows marked inhomogeneities due to roiling "hotspots" on its surface. (Credit: ESO/L. Calçada)

-----------------------------------------------------------------------------------------------------------------------------------------

About 550 light-years from Earth, a star like our Sun is writhing in its death throes. Chi Cygni has swollen in size to become a red giant star so large that it would swallow every planet out to Mars in our solar system. Moreover, it has begun to pulse dramatically in and out, beating like a giant heart. New close-up photos of the surface of this distant star show its throbbing motions in unprecedented detail.

"This work opens a window onto the fate of our Sun five billion years from now, when it will near the end of its life," said lead author Sylvestre Lacour of the Observatoire de Paris.

As a sunlike star ages, it begins to run out of hydrogen fuel at its core. Like a car running out of gas, its "engine" begins to splutter. On Chi Cygni, we see those splutterings as a brightening and dimming, caused by the star's contraction and expansion. Stars at this life stage are known as Mira variables after the first such example, Mira "the Wonderful," discovered by David Fabricius in 1596. As it pulses, the star is puffing off its outer layers, which in a few hundred thousand years will create a beautifully gleaming planetary nebula.

Chi Cygni pulses once every 408 days. At its smallest diameter of 300 million miles, it becomes mottled with brilliant spots as massive plumes of hot plasma roil its surface. (Those spots are like the granules on our Sun's surface, but much larger.) As it expands, Chi Cygni cools and dims, growing to a diameter of 480 million miles -- large enough to engulf and cook our solar system's asteroid belt.

For the first time, astronomers have photographed these dramatic changes in detail. They reported their work in the December 10 issue of The Astrophysical Journal.

"We have essentially created an animation of a pulsating star using real images," stated Lacour. "Our observations show that the pulsation is not only radial, but comes with inhomogeneities, like the giant hotspot that appeared at minimum radius."

Imaging variable stars is extremely difficult, for two main reasons. The first reason is that such stars hide within a compact and dense shell of dust and molecules. To study the stellar surface within the shell, astronomers observe the stars at a specific wavelength of infrared light. Infrared allows astronomers to see through the shell of molecules and dust, much as X-rays enable physicians to see bones within the human body.

The second reason is that these stars are very far away, and thus appear very small. Even though they are huge compared to the Sun, the distance makes them appear no larger than a small house on the moon as seen from Earth. Traditional telescopes lack the proper resolution. Consequently, the team turned to a technique called interferometry, which involves combining the light coming from several telescopes to yield resolution equivalent to a telescope as large as the distance between them.
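
As a rough illustration of the scales involved, the sketch below uses the star's minimum diameter and distance quoted in this article, together with an assumed near-infrared observing wavelength and an assumed maximum telescope separation of about 38 metres, to compare Chi Cygni's apparent size with the λ/B resolution such an array can reach. The wavelength and baseline are illustrative assumptions, not figures from the study.

```python
# Illustrative angular-size vs. resolution estimate. The stellar diameter
# (~300 million miles at minimum) and distance (~550 light-years) are from
# the article; the 1.65-micron wavelength and 38 m baseline are assumed
# values for a small infrared interferometer.

MILE = 1609.34                       # metres per mile
LIGHT_YEAR = 9.461e15                # metres per light-year
MAS_PER_RAD = 206265.0 * 1000.0      # milliarcseconds per radian

diameter = 300e6 * MILE              # stellar diameter at minimum, m
distance = 550 * LIGHT_YEAR          # distance to Chi Cygni, m
angular_size = diameter / distance   # small-angle approximation, radians

wavelength = 1.65e-6                 # observing wavelength, m (assumed)
baseline = 38.0                      # telescope separation, m (assumed)
resolution = wavelength / baseline   # array's diffraction limit, radians

print(f"Chi Cygni angular diameter: ~{angular_size * MAS_PER_RAD:.0f} mas")
print(f"Interferometer resolution (lambda/B): ~{resolution * MAS_PER_RAD:.0f} mas")
```

A single mirror would have to be comparably large, tens of metres across, to resolve the star on its own, which is what makes combining widely separated telescopes so attractive.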

They used the Smithsonian Astrophysical Observatory's Infrared Optical Telescope Array, or IOTA, which was located at Whipple Observatory on Mount Hopkins, Arizona.

"IOTA offered unique capabilities," said co-author Marc Lacasse of the Harvard-Smithsonian Center for Astrophysics (CfA). "It allowed us to see details in the images which are about 15 times smaller than can be resolved in images from the Hubble Space Telescope."

The team also acknowledged the usefulness of the many observations contributed annually by amateur astronomers worldwide, which were provided by the American Association of Variable Star Observers (AAVSO).

In the forthcoming decade, the prospect of ultra-sharp imaging enabled by interferometry excites astronomers. Objects that, until now, appeared point-like are progressively revealing their true nature. Stellar surfaces, black hole accretion disks, and planet forming regions surrounding newborn stars all used to be understood primarily through models. Interferometry promises to reveal their true identities and, with them, some surprises.

Adapted from materials provided by Harvard-Smithsonian Center for Astrophysics.

Black Carbon Deposits on Himalayan Ice Threaten Earth's 'Third Pole'


To better understand the role that black soot has on glaciers, researchers trekked high into the Himalayas to collect ice cores that contain a record of soot deposition that spans back to the 1950s.

(Credit: Institute of Tibetan Plateau Research, Chinese Academy of Sciences)

-----------------------------------------------------------------------------------------------------------------------------------------

Black soot deposited on Tibetan glaciers has contributed significantly to the retreat of the world's largest non-polar ice masses, according to new research by scientists from NASA and the Chinese Academy of Sciences. Soot absorbs incoming solar radiation and can speed glacial melting when deposited on snow in sufficient quantities.

Temperatures on the Tibetan Plateau -- sometimes called Earth's "third pole" -- have warmed by 0.3°C (0.5°F) per decade over the past 30 years, about twice the rate of observed global temperature increases. New field research and ongoing quantitative modeling suggest that soot's warming influence on Tibetan glaciers could rival that of greenhouse gases.

"Tibet's glaciers are retreating at an alarming rate," said James Hansen, coauthor of the study and director of NASA's Goddard Institute for Space Studies (GISS) in New York City. "Black soot is probably responsible for as much as half of the glacial melt, and greenhouse gases are responsible for the rest."

"During the last 20 years, the black soot concentration has increased two- to three-fold relative to its concentration in 1975," said Junji Cao, a researcher from the Chinese Academy of Sciences in Beijing and a coauthor of the paper.

The study was published December 7th in the Proceedings of the National Academy of Sciences.

"Fifty percent of the glaciers were retreating from 1950 to 1980 in the Tibetan region; that rose to 95 percent in the early 21st century," said Tandong Yao, director of the Chinese Academy's Institute of Tibetan Plateau Research. Some glaciers are retreating so quickly that they could disappear by mid-century if current trends continue, the researchers suggest.

Since melt water from Tibetan glaciers replenishes many of Asia's major rivers -- including the Indus, Ganges, Yellow, and Brahmaputra -- such losses could have a profound impact on the billion people who rely on the rivers for fresh water. While rain and snow would still help replenish Asian rivers in the absence of glaciers, the change could hamper efforts to manage seasonal water resources by altering when fresh water supplies are available in areas already prone to water shortages.

Researchers led by Baiqing Xu of the Chinese Academy drilled and analyzed five ice cores from various locations across the Tibetan Plateau, looking for black carbon (a key component of soot) as well as organic carbon. The cores support the hypothesis that black soot amounts in the Himalayan glaciers correlate with black carbon emissions in Europe and South Asia.

At Zuoqiupu glacier -- a bellwether site on the southern edge of the plateau and downwind from the Indian subcontinent -- black soot deposition increased by 30 percent between 1990 and 2003. The rise in soot levels at Zuoqiupu follows a dip that came after clean air regulations were enacted in Europe in the 1970s.

Most soot in the region comes from diesel engines, coal-fired power plants, and outdoor cooking stoves. Many industrial processes produce both black carbon and organic carbon, but often in different proportions. Burning diesel fuel produces mainly black carbon, for example, while burning wood produces mainly organic carbon. Since black carbon is darker and absorbs more radiation, it's thought to have a stronger warming effect than organic carbon.

To refine this emerging understanding of soot's impact on glaciers, scientists are striving to gather even more robust measurements. "We can't expect this study to clarify the effect of black soot on the melting of Tibetan snow and glaciers entirely," said Cao. "Additional work that looks at albedo measurements, melting rate, and other types of reconnaissance is also needed."

For example, scientists are using satellite instruments such as the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the NASA satellites Terra and Aqua to enhance understanding of the region's albedo. And a new NASA climate satellite called Glory, which will launch late in 2010, will carry a new type of aerosol sensor that should be able to distinguish between aerosol types more accurately than previous instruments.

"Reduced black soot emissions, in addition to reduced greenhouse gases, may be required to avoid demise of Himalayan glaciers and retain the benefits of glaciers for seasonal fresh water supplies," Hansen said.

Adapted from materials provided by NASA/Goddard Space Flight Center, via EurekAlert!, a service of AAAS.

More Evidence of Great Quake Danger to Seattle


Seattle skyline with Mount Rainier in the background.
(Credit: iStockphoto/Natalia Bratslavsky)
-----------------------------------------------------------------------------------------------------------------------------------------
Tremors Between Slip Events: More Evidence of Great Quake Danger to Seattle

For most of a decade, scientists have documented unfelt and slow-moving seismic events, called episodic tremor and slip, showing up in regular cycles under the Olympic Peninsula of Washington state and Vancouver Island in British Columbia. They last three weeks on average and release as much energy as a magnitude 6.5 earthquake.

Now scientists have discovered more small events, lasting one to 70 hours, which occur in somewhat regular patterns during the 15-month intervals between episodic tremor and slip events.

"There appear to be tremor swarms that repeat, both in terms of their duration and in where they are. We haven't seen enough yet to say whether they repeat in regular time intervals," said Kenneth Creager, a University of Washington professor of Earth and space sciences.

"This continues to paint the picture of the possibility that a megathrust earthquake can occur closer to the Puget Sound region than was thought just a few years ago," he said.

The phenomenon, which Creager discussed during a presentation at the annual meeting of the American Geophysical Union, is the latest piece of evidence as scientists puzzle out exactly what is happening deep below the surface near Washington state's populous Interstate 5 corridor. He noted that the work shows that tremor swarms follow a size distribution similar to earthquakes, with larger events occurring much less frequently than small events.

The Cascadia subduction zone, where the Juan de Fuca tectonic plate dips beneath the North American plate, runs just off the Pacific coast from northern California to the northern edge of Vancouver Island in British Columbia. It can be the source of massive megathrust earthquakes on the order of magnitude 9 about every 500 years. The last one occurred in 1700.

The fault along the central Washington coast, where the Juan de Fuca and North American plates are locked together most of the time but break apart from each other during a powerful megathrust earthquake, was believed to lie 80 miles or more from the Seattle area. But research has shown that the locked zone extends deeper and farther east than previously thought, bringing the edge of the rupture zone beneath the Olympic Mountains, perhaps 40 miles closer to the Seattle area. It is this locked area that can rupture to produce a megathrust earthquake that causes widespread heavy damage, comparable to the 2004 Indian Ocean earthquake or the great Alaska quake of 1964.

Episodic tremor and slip events appear to occur at the interface of the plates as they gradually descend beneath the surface, at depths of about 19 to 28 miles. The smaller tremors between slip episodes, what Creager refers to as inter-episodic tremor and slip events, appear to occur at the interface of the plates a little farther east and a few miles deeper.

"There's a whole range of events that take place on or near the plate interface. Each improvement in data collection and processing reveals new discoveries," Creager said.

Episodic tremor and slip events often begin in the area of Olympia, Wash., and move northward to southern Vancouver Island over a three-week period, but scientists have yet to pin down such patterns among the smaller tremors that occur between the slip events.

Because the two tectonic plates are locked together, stress builds at their interface as they collide with each other at a rate of about 4 centimeters (1.6 inches) a year. The slip events and smaller tremors ease some of that stress locally, Creager said, but they don't appear to account for all of it.

"Each one of these slip events puts more stress on the area of the plate boundary where megathrust earthquakes occur, which is shallower and farther to the west, bringing you closer to the next big event," he said. "There's nothing to tell you which one will be the trigger."

Since the slip events and intervening small tremors don't accommodate all of the stress built up on the fault, scientists are getting a better idea of just what the hazard from a megathrust earthquake is in the Seattle area. One benefit from that is the ability to revise building codes so structures will be better able to withstand the immense shaking from a great quake, particularly if the source is substantially closer to the city than it was previously expected to be.
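
To see roughly how much unrelieved slip has accumulated, here is a minimal sketch using only numbers quoted in this article: about 4 centimeters of plate convergence per year and a last megathrust rupture in 1700. Treating the locked patch as storing all of that motion is a deliberate simplification; as the article notes, the slow-slip events relieve some of it locally.

```python
# Rough slip-budget sketch from figures quoted in the article: plates
# converge at ~4 cm/yr and the last megathrust earthquake was in 1700.
# Assuming the locked zone stores all of that motion is a simplification.

convergence_rate_m_per_yr = 0.04        # ~4 cm of convergence per year
years_since_last_rupture = 2009 - 1700  # article notes the last event in 1700

slip_deficit = convergence_rate_m_per_yr * years_since_last_rupture
print(f"Slip stored since 1700: ~{slip_deficit:.0f} m")
# ~12 m that a future megathrust earthquake, slow slip, or other processes
# would eventually have to accommodate.
```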

"We'd like to go back and see how much slip has occurred in these slip events, compared to how much should have occurred," Creager said. "Then we'll know how much of that slip will have to be accommodated in a megathrust earthquake, or through other processes."

Adapted from materials provided by University of Washington.

Icy Moons of Saturn and Jupiter May Have Conditions Needed for Life


This image captured by NASA's Cassini spacecraft shows jets of ice particles, water vapor, and trace organic compounds shooting from the surface of Saturn's moon Enceladus. (Credit: NASA/JPL/Space Science Institute)

-------------------------------------------------------------------------------------------------------------------------------------------

Scientists once thought that life could originate only within a solar system's "habitable zone," where a planet would be neither too hot nor too cold for liquid water to exist on its surface. But according to planetary scientist Francis Nimmo, evidence from recent NASA missions suggests that conditions necessary for life may exist on the icy satellites of Saturn and Jupiter.

"If these moons are habitable, it changes the whole idea of the habitable zone," said Nimmo, a professor of Earth and planetary sciences at UC Santa Cruz. "It changes our thinking about how and where we might find life outside of the solar system."

Nimmo discussed the impact of ice dynamics on the habitability of the moons of Saturn and Jupiter on Tuesday, December 15, at the annual meeting of the American Geophysical Union in San Francisco.

Jupiter's moon Europa and Saturn's moon Enceladus, in particular, have attracted attention because of evidence that oceans of liquid water may lie beneath their icy surfaces. This evidence, plus discoveries of deep-sea hydrothermal vent communities on Earth, suggests to some that these frozen moons just might harbor life.

"Liquid water is the one requirement for life that everyone can agree on," Nimmo said.

The icy surfaces may insulate deep oceans, shift and fracture like tectonic plates, and mediate the flow of material and energy between the moons and space.

Several lines of evidence support the presence of subsurface oceans on Europa and Enceladus, Nimmo said. In 2000, for example, NASA's Galileo spacecraft measured an unusual magnetic field around Europa that was attributed to the presence of an ocean beneath the moon's surface. On Enceladus, the Cassini spacecraft discovered geysers shooting ice crystals a hundred miles above the surface, which also suggests at least pockets of subsurface water, Nimmo said (see earlier story).

Liquid water isn't easy to find in the cold expanses beyond Earth's orbit. But according to Nimmo, tidal forces could keep subsurface oceans from freezing up. Europa and Enceladus both have eccentric orbits that bring them alternately close to and then far away from their respective planets. These elongated orbits create ebbs and flows of gravitational energy between the planets and their satellites.

"A moon like Enceladus is getting squeezed and stretched and squeezed and stretched," Nimmo said.

The extent to which this squeezing and stretching transforms into heat remains unclear, he said. Tidal forces likely shift plates in the moons' cores, creating friction and geothermal energy. Tidal flexing may also rub surface ice against itself along deep fissures, creating heat and melting, according to Nimmo. Enceladus's geysers appear to originate from these shifting faults, and the thin lines running along Europa's surface suggest geologically active plates, he said.

A frozen outer layer may be crucial to maintaining oceans that could harbor life on these moons. The icy surfaces may shield the oceans from the frigidity of space and from radiation harmful to living organisms.

"If you want to have life, you want the ocean to last a long time," Nimmo said. "The ice above acts like an insulating blanket."

Enceladus is so small and its ice so thin that scientists expect its oceans to freeze periodically, making habitability less likely, Nimmo said. Europa, however, is the perfect size to heat its oceans efficiently. It is larger than Enceladus but smaller than moons such as Ganymede, which has thick ice surrounding its core and blocking communication with the exterior. If liquid water exists on Ganymede, it may be trapped between layers of ice that separate it from both the core and the surface.

The core and the surface of these moons are both potential sources of the chemical building blocks needed for life. Solar radiation and comet impacts leave a chemical film on the surfaces. To sustain living organisms, these chemicals would have to migrate to the subsurface oceans, and this can occur periodically around ice fissures on moons with relatively thin ice shells like Europa and Enceladus. Organic molecules and minerals may also stream out of their cores, Nimmo said. These nutrients could support communities like those seen around hydrothermal vents on Earth.

Nimmo cautioned that being habitable is no guarantee that a planetary body is actually inhabited. It is unlikely that we will find life elsewhere in our solar system, despite all the time and resources devoted to the search, he said. But such a discovery would certainly be worth the effort.

"I think pretty much everyone can agree that finding life anywhere else in the solar system would be the scientific discovery of the millennium," Nimmo said.

Adapted from materials provided by University of California - Santa Cruz. Original article written by Daniel Strain.

Tuesday, December 15, 2009

INTERNATIONAL EARTH SCIENCE OLYMPIAD

Geological Society of India

INTERNATIONAL EARTH SCIENCE OLYMPIAD – Entrance Test

 

The Geological Society of India will be conducting an objective-type Entrance Test (ET) to select about 20 students to attend a Training Camp in Earth Sciences at Bengalooru during May 2010. Four students will then be chosen from among the Training Camp participants to represent India at the 4th International Earth Science Olympiad (IESO), to be held September 19-28, 2010, in Yogyakarta, Indonesia.

 

TEST DETAILS:

Date and Time: January 24, 2010, at 10.30 A.M.

Duration: ~1½ hours

Language: English

Type of questions: Mainly objective, but some may require 1-2 sentence answers.

Syllabus with weightage: Geosphere (45), Atmosphere (20), Hydrosphere (15) and Astronomy (20)

Syllabus available at: http://www.ieso2009.tw/home/2a_scient.html

Centres: Ahmedabad, Bengalooru, Bhubaneshwar, Chandigarh, Chennai, Dharwad, Goa, Guwahati, Hissar (Punjab), Hyderabad, Kolkata, Lucknow, Mangalore, Mumbai, New Delhi, Patna, Pune, Shillong, Thiruvananthapuram, Varanasi and Vishakhapatnam.

Eligibility: Students of X and XI standards who were born between 15th September 1992 and 15th September 1995, and who have not participated and won a prize in an earlier IESO.

Application format: Name; Date of birth; Sex; School address; Address for correspondence; Email address; Phone no.; Test centre opted (1st preference, 2nd preference); Applicant's signature.

Enclosures: Two passport-size photos and a certificate from the Principal/Head Master stating that the candidate is studying in X/XI standard.

Applications should be sent to Dr R. Shankar, Foreign Secretary, Geological Society of India, C/O Department of Marine Geology, Mangalore University, MANGALAGANGOTRI (DK) 574 199 [Email: rshankar_1@yahoo.com; Phone: 0824-2284465; 0-9916823885]

Last date to receive applications: December 20, 2009.

Certificates and Awards: The 20 students selected for the Training Camp in Earth Sciences will be awarded Certificates of Merit. The four students selected for competing at the IESO will be awarded Certificates of Merit and Cash Prizes.

Black Hole Found to Be Much Closer to Earth Than Previously Thought


An international team of astronomers has accurately measured the distance from Earth to a black hole for the first time.

(Credit: Image courtesy of SRON Netherlands Institute for Space Research)

--------------------------------------------------------------------------------------------------

An international team of astronomers has accurately measured the distance from Earth to a black hole for the first time. Without needing to rely on mathematical models the astronomers came up with a distance of 7800 light years, much closer than had been assumed until now. The researchers achieved this breakthrough by measuring the radio emissions from the black hole and its associated dying star.

The new measurement carries a much lower error margin -- less than 6 percent -- than earlier estimates.

Story Source:
Adapted from materials provided by SRON Netherlands Institute for Space Research, via AlphaGalileo.

Prussian Blue Salt Linked to Origin of Life


This is Prussian blue. This salt could give rise to substances essential for life. (Credit: Nagem R.)

---------------------------------------------------------------------------------------------------

A team of researchers from the Astrobiology Centre (INTA-CSIC) has shown that hydrogen cyanide, urea and other substances considered essential to the formation of the most basic biological molecules can be obtained from the salt Prussian blue. In order to carry out this study, published in the journal Chemistry & Biodiversity, the scientists recreated the chemical conditions of the early Earth.

"We have shown that when Prussian blue is dissolved in ammoniac solutions it produces hydrogen cyanide, a substance that could have played a fundamental role in the creation of the first bio-organic molecules, as well as other precursors to the origin of life, such as urea, dimethylhydantoin and lactic acid," Marta Ruiz Bermejo, lead author of the study and a researcher at the Astrobiology Centre (CSIC-INTA), said.

Urea is considered to be an important reagent in synthesising pyrimidines (the derivatives of which form part of the nucleic acids DNA and RNA), and it has been suggested that hydantoins could be the precursors of peptides and amino acids (the components of proteins), while lactic acid is also of biological interest because, along with malic acid, it can play a role in electron donor-recipient systems.

The researcher and her team have proved that these and other compounds originate from the cyanide liberated by the salt Prussian blue (the name of which refers to the dye used in the uniforms of the Prussian Army) when it is subjected for several days to conditions of pH 12 and relatively high temperatures (70-150°C) in a damp, oxygen-free ammoniac environment, similar to early conditions on Earth.

"In addition, when Prussian blue decomposes in this ammoniac, anoxic environment, this complex salt, called iron (III) hexacyanoferrate (II), also turns out to be an excellent precursor of hematite, the most stable and commonly found form of iron (III) oxide on the surface of the Earth," explains Ruiz Bermejo.

Hematite is related to the so-called Banded Iron Formations (BIF), the biological or geological origin of which is the source of intense debate among scientists. The oldest of these formations, more than two billion years old, have been found in Australia.

The researchers have confirmed in other studies that Prussian blue can be obtained in prebiotic conditions (from iron ions in methane atmosphere conditions with electrical discharges). The synthesis of this salt and its subsequent transformation into hematite offers an alternative model to explain the formation of the banded iron in abiotic conditions in the absence of oxygen.

Ruiz Bermejo concludes that Prussian blue "could act as a carbon concentrator in the prebiotic hydrosphere, and that its wet decomposition in anoxic conditions could liberate hydrogen cyanide and cyanogen, with the subsequent formation of organic molecules and iron oxides."

"We have shown that when Prussian blue is dissolved in ammoniac solutions it produces hydrogen cyanide, a substance that could have played a fundamental role in the creation of the first bio-organic molecules, as well as other precursors to the origin of life, such as urea, dimethylhydantoin and lactic acid," Marta Ruiz Bermejo, lead author of the study and a researcher at the Astrobiology Centre (CSIC-INTA), said.

Urea is considered to be an important reagent in synthesising pyrimidines (the derivatives of which form part of the nucleic acids DNA and RNA), and it has been suggested that hydantoins could be the precursors of peptides and amino acids (the components of proteins), while lactic acid is also of biological interest because, along with malic acid, it can play a role in electron donor-recipient systems.

The researcher and her team have proved that these and other compounds originate from the cyanide liberated by the salt Prussian blue (the name of which refers to the dye used in the uniforms of the Prussian Army) when it is subjected for several days to conditions of pH12 and relatively high temperatures (70-150ºC) in a damp, oxygen-free ammoniac environment, similar to early conditions on Earth. The results of the study have been published recently in the journal Chemistry & Biodiversity.

"In addition, when Prussian blue decomposes in this ammoniac, anoxic environment, this complex salt, called iron (III) hexacyanoferrate (II), also turns out to be an excellent precursor of hematite, the most stable and commonly found form of iron (III) oxide on the surface of the Earth," explains Ruiz Bermejo.

Hematite is related to the so-called Banded Iron Formations (BIF), the biological or geological origin of which is the source of intense debate among scientists. The oldest of these formations, more than two billion years old, have been found in Australia.

The researchers have confirmed in other studies that Prussian blue can be obtained in prebiotic conditions (from iron ions in methane atmosphere conditions with electrical discharges). The synthesis of this salt and its subsequent transformation into hematite offers an alternative model to explain the formation of the banded iron in abiotic conditions in the absence of oxygen.

Ruiz Bermejo concludes that Prussian blue "could act as a carbon concentrator in the prebiotic hydrosphere, and that its wet decomposition in anoxic conditions could liberate hydrogen cyanide and cyanogen, with the subsequent formation of organic molecules and iron oxides."

First Super-Earths Discovered Orbiting Sun-Like Stars


This image from a simulation of atmospheric flow shows temperature patterns on one of the newly discovered planets (61 Vir b), which is hot enough that it glows with its own thermal emission. The simulation tracks global atmospheric flow for one full orbit of the planet around its star. (Credit: J. Langton, Principia College)

---------------------------------------------------------------------------------------------------

An international team of planet hunters has discovered as many as six low-mass planets around two nearby Sun-like stars, including two "super-Earths" with masses 5 and 7.5 times the mass of Earth. The researchers, led by Steven Vogt of the University of California, Santa Cruz, and Paul Butler of the Carnegie Institution of Washington, said the two "super-Earths" are the first ones found around Sun-like stars.

"These detections indicate that low-mass planets are quite common around nearby stars. The discovery of potentially habitable nearby worlds may be just a few years away," said Vogt, a professor of astronomy and astrophysics at UCSC.

The team found the new planet systems by combining data gathered at the W. M. Keck Observatory in Hawaii and the Anglo-Australian Telescope (AAT) in New South Wales, Australia. Two papers describing the new planets have been accepted for publication in the Astrophysical Journal.

Three of the new planets orbit the bright star 61 Virginis, which can be seen with the naked eye under dark skies in the Spring constellation Virgo. Astronomers and astrobiologists have long been fascinated with this particular star, which is only 28 light-years away. Among hundreds of our nearest stellar neighbors, 61 Vir stands out as being the most nearly similar to the Sun in terms of age, mass, and other essential properties. Vogt and his collaborators have found that 61 Vir hosts at least three planets, with masses ranging from about 5 to 25 times the mass of Earth.

Recently, a separate team of astronomers used NASA's Spitzer Space Telescope to discover that 61 Vir also contains a thick ring of dust at a distance roughly twice as far from 61 Vir as Pluto is from our Sun. The dust is apparently created by collisions of comet-like bodies in the cold outer reaches of the system.

"Spitzer's detection of cold dust orbiting 61 Vir indicates that there's a real kinship between the Sun and 61 Vir," said Eugenio Rivera, a postdoctoral researcher at UCSC. Rivera computed an extensive set of numerical simulations to find that a habitable Earth-like world could easily exist in the as-yet unexplored region between the newly discovered planets and the outer dust disk.

According to Vogt, the planetary system around 61 Vir is an excellent candidate for study by the new Automated Planet Finder (APF) Telescope recently constructed at Lick Observatory on Mount Hamilton near San Jose. "Needless to say, we're very excited to continue monitoring this system using APF," said Vogt, who is the principal investigator for the APF and is building a spectrometer for the new telescope that is optimized for finding planets.

The second new system found by the team features a 7.5-Earth-mass planet orbiting HD 1461, another near-perfect twin of the Sun located 76 light-years away. At least one and possibly two additional planets also orbit the star. Lying in the constellation Cetus, HD 1461 can be seen with the naked eye in the early evening under good dark-sky conditions.

The 7.5-Earth-mass planet, assigned the name HD 1461b, has a mass nearly midway between the masses of Earth and Uranus. The researchers said they cannot tell yet if HD 1461b is a scaled-up version of Earth, composed largely of rock and iron, or whether, like Uranus and Neptune, it is composed mostly of water.

According to Butler, the new detections required state-of-the-art instruments and detection techniques. "The inner planet of the 61 Vir system is among the two or three lowest-amplitude planetary signals that have been identified with confidence," he said. "We've found there is a tremendous advantage to be gained from combining data from the AAT and Keck telescopes, two world-class observatories, and it's clear that we'll have an excellent shot at identifying potentially habitable planets around the very nearest stars within just a few years."

The 61 Vir and HD 1461 detections add to a slew of recent discoveries that have upended conventional thinking regarding planet detection. In the past year, it has become evident that planets orbiting the Sun's nearest neighbors are extremely common. According to Butler, current indications are that fully one-half of nearby stars have a detectable planet with mass equal to or less than Neptune's.

The Lick-Carnegie Exoplanet Survey Team led by Vogt and Butler uses radial velocity measurements from ground-based telescopes to detect the "wobble" induced in a star by the gravitational tug of an orbiting planet. The radial-velocity observations were complemented with precise brightness measurements acquired with robotic telescopes in Arizona by Gregory Henry of Tennessee State University.

"We don't see any brightness variability in either star," said Henry. "This assures us that the wobbles really are due to planets and not changing patterns of dark spots on the stars."

Due to improvements in equipment and observing techniques, these ground-based methods are now capable of finding Earth-mass objects around nearby stars, according to team member Gregory Laughlin, professor of astronomy and astrophysics at UCSC.

"It's come down to a neck-and-neck race as to whether the first potentially habitable planets will be detected from the ground or from space," Laughlin said. "A few years ago, I'd have put my money on space-based detection methods, but now it really appears to be a toss-up. What is truly exciting about the current ground-based radial velocity detection method is that it is capable of locating the very closest potentially habitable planets."

The Lick-Carnegie Exoplanet Survey Team has developed a publicly available tool, the Systemic Console, which enables members of the public to search for the signals of extrasolar planets by exploring real data sets in a straightforward and intuitive way. This tool is available online at www.oklo.org.

Journal Reference:
Hugh R. Jones, R. Paul Butler, C. Tinney, Simon O'Toole, Rob Wittenmyer, Gregory W. Henry, Stefano Meschiari, Steve Vogt, Eugenio Rivera, Greg Laughlin, Brad D. Carter, Jeremy Bailey, James S. Jenkins. A long-period planet orbiting a nearby Sun-like star. Astrophysical Journal (in press)

Yellowstone's Plumbing


Seismic imaging was used by University of Utah scientists to construct this 3-D picture of the Yellowstone hotspot plume of hot and molten rock that feeds the shallower magma chamber (not shown) beneath Yellowstone National Park, outlined in green at the surface, or top of the illustration. The Yellowstone caldera, or giant volcanic crater, is outlined in red. State boundaries are shown in black. The park, caldera and state boundaries also are projected to the bottom of the picture to better illustrate the plume's tilt. Researchers believe "blobs" of hot rock float off the top of the plume, then rise to recharge the magma chamber located 3.7 miles to 10 miles beneath the surface at Yellowstone. The illustration also shows a region of warm rock extending southwest from near the top of the plume. It represents the eastern Snake River Plain, where the Yellowstone hotspot triggered numerous cataclysmic caldera eruptions before the plume started feeding Yellowstone 2.05 million years ago. (Credit: University of Utah)
----------------------------------------------------------------------------------------------------
Yellowstone's Plumbing Reveals Plume of Hot and Molten Rock 410 Miles Deep

The most detailed seismic images yet published of the plumbing that feeds the Yellowstone supervolcano show a plume of hot and molten rock rising at an angle from the northwest at a depth of at least 410 miles, contradicting claims that there is no deep plume, only shallow hot rock moving like slowly boiling soup.

A related University of Utah study used gravity measurements to indicate the banana-shaped magma chamber of hot and molten rock a few miles beneath Yellowstone is 20 percent larger than previously believed, so a future cataclysmic eruption could be even larger than thought.

The study of Yellowstone's plume also suggests the same "hotspot" that feeds Yellowstone volcanism also triggered the Columbia River "flood basalts" that buried parts of Oregon, Washington state and Idaho with lava starting 17 million years ago.

Those are key findings in four National Science Foundation-funded studies in the latest issue of the Journal of Volcanology and Geothermal Research. The studies were led by Robert B. Smith, research professor and professor emeritus of geophysics at the University of Utah and coordinating scientist for the Yellowstone Volcano Observatory.

"We have a clear image, using seismic waves from earthquakes, showing a mantle plume that extends from beneath Yellowstone,'' Smith says.

The plume angles downward 150 miles to the west-northwest of Yellowstone and reaches a depth of at least 410 miles, Smith says. The study estimates the plume is mostly hot rock, with 1 percent to 2 percent molten rock in sponge-like voids within the hot rock.

Some researchers have doubted the existence of a mantle plume feeding Yellowstone, arguing instead that the area's volcanic and hydrothermal features are fed by convection -- the boiling-like rising of hot rock and sinking of cooler rock -- from relatively shallow depths of only 185 miles to 250 miles.


The Hotspot: A Deep Plume, Blobs and Shallow Magma


Some 17 million years ago, the Yellowstone hotspot was located beneath the Oregon-Idaho-Nevada border region, feeding a plume of hot and molten rock that produced "caldera" eruptions -- the biggest kind of volcanic eruption on Earth.

As North America slid southwest over the hotspot, the plume generated more than 140 huge eruptions that produced a chain of giant craters -- calderas -- extending from the Oregon-Idaho-Nevada border northeast to the current site of Yellowstone National Park, where huge caldera eruptions happened 2.05 million, 1.3 million and 642,000 years ago.

These eruptions were 2,500, 280 and 1,000 times bigger, respectively, than the 1980 eruption of Mount St. Helens. The eruptions covered as much as half the continental United States with inches to feet of volcanic ash. The Yellowstone caldera, 40 miles by 25 miles, is the remnant of that last giant eruption.
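
For scale, the sketch below converts those ratios into rough erupted volumes, using roughly one cubic kilometre of material for the 1980 Mount St. Helens eruption as the baseline; that baseline volume is a commonly cited figure assumed here, not a number from the article.

```python
# Rough volume comparison. The multipliers (2,500x, 280x, 1,000x) are from
# the article; the ~1 km^3 erupted by Mount St. Helens in 1980 is a commonly
# cited figure assumed here as the baseline.

st_helens_km3 = 1.0
yellowstone_eruptions = {
    "2.05 million years ago": 2500,
    "1.3 million years ago": 280,
    "642,000 years ago": 1000,
}

for age, multiplier in yellowstone_eruptions.items():
    print(f"{age}: ~{multiplier * st_helens_km3:,.0f} km^3 of material")
```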

The new study reinforces the view that the hot and partly molten rock feeding volcanic and geothermal activity at Yellowstone does not rise through a single vertical conduit, but has three components:

- The 45-mile-wide plume that rises through Earth's upper mantle from at least 410 miles beneath the surface. The plume angles upward to the east-southeast until it reaches the colder rock of the North American crustal plate, then flattens out like a 300-mile-wide pancake about 50 miles beneath Yellowstone. The plume includes several wider "blobs" at depths of 355 miles, 310 miles and 265 miles. "This conduit is not one tube of constant thickness," says Smith. "It varies in width at various depths, and we call those things blobs."

- A little-understood zone, between 50 miles and 10 miles deep, in which blobs of hot and partly molten rock break off the flattened top of the plume and slowly rise to feed the magma reservoir directly beneath Yellowstone National Park.

- A magma reservoir 3.7 miles to 10 miles beneath the Yellowstone caldera. The reservoir is mostly sponge-like hot rock with spaces filled with molten rock.

"It looks like it's up to 8 percent or 15 percent melt," says Smith. "That's a lot."

Researchers previously believed the magma chamber measured roughly 6 to 15 miles from southeast to northwest, and 20 or 25 miles from southwest to northeast, but new measurements indicate the reservoir extends at least another 13 miles outside the caldera's northeast boundary, Smith says.

He says the gravity and other data show the magma body "is an elongated structure that looks like a banana with the ends up. It is a lot larger than we thought -- I would say about 20 percent [by volume]. This would argue there might be a larger magma source available for a future eruption."

Images of the magma reservoir were made based on the strength of Earth's gravity at various points in Yellowstone. Hot and molten rock is less dense than cold rock, so the tug of gravity is measurably lower above magma reservoirs.
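To see why a less-dense magma body leaves a gravity fingerprint, here is a rough back-of-envelope sketch (in Python) of the anomaly over a buried sphere of hot rock. The body size, depth and density deficit are assumed round numbers chosen for illustration, not values from the Utah gravity study, and this is not the inversion method the team used.

    # Rough illustration: gravity anomaly over a buried sphere of less-dense
    # (hot, partly molten) rock. Body size, depth and density contrast are
    # assumed round numbers, not values from the Utah study.
    import math

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    radius_m = 5_000       # assumed radius of the low-density body (5 km)
    depth_m = 8_000        # assumed depth to its centre (8 km)
    delta_rho = -300.0     # assumed density deficit of hot rock, kg/m^3

    # Point directly above the centre: delta_g = G * delta_mass / depth^2
    delta_mass = (4.0 / 3.0) * math.pi * radius_m**3 * delta_rho
    delta_g = G * delta_mass / depth_m**2                 # in m/s^2
    print(f"gravity anomaly ~ {delta_g * 1e5:.1f} mGal")  # 1 mGal = 1e-5 m/s^2
    # A negative number: gravity is measurably weaker above the magma body.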

The Yellowstone caldera, like other calderas on Earth, huffs upward and puffs downward repeatedly over the ages, usually without erupting. Since 2004, the caldera floor has risen 3 inches per year, suggesting recharge of the magma body beneath it.


How to View a Plume

Seismic imaging uses earthquake waves that travel through the Earth and are recorded by seismometers. Waves travel more slowly through hotter rock and more quickly in cooler rock. Just as X-rays are combined to make CT-scan images of features in the human body, seismic wave data are melded to produce images of Earth's interior.
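As a toy illustration of that CT-scan analogy, the short Python sketch below recovers cell velocities from a handful of invented travel times. The geometry and numbers are made up; real tomography involves thousands of rays and cells.

    # Toy travel-time tomography: recover cell velocities from earthquake
    # travel times, in the same (vastly simplified) spirit as the CT-scan
    # analogy above. Geometry and numbers are invented.
    import numpy as np

    # Path lengths (km) of 4 rays crossing 3 subsurface cells.
    L = np.array([[10.0, 10.0,  0.0],
                  [ 0.0, 10.0, 10.0],
                  [10.0,  0.0, 10.0],
                  [ 7.0,  7.0,  7.0]])

    true_velocity = np.array([8.0, 7.2, 8.0])   # km/s; middle cell is "hot" (slow)
    travel_times = L @ (1.0 / true_velocity)    # t = sum(path_length * slowness)

    # Invert: least-squares estimate of slowness, then convert back to velocity.
    slowness_est, *_ = np.linalg.lstsq(L, travel_times, rcond=None)
    print("recovered velocities (km/s):", np.round(1.0 / slowness_est, 2))
    # The slow (hot) middle cell stands out -- the principle behind imaging the plume.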

The study, the Yellowstone Geodynamics Project, was conducted during 1999-2005. It used an average of 160 temporary and permanent seismic stations -- and as many as 200 -- to detect waves from some 800 earthquakes, with the stations spaced 10 miles to 22 miles apart -- closer than other networks and better able to "see" underground. Some 160 Global Positioning System stations measured crustal movements.

By integrating seismic and GPS data, "it's like a lens that made the upper 125 miles much clearer and allowed us to see deeper, down to 410 miles," Smith says.

The study also shows warm rock -- not as hot as the plume -- stretching from Yellowstone southwest under the Snake River Plain, at depths of 20 miles to 60 miles. The rock is still warm from eruptions before the hotspot reached Yellowstone.


A Plume Blowing in the 2-inch-per-year Mantle Wind


Seismic imaging shows a "slow" zone from the top of the plume, which is 50 miles deep, straight down to about 155 miles, but then as you travel down the plume, it tilts to the northwest as it dives to a depth of 410 miles, says Smith.

That is the base of the global transition zone (250 miles to 410 miles deep), the boundary between the upper and lower mantle -- the layers below Earth's crust.

At that depth, the plume is about 410 miles beneath the town of Wisdom, Mont., which is 150 miles west-northwest of Yellowstone, says Smith.
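Using only the depths and offset quoted above, you can check the geometry of that tilt with a little arithmetic. The rough Python check below assumes the inclined section starts directly beneath Yellowstone at about 155 miles depth, as described earlier.

    # Back-of-envelope check of the plume's tilt, using the depths and offset
    # quoted in the article (all in miles).
    import math

    top_of_tilted_section = 155    # depth where the conduit starts leaning NW
    base_of_plume = 410            # depth beneath Wisdom, Mont.
    horizontal_offset = 150        # Wisdom is ~150 miles WNW of Yellowstone

    vertical_drop = base_of_plume - top_of_tilted_section
    tilt_from_vertical = math.degrees(math.atan2(horizontal_offset, vertical_drop))
    print(f"tilt of the lower plume: ~{tilt_from_vertical:.0f} degrees from vertical")
    # Roughly 30 degrees off vertical -- leaning in the mantle "wind", but still rising steeply.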

He says "it wouldn't surprise me" if the plume extends even deeper, perhaps originating from the core-mantle boundary some 1,800 miles deep.

Why doesn't the plume rise straight upward? "This plume material wants to come up vertically, it wants to buoyantly rise," says Smith. "But it gets caught in the 'wind' of the upper mantle flow, like smoke rising in a breeze." Except in this case, the "breeze" of slowly flowing upper mantle rock is moving horizontally 2 inches per year.

While the crustal plate moves southwest, the warm, underlying mantle slowly boils due to convection, with warm areas moving upward and cooler areas downward. Northwest of Yellowstone, this flow "blows" the plume east-southeast, so it angles upward toward Yellowstone.

Scientists have debated for years whether Yellowstone's volcanism is fed by a plume rising from deep in the Earth or by shallow churning in the upper mantle caused by movements of the overlying crust. Smith says the new study has produced the most detailed image of the Yellowstone plume yet published.

But a preliminary study by other researchers suggests Yellowstone's plume goes deeper than 410 miles, ballooning below that depth into a wider zone of hot rock that extends at least 620 miles deep.

The notion that a deep plume feeds Yellowstone got more support from a study published this month indicating that the Hawaiian hotspot -- which created the Hawaiian Islands -- is fed by a plume that extends downward at least 930 miles, tilting southeast.


A Common Source for Yellowstone and the Columbia River Basalts?

Based on how the Yellowstone plume slants now, Smith and colleagues projected on a map where the plume might have originated at depth when the hotspot was erupting at the Oregon-Idaho-Nevada border area from 17 million to almost 12 million years ago.

They saw overlap between the zone within the Earth where those eruptions originated near the Oregon-Idaho-Nevada border and the zone where the famed Columbia River Basalt eruptions originated when they were most vigorous, 17 million to 14 million years ago.

Their conclusion: the Yellowstone hotspot plume might have fed those gigantic lava eruptions, which covered much of eastern Oregon and eastern Washington state.

"I argue it is the common source," Smith says. "It's neat stuff and it fits together."

Smith conducted the seismic study with six University of Utah present or former geophysicists -- former postdoctoral researchers Michael Jordan, of SINTEF Petroleum Research in Norway, and Stephan Husen, of the Swiss Federal Institute of Technology; postdoc Christine Puskas; Ph.D. student Jamie Farrell; and former Ph.D. students Gregory Waite, now at Michigan Technological University, and Wu-Lung Chang, of National Central University in Taiwan. Other co-authors were Bernhard Steinberger of the Geological Survey of Norway and Richard O'Connell of Harvard University.

Smith conducted the gravity study with former University of Utah graduate student Katrina DeNosaquo and Tony Lowry of Utah State University in Logan.

Story Source:
Adapted from materials provided by University of Utah.

Tuesday, December 8, 2009

Scientists want debate on animals with human genes

 A mouse that can speak? A monkey with Down's Syndrome? Dogs with human hands or feet? British scientists want to know if such experiments are acceptable, or if they go too far in the name of medical research.

To find out, Britain's Academy of Medical Sciences launched a study Tuesday to look at the use of animals containing human material in scientific research.

The work is expected to take at least a year, but its leaders hope it will help establish guidelines for scientists in Britain and around the world on how far the public is prepared to see them go in mixing human genes into animals to discover ways to fight human diseases.

"Do these constructs challenge our idea of what it is to be human?" said Martin Bobrow, a professor of medical genetics at Cambridge University and chair of a 14-member group looking into the issue.

"It is important that we consider these questions now so that appropriate boundaries are recognized and research is able to fulfill its potential."

Using human material in animals is not new. Scientists have already created rhesus macaque monkeys that have a human form of the Huntington's gene so they can investigate how the disease develops, and mice with livers made from human cells are being used to study the effects of new drugs.

But scientists say the technology to put ever greater amounts of human genetic material into animals is spreading quickly around the world -- raising the possibility that some scientists in some places may want to push boundaries.

"There is a whole raft of new scientific techniques that will make it not only easier but also more important to be able to do these cross-species experiments," Bobrow said.

A row erupted in Britain last year over new laws allowing the creation of human-animal embryos for experimentation.

The row drew interventions from religious groups, who said such experiments pervert the course of nature, and scientific leaders, who say they are vital to research cures for diseases. One Catholic cardinal branded such work "Frankenstein science."

Bobrow said he and his colleagues were keen to avoid such frenzied debate again and hoped that by acting now they would be able to inform discussion rather than react to it.

But they said the discussions over human-animal embryos, which involve putting human DNA into cells derived from animals to produce stem cells, were "only half the conversation" and did not look at animals altered with human cells.

"They really didn't deal ... with a much broader range of issues like how far is it reasonable to try to mimic important human traits in animals," Bobrow said. "There are problems there in terms of social acceptance."

Bobrow said there was a "sort of understanding" within the scientific community that "as you get close to 50/50 mix" of human and animal material, the boundaries are near, but he said laws were vague at best.

"Do most of us care if we make a mouse whose blood cells or liver are human? Probably not," he said. "But if it can speak? If it can think? Or if it is conscious in a human way? Then we're in a completely different ballpark."

(Editing by Robin Pomeroy)

Scientists say paper battery could be in the works

Ordinary paper could one day be used as a lightweight battery to power the devices that are now enabling the printed word to be eclipsed by e-mail, e-books and online news.

Scientists at Stanford University in California reported on Monday they have successfully turned paper coated with ink made of silver and carbon nanomaterials into a "paper battery" that holds promise for new types of lightweight, high-performance energy storage.

The same feature that helps ink adhere to paper allows it to hold onto the single-walled carbon nanotubes and silver nanowire films. Earlier research found that silicon nanowires could be used to make batteries 10 times as powerful as the lithium-ion batteries now used to power devices such as laptop computers.

"Taking advantage of the mature paper technology, low cost, light and high-performance energy-storage are realized by using conductive paper as current collectors and electrodes," the scientists said in research published in the Proceedings of the National Academy of Sciences.

This type of battery could be useful in powering electric or hybrid vehicles, would make electronics lighter weight and longer lasting, and might even lead someday to paper electronics, the scientists said. Battery weight and life have been an obstacle to commercial viability of electric-powered cars and trucks.

"Society really needs a low-cost, high-performance energy storage device, such as batteries and simple supercapacitors," Stanford assistant professor of materials science and engineering and paper co-author Yi Cui said.

Cui said in an e-mail that in addition to being useful for portable electronics and wearable electronics, "Our paper supercapacitors can be used for all kinds of applications that require instant high power."

"Since our paper batteries and supercapacitors can be very low cost, they are also good for grid-connected energy storage," he said.

Peidong Yang, professor of chemistry at the University of California-Berkeley, said the technology could be commercialized within a short time.

(Writing by Jackie Frank; Editing by Cynthia Osterman)

Microbes In Mud Flats Clean Up Oil Spill Chemicals


Micro-organisms occurring naturally in coastal mudflats have an essential role to play in cleaning up pollution by breaking down petrochemical residues.

Research by Dr Efe Aganbi and colleagues from the University of Essex, presented at the Society for General Microbiology's meeting at Harrogate March 30, reveals essential differences in the speed of degradation of the chemicals depending on whether or not oxygen is present.

In aerobic conditions (where oxygen is present), benzene, toluene and naphthalene, which all occur in petroleum, were rapidly degraded by microbes. In the absence of oxygen, degradation was slower and only toluene was significantly broken down. This means that in a healthy marine ecosystem where the water is oxygenated, petrochemical contamination can be biodegraded by micro-organisms, but if the oxygen supply is depleted by pollution and other processes leading to the breakdown of organic matter in the soil, the contamination will persist.

While almost all known aromatic hydrocarbons (the petroleum breakdown products) are degraded with oxygen, only a few can be completely broken down in the absence of oxygen. However, in a contaminated environment oxygen is quickly depleted and anaerobic breakdown (without oxygen) becomes an important mechanism for getting rid of contaminants.
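To illustrate how stark that difference in speed can be, the short Python sketch below compares a simple first-order decay with and without oxygen. The rate constants are assumed for illustration and are not measurements from the Essex study.

    # Simple first-order decay, C(t) = C0 * exp(-k t), to illustrate how much
    # slower degradation is without oxygen. Rate constants are assumed for
    # illustration, not measured values from the Essex study.
    import math

    def remaining_fraction(k_per_day, days):
        return math.exp(-k_per_day * days)

    aerobic_k, anaerobic_k = 0.2, 0.02   # per day (assumed)
    for label, k in [("aerobic", aerobic_k), ("anaerobic", anaerobic_k)]:
        print(f"{label:>9}: {remaining_fraction(k, 30) * 100:.0f}% of toluene left after 30 days")
    # With oxygen the contaminant is largely gone; without it, most remains.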

The scientists also investigated the impact of the three chemicals on the make-up of different estuarine microbial communities. Over time the types of micro-organisms changed as the compounds were degraded. In aerobic conditions, benzene and toluene did not appear to affect community structure but naphthalene stimulated the growth of Cycloclasticus spirillensus, a bacterium known to break down oil residues. These bacteria might be used as a natural way of cleaning up pollution.

"Our work shows that microbes are very versatile and can live on most types of chemicals" said Dr Aganbi, "More work is needed to identify bacteria in these mud sediments as little is known about the range of bacteria present. Estuaries are ideal locations for refineries and petrochemical facilities – it is essential that mudflats are preserved to provide a natural clean-up area for pollution."

Story Source:
Adapted from materials provided by Society for General Microbiology, via EurekAlert!, a service of AAAS.

Hot Microbes Cause Groundwater Cleanup Rethink


Schematic showing how biosparging enhances the microbial degradation of contaminants. (Credit: Willem van Aken / Courtesy of CSIRO Australia)

------------------------------------------------------------------------------------------------------------------------------------------

CSIRO researchers have discovered that micro-organisms that help break down contaminants under the soil can actually get too hot for their own good.

While investigating ways of cleaning up groundwater contamination, scientists examined how microbes break down contaminants under the soil’s surface and found that subsurface temperatures associated with microbial degradation can become too hot for the microbes to grow and consume the groundwater contaminants.

This can slow down the clean up of the groundwater and even continue the spread of contamination.

The new findings mean that researchers now have to rethink the way groundwater remediation systems are designed.

CSIRO Water for a Healthy Country Flagship scientist Mr Colin Johnston, who is based in Perth, Western Australia, said the researchers were investigating how temperatures below the soil’s surface could be used as an indicator of the microbial degradation process associated with biosparging.

Biosparging is a technique that injects air into polluted groundwater to enhance the degradation of contaminants.

The contaminants are ‘food’ to the microbes and the oxygen in the air helps the microbes unlock the energy in the food so that they metabolise and grow, consuming more contaminants and stopping the spread of the contamination.

“Observations of diesel fuel contamination showed that, at 3.5 metres below the ground surface, temperatures reached as high as 47 °C,” Mr Johnston said.

“This is close to the 52 °C maximum temperature tolerated by the community of micro-organisms that naturally live in the soil at this depth and within the range where the growth of the community was suppressed.”
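One common way to describe that kind of temperature limit is a cardinal-temperature growth model. The Python sketch below uses Rosso's formulation with the 52 °C maximum quoted above; the minimum and optimum temperatures are assumed values for illustration only.

    # Sketch of temperature-limited microbial growth using Rosso's cardinal
    # temperature model. Tmax = 52 C comes from the article; Tmin and Topt are
    # assumed values for illustration only.
    def growth_factor(t_c, t_min=15.0, t_opt=38.0, t_max=52.0):
        """Relative growth rate (1.0 at the optimum, 0 outside Tmin..Tmax)."""
        if not (t_min < t_c < t_max):
            return 0.0
        num = (t_c - t_max) * (t_c - t_min) ** 2
        den = (t_opt - t_min) * ((t_opt - t_min) * (t_c - t_opt)
                                 - (t_opt - t_max) * (t_opt + t_min - 2.0 * t_c))
        return num / den

    for temp in (38, 47, 50, 52):
        print(f"{temp} C: growth at {growth_factor(temp) * 100:.0f}% of optimum")
    # Growth falls off steeply as 52 C is approached and stops entirely above it.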

The growth of the soil’s micro-organism community can also be helped by adding nutrients.

However, computer modelling confirmed that any attempts to further increase degradation of the contamination through the addition of nutrients had the potential to raise temperatures above the maximum for growth.

“Although increasing the flow of air would reduce temperatures and overcome these limitations, a fine balance needs to be struck, as the injected air can generate hazardous vapours that overwhelm the micro-organisms, leading to unwanted atmospheric emissions at the ground surface,” Mr Johnston said.

“This would be particularly so for highly volatile compounds such as gasoline.

“It appears that prudent manipulation of operating conditions and appropriate timing of nutrient addition may help limit temperature increases.”

Mr Johnston said further research was required to better understand the thermal properties in the subsurface as well as the seasonal effects of rainfall infiltration and surface soil heating.

Story Source:
Adapted from materials provided by CSIRO Australia.

Nanoparticles To Combat Polluted Groundwater

Nanotechnology - Cleaning Up Our Water
Chemical Engineers Call On Nanoparticles To Combat Polluted Groundwater

Chemical engineers created nanoparticles out of gold and palladium to break down pollutants in groundwater. Adding the particles to groundwater converts dangerous contaminants like trichloroethylene into non-toxic compounds.

He's just 37 years old, but he's already making a difference in the world! A young engineer is creating small solutions to big problems.

We've seen it in the movies -- polluted drinking water is a health and environmental concern. In fact, right now, 30 states need to clean up their groundwater. "They've been designated by the EPA as being highly contaminated, and they've got to do something about the contaminated water," Michael Wong, Ph.D., a chemical engineer at Rice University in Houston, told Ivanhoe.

Dr. Wong is one of Smithsonian Magazine's America's Young Innovators … and for good reason. He's trying to come up with a way to use nanoparticles to clean up our water. "Water is not just H2O. Water has all sorts of stuff in it and the stuff we don't want, those are the things that can really hurt you," Dr. Wong explains.

He's using nanoparticles made out of gold and palladium -- a metal related to platinum -- to get rid of chemicals. One of the most common pollutants in United States groundwater is trichloroethylene, or TCE, a solvent used to degrease metals. And it can cause cancer.

"Our idea was, let's go ahead and break it down -- break it down into something that's safer," Dr. Wong says. "Safer chemicals that won't hurt your body and hurt the animals and the fish and what not."

Wong uses nanoparticles -- ten thousand times smaller than a human hair -- and hydrogen to break TCE into something non-toxic. "We are going to pump water through this guy here and the water is being pumped from the bottom up," Dr. Wong explains.

Glass beads will help to hold the nanoparticles in place. "Then clean water comes out," Dr. Wong says. Dr. Wong plans to test it at military sites first -- then move onto industrial sites and dry cleaning businesses. "I'd like to see our reactor do a really good job of getting rid of some of the contaminants," Dr. Wong says. Possibly, making our water and environment cleaner in the future. Dr. Wong says his reactor will be more efficient and cost less than the carbon reactors being used now.
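Treating the bead-packed column as an idealized plug-flow reactor with first-order breakdown gives a feel for the numbers. In the Python sketch below the rate constant, residence time and inlet concentration are all assumed, not measurements from Dr. Wong's lab.

    # Idealized packed-bed (plug-flow) reactor: fraction of TCE remaining after
    # passing over the catalyst, C_out = C_in * exp(-k * tau). The rate constant,
    # residence time and inlet concentration are assumed numbers.
    import math

    k = 1.5             # assumed first-order rate constant, 1/min
    residence_min = 3   # assumed time the water spends in the bead-packed column

    def tce_out(c_in_ppb):
        return c_in_ppb * math.exp(-k * residence_min)

    inlet = 100.0       # ppb TCE entering the column (assumed)
    removed = (1 - math.exp(-k * residence_min)) * 100
    print(f"TCE out: {tce_out(inlet):.1f} ppb  ({removed:.1f}% removed)")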

WHAT IS HAZARDOUS WASTE? In the U.S., hazardous waste is defined as any discarded solid or liquid that is highly corrosive, toxic, reactive enough to release toxic fumes, or easily ignited. It can include solvents, pesticides, and spilled chemicals -- including acids, ammonia, chlorine bleach and other industrial cleaning agents -- as well as most heavy metals. Long-term exposure to hazardous waste can lead to chronic respiratory diseases such as asthma, damaged liver and kidneys, or cancer. Poisoning and chemical burns can result from contact with even small amounts of toxic chemical waste. Even brief exposure can cause headaches, dizziness, and nausea.

WHERE THAT GLASS OF WATER COMES FROM: Drinking water can come from either ground water sources, via wells, or surface water sources, such as rivers, lakes and streams. Most U.S. water systems in small and rural areas use a ground water source, while large metropolitan areas tend to rely on surface water. Causes of contamination can range from agricultural runoff to improper use of household chemicals.

SECONDARY STANDARDS: Even if your tap water meets the EPA's basic requirement for safe drinking water, some people still object to the taste, smell or appearance of their water. These are aesthetic concerns, however, and therefore fall under the EPA's voluntary secondary standards. Some tap water is drinkable, but may be temporarily clouded because of air bubbles, or have a chlorine taste. A bleachy taste can be improved by letting the water stand exposed to the air for a while.

Earth More Sensitive to Carbon Dioxide Than Previously Thought


The temperature response of the Earth (in degrees C) to an increase in atmospheric carbon dioxide from pre-industrial levels (280 parts per million by volume) to higher levels (400 parts per million by volume). (a) shows predicted global temperatures when processes that adjust on relatively short timescales (for example sea-ice, clouds, and water vapour) are included in the model; (b) includes additional long-term processes that adjust on relatively long timescales (vegetation and land-ice). (Credit: Image courtesy of University of Bristol)
----------------------------------------------------------------------------------------------------------------------------------------

In the long term, the Earth's temperature may be 30-50% more sensitive to atmospheric carbon dioxide than has previously been estimated, reports a new study published in Nature Geoscience.

The results show that components of the Earth's climate system that vary over long timescales -- such as land-ice and vegetation -- have an important effect on this temperature sensitivity, but these factors are often neglected in current climate models.

Dan Lunt, from the University of Bristol, and colleagues compared results from a global climate model to temperature reconstructions of the Earth's environment three million years ago when global temperatures and carbon dioxide concentrations were relatively high. The temperature reconstructions were derived using data from three million-year-old sediments on the ocean floor.

Lunt said, "We found that, given the concentrations of carbon dioxide prevailing three million years ago, the model originally predicted a significantly smaller temperature increase than that indicated by the reconstructions. This led us to review what was missing from the model."

The authors demonstrate that the increased temperatures indicated by the reconstructions can be explained if factors that vary over long timescales, such as land-ice and vegetation, are included in the model. This is primarily because changes in vegetation and ice lead to more sunlight being absorbed, which in turn increases warming.

Including these long-term processes in the model resulted in an increased temperature response of the Earth to carbon dioxide, indicating that the Earth's temperature is more sensitive to carbon dioxide than previously recognised. Climate models used by bodies such as the Intergovernmental Panel on Climate Change often do not fully include these long-term processes, thus these models do not entirely represent the sensitivity of the Earth's temperature to carbon dioxide.
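To see what a 30-50% amplification means in degrees, here is a back-of-envelope Python sketch using the standard CO2 forcing approximation dF = 5.35 ln(C/C0) and the 280-to-400 ppm change shown in the figure. The fast-feedback sensitivity of 0.8 °C per W/m² is an assumed, typical value, not a number from the paper.

    # Illustration of the 30-50% amplification described in the study, using the
    # standard CO2 forcing approximation dF = 5.35 * ln(C/C0). The short-term
    # sensitivity parameter (0.8 C per W/m^2) is an assumed, typical value.
    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        return 5.35 * math.log(c_ppm / c0_ppm)      # W/m^2

    forcing = co2_forcing(400.0)                    # 280 -> 400 ppm, as in the figure
    short_term_warming = 0.8 * forcing              # fast feedbacks only (assumed lambda)

    print(f"forcing: {forcing:.2f} W/m^2, fast-feedback warming: {short_term_warming:.1f} C")
    for boost in (1.3, 1.5):
        print(f"  with long-term feedbacks (+{(boost - 1) * 100:.0f}%): {short_term_warming * boost:.1f} C")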

Alan Haywood, a co-author on the study from the University of Leeds, said, "If we want to avoid dangerous climate change, this high sensitivity of the Earth to carbon dioxide should be taken into account when defining targets for the long-term stabilisation of atmospheric greenhouse-gas concentrations."

Lunt added: "This study has shown that studying past climates can provide important insights into how the Earth might change in the future."


This research was funded by the Research Council UK and the British Antarctic Survey.

Story Source:
Adapted from materials provided by University of Bristol.

Journal Reference:
Daniel J. Lunt, Alan M. Haywood, Gavin A. Schmidt, Ulrich Salzmann, Paul J. Valdes and Harry J. Dowsett. Earth system sensitivity inferred from Pliocene modelling and data. Nature Geoscience, 6 December 2009

Brain Waves Can 'Write' on a Computer

Brain Waves Can 'Write' on a Computer in Early Tests, Researchers Show

Neuroscientists at the Mayo Clinic campus in Jacksonville, Fla., have demonstrated how brain waves can be used to type alphanumerical characters on a computer screen. By merely focusing on the "q" in a matrix of letters, for example, that "q" appears on the monitor.

Researchers say these findings, presented at the 2009 annual meeting of the American Epilepsy Society, represent concrete progress toward a mind-machine interface that may, one day, help people with a variety of disorders control devices, such as prosthetic arms and legs. These disorders include Lou Gehrig's disease and spinal cord injuries, among many others.

"Over 2 million people in the United States may benefit from assistive devices controlled by a brain-computer interface," says the study's lead investigator, neurologist Jerry Shih, M.D. "This study constitutes a baby step on the road toward that future, but it represents tangible progress in using brain waves to do certain tasks."

Dr. Shih and other Mayo Clinic researchers worked with Dean Krusienski, Ph.D., from the University of North Florida on this study, which was conducted in two patients with epilepsy. These patients were already being monitored for seizure activity using electrocorticography (ECoG), in which electrodes are placed directly on the surface of the brain to record electrical activity produced by the firing of nerve cells. This kind of procedure requires a craniotomy, a surgical incision into the skull.

Dr. Shih wanted to study a mind-machine interface in these patients because he hypothesized that feedback from electrodes placed directly on the brain would be much more specific than data collected from electroencephalography (EEG), in which electrodes are placed on the scalp. Most studies of mind-machine interaction have occurred with EEG, Dr. Shih says.

"There is a big difference in the quality of information you get from ECoG compared to EEG. The scalp and bony skull diffuses and distorts the signal, rather like how the Earth's atmosphere blurs the light from stars," he says. "That's why progress to date on developing these kind of mind interfaces has been slow."

Because these patients already had ECoG electrodes implanted in their brains to find the area where seizures originated, the researchers could test their fledgling brain-computer interface.

In the study, the two patients sat in front of a monitor that was hooked to a computer running the researchers' software, which was designed to interpret electrical signals coming from the electrodes.

The patients were asked to look at the screen, which contained a 6-by-6 matrix with a single alphanumeric character inside each square. Every time the square with a certain letter flashed, and the patient focused on it, the computer recorded the brain's response to the flashing letter. The patients were then asked to focus on specific letters, and the computer software recorded the information. The computer then calibrated the system with the individual patient's specific brain wave, and when the patient then focused on a letter, the letter appeared on the screen.
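The row-and-column logic behind that kind of matrix speller can be sketched in a few lines of Python. The scores below are invented stand-ins for real ECoG features, and this is not the Mayo/UNF software.

    # Sketch of the row/column selection idea behind a matrix speller: average
    # the brain response recorded after each row and column flash, then pick the
    # intersection of the strongest row and column. Invented scores stand in
    # for real ECoG features.
    import numpy as np

    MATRIX = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                       list("STUVWX"), list("YZ1234"), list("567890")])

    rng = np.random.default_rng(0)
    target_row, target_col = 2, 4          # the patient is focusing on "Q"

    # Simulated evoked-response score for each of the 12 flashes (6 rows + 6 columns),
    # averaged over repeated flash sequences; the attended row and column score higher.
    row_scores = rng.normal(1.0, 0.3, 6); row_scores[target_row] += 2.0
    col_scores = rng.normal(1.0, 0.3, 6); col_scores[target_col] += 2.0

    predicted = MATRIX[int(np.argmax(row_scores)), int(np.argmax(col_scores))]
    print("predicted character:", predicted)   # -> "Q"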

"We were able to consistently predict the desired letters for our patients at or near 100 percent accuracy," Dr. Shih says. "While this is comparable to other researchers' results with EEGs, this approach is more localized and can potentially provide a faster communication rate. Our goal is to find a way to effectively and consistently use a patient's brain waves to perform certain tasks."

Once the technique is perfected, its use will require patients to have a craniotomy, although it isn't yet known how many electrodes would have to be implanted. And software would have to calibrate each person's brain waves to the action that is desired, such as movement of a prosthetic arm, Dr. Shih says. "These patients would have to use a computer to interpret their brain waves, but these devices are getting so small, there is a possibility that they could be implanted at some point," he says.

"We find our progress so far to be very encouraging," he says.

The study, which is funded by the National Science Foundation, is ongoing.

Story Source:
Adapted from materials provided by Mayo Clinic.