Text 1 Rethinking the Science System

Alan I. Leshner is the chief executive officer of the American Association for the Advancement of Science and executive publisher of Science.

As the U.S. budget environment for science and technology (S&T) threatens to get worse, it is essential for the scientific community to go beyond just advocating for special consideration. There is a strong case for maintaining investments in S&T as a foundation for long-term economic growth and social well-being. But when resources are constrained, it is essential that they be used effectively and efficiently to avoid losing scientific momentum and to ensure that society will benefit maximally from S&T's potential. The scientific community cannot afford to simply adapt passively to reduced budgets. The impact of impending cuts can be at least partially mitigated by some fundamental rethinking of the ways in which S&T are both funded and conducted. Although the United States is used as the example here, the same issues will apply in many other parts of the world.

Scientists have found a recipe for cooking the solar system from scratch: take a cold cloud of gas, and set it 15 light-years from an exploding supernova. Stun the cloud with the supernova’s shockwave. Incubate, and watch as the solar system begins to take shape.

New computer simulations support this scenario, which is a plausible recounting of the solar system’s birth, reports a team of scientists in an upcoming issue of the Astrophysical Journal. “With the supernova, you have one triggering event, and you don’t have to invoke a complicated chain of events,” says study author Matthias Gritschneder, an astrophysicist at the University of California, Santa Cruz.

Understanding how the local solar neighborhood grew up is crucial for learning how other planetary systems are born.

Scientists think the sun and surrounding planets were born from a churning disk of gas and dust, but what precisely caused the stuff to condense and form these bodies has been a mystery. Some clues appear in radioactive elements that were injected into and swam around the presolar cloud. Today, they are embedded in objects such as asteroids, and are thought to mark the first solid bodies that emerged after the cloud’s collapse.

One of these elements, aluminum-26, has helped scientists determine that the solar system was born a little more than 4.5 billion years ago. But the aluminum-26 also presents a puzzle: All of it appears to have enriched the cloud within roughly 20,000 years, much faster than most simulations can explain.
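
To make the timescales concrete, here is a minimal back-of-the-envelope sketch, not taken from the study: it assumes the commonly cited half-life of roughly 0.72 million years for aluminum-26, a figure the text itself does not give. Over a 20,000-year enrichment window almost none of the isotope decays, so its abundance in the earliest solids acts as a clock for how quickly the cloud was seeded, while over the solar system's 4.5-billion-year lifetime essentially all of it has decayed away.

```python
# Illustrative decay arithmetic (not from the article); the half-life value
# below is an assumed outside figure for aluminum-26 (~0.72 million years).

AL26_HALF_LIFE_YR = 0.72e6  # assumed half-life of Al-26, in years

def surviving_fraction(elapsed_yr: float, half_life_yr: float = AL26_HALF_LIFE_YR) -> float:
    """Fraction of an initial Al-26 sample still undecayed after elapsed_yr years."""
    return 0.5 ** (elapsed_yr / half_life_yr)

if __name__ == "__main__":
    # Over the ~20,000-year window discussed above, decay is negligible (~98% survives),
    # so the Al-26 abundance mainly reflects how quickly it was delivered to the cloud.
    print(f"fraction left after 20,000 years: {surviving_fraction(2.0e4):.3f}")
    # Over ~4.5 billion years the fraction underflows to zero: none of the original
    # Al-26 survives today, only its decay products embedded in asteroids.
    print(f"fraction left after 4.5 billion years: {surviving_fraction(4.5e9):.1e}")
```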

Gritschneder and his colleagues think the nearby supernova solves the aluminum-26 puzzle. In their version of events, all the aluminum-26 would have been incorporated within 18,000 years of the shockwave’s collision, which quickly collapsed the cloud and infused it with the radioactive element. The team ruled out other potential solutions, such as solar wind from a nearby star or enrichment occurring from within the cold cloud itself, because the key elements would have been delivered too slowly or in the wrong quantities. “You have to come up with something creative to make it happen fast enough,” Gritschneder says. “We are confident that the clump can sit there, get hit by a supernova, and get enriched quickly.”

Alan Boss, a theoretical astrophysicist at the Carnegie Institution for Science in Washington, D.C., has had the same idea. Boss approached the problem differently, by calculating in three dimensions rather than two, but also concluded that shocking the embryonic solar system would simultaneously trigger the cloud’s collapse and quickly inject the required radioactive elements. “The basic results are the same for both of us, which is a relief,” says Boss, who presented his work on November 8 at the Formation of the First Solids in the Solar System workshop in Kauai, Hawaii.

Getting the same results using different methods supports the supernova shockwave theory, says planetary scientist Fred Ciesla of the University of Chicago. He questions whether scientists have interpreted the 20,000-year time span correctly and points to unresolved issues raised by other radioactive elements, such as iron-60. But even so, Ciesla says, he favors the supernova shock theory over other hypotheses.

“Work like this says something about how stars and planets formed, and whether it’s consistent with the data we have,” Ciesla says. “Once we’ve been able to accumulate enough information, we can start to speculate about how frequently this works in other places in the galaxy.”

Text 2 Future wars may be fought by synapses

Instead of the indiscriminate destruction of the atom bomb or napalm, the signature weapon of future wars may be precise, unprecedented control over the human brain. As global conflicts become murkier, technologies based on infiltrating brains may soon enter countries’ arsenals, neuroethicists claim in a paper published online October 31 in Synesis. Such “neuroweapons” have the capacity to profoundly change the way war is fought.

Advances in understanding the brain’s inner workings could lead to a pill that makes prisoners talk, deadly toxins that can shut down brain function in minutes, or supersoldiers who rely on brain chips to quickly lock in on an enemy’s location.

The range of brain-based technologies is broad and includes the traditional psychological tactics used in earlier wars. But the capacity of the emerging technologies is vastly greater, and they may make it possible to coerce enemy minds with exquisite precision.

In the paper, neuroscientists James Giordano of the Potomac Institute for Policy Studies in Arlington, Va., and Rachel Wurzman of Georgetown University Medical Center in Washington, D.C., describe emerging brain technologies and argue that the United States must be proactive in neuroscience-based research that could be used for national intelligence and security.

“A number of these different approaches are heating up in the crucible of possibility, so that’s really increased some of the momentum and the potential of what this stuff can do,” Giordano says.

In the not-too-distant future, technologies called brain-machine interfaces could allow the combination of human brains with sophisticated computer programs. Analysts with a brain chip could quickly sift through huge amounts of intelligence data, and fighter pilots merged with computer search algorithms could rapidly lock onto an enemy target, for instance.

Neuroscience could also find its way into interrogation rooms: As scientists learn more about how the brain generates feelings of trust, drugs could be developed that inspire that emotion in prisoners and detainees. Oxytocin, a hormone produced by mothers’ bodies after childbirth, is one such candidate. Perhaps a whiff of oxytocin could dampen a person’s executive functions, turning an uncooperative detainee into a chatty friend.

Other sorts of psychopharmacological manipulation could be used to boost soldiers’ performance, allowing them to remain vigilant without sleep, heighten their perceptual powers and erase memories of their actions on the battlefield. Because neuroscientists are beginning to understand how the brain forms memories, it’s not inconceivable that a drug could be designed to prevent PTSD. Such technology could enable more sinister applications, though, such as creating soldiers who wouldn’t remember atrocities they committed or detainees who couldn’t recall their own torture.

Some of these abilities are more probable than others, says bioethicist Jonathan Moreno of the University of Pennsylvania in Philadelphia. Drugs exist that increase alertness, but so far no drug has clearly boosted brain function. “Honestly, there isn’t much, compared to caffeine or nicotine,” he says.

Giordano and Wurzman also describe drugs, microbial agents and toxins derived from nature that could harm enemy brains in a more traditional way. The list includes a water-soluble shellfish neurotoxin that can be aerosolized and causes death within minutes; a bacterium that can induce hallucinations, itchiness and strange tastes; and an amoebic microbe that crawls up the olfactory nerve to invade the brain, where it kills brain tissue.

“The article contains an arsenal of neuroweapons, and these raise lots of ethical and legal issues,” says bioethicist Jonathan Marks of Pennsylvania State University in University Park. “Any kind of drug that you administer for national security purposes raises profound questions.”

Some scientists have already committed to resisting the application of their research to what they consider illegal or immoral military purposes. “It’s not enough just to study the issue of ethics,” says Curtis Bell of Oregon Health & Science University in Portland. “The potential for misuse of this knowledge is so strong that the responsibility of neuroscience goes further than just studying.”

Bell has circulated a petition for neuroscientists, pledging signatories not to participate in developing technology that will be knowingly used for immoral or illegal purposes. “Neuroscientists should not provide tools for torture,” he says. So far, about 200 neuroscientists from 18 countries have signed, he says.

Ideally science would have no place in combat, Giordano acknowledges, but that view ignores reality. “On one hand, what you’d like to say is science and technology should never be used to do bad things,” says Giordano, who also holds positions at the University of New Mexico and the University of Oxford in England. “Yeah, and Santa Claus should come at Christmas and the Easter Bunny should come at Easter, and we should all live happily. History teaches us otherwise, so we have to be realistic about this.”

The United States military is investing in brain-related research, though it’s difficult to get a solid estimate of how much research is happening, Moreno says. The Defense Advanced Research Projects Agency, or DARPA, lists several neuroscience-related projects on its website, including “Accelerated Learning,” “Neurotechnology for Intelligence Analysts” and “Cognitive Technology Threat Warning System.”

“The fact of the matter is that we do live in a world in which there are people who would like to do bad things to us or our friends,” Moreno says. “Eventually, some of this stuff is going to be out there.”

Text 3 Hands off and on in schizophrenia

Photo caption (John Russell/Vanderbilt Univ.): HAND-Y SWITCH. Simultaneous stroking of a visible rubber hand (left) and the unseen actual hand (right) of this graduate student caused him to perceive the fake appendage as his own. Schizophrenia patients experience a rubber-hand illusion especially easily and intensely, denoting a disturbance in their sense of body and self.

People with schizophrenia rapidly and intensely perceive phony replicas of hands as their own, possibly contributing to this mental ailment’s signature hallucinations, a new study suggests.

In a series of tests, people with schizophrenia believed a rubber hand placed in front of them was theirs if the visible fake hand and the patient’s hidden, corresponding hand were simultaneously stroked with a paintbrush.

Mentally healthy people took longer to experience a less dramatic version of this rubber-hand illusion than schizophrenia patients did, but the effect’s vividness increased among healthy volunteers who reported magical beliefs, severe social anxiety and other characteristics linked to a tendency to psychosis, psychologist Sohee Park of Vanderbilt University in Nashville and her colleagues report online October 31 in PLoS ONE.

“Schizophrenia patients may have a more flexible internal representation of their bodies and a weakened sense of self,” Park says. “Even without psychosis, the rubber-hand illusion can be more pronounced in certain personality types.”

Mental health clinicians have written for several decades about a disturbed sense of self in schizophrenia. A team led by psychiatrist Avi Peled of Sha’ar Menashe Mental Health Center in Hadera, Israel, first reported a powerful rubber-hand illusion in the illness in 2000.

In further support of disturbed body perception in schizophrenia, Park — who directed the new study with graduate student Katharine Thakkar — notes that patients thought that their stationary, unseen hands moved an average of 2 centimeters closer to the rubber hand as they felt and watched brush strokes. Healthy participants reported a weaker version of this effect.

One patient, a 55-year-old man, felt that he floated above his own body and looked down on himself during the three-minute stroking procedure. In a follow-up session, this man had another out-of-body experience, in which he and the experimenter hovered above a lab table for several minutes.

An inability to perceive one’s body as one’s own in schizophrenia prompts heightened reactions to the sight of detached body parts, such as the rubber hand, proposes neurologist Peter Brugger of University Hospital Zurich in Switzerland. That would explain why simply looking at the fake appendage evoked a rubber-hand illusion in several patients in the new study, he says.

Park’s team studied 24 schizophrenia patients and 21 volunteers who had no mental disorders. Patients lived by themselves or in group homes and received antipsychotic medication and other services at an outpatient clinic.

A right-brain region previously linked to out-of-body experiences and representation of one’s body goes awry in schizophrenia, Park hypothesizes. Yoga or other body-awareness exercises may weaken this portion of schizophrenia’s grip, she suggests.

Peled proposes that a communication breakdown among sensory and association networks throughout the brain underlies schizophrenia. This can undermine a sense of body ownership, but patients more often hear tormenting voices and retreat from social life, he says.

Text 4 Prompt liver transplant boosts survival in heavy drinkers

Heavy drinkers who have severe liver inflammation are much more likely to survive if they get a prompt liver transplant than if they wait a few months, new research finds. Allowing some alcoholics with a potentially lethal form of liver disease to move up the waiting list for a transplant — a controversial area of transplant policy — would save lives, researchers suggest in the Nov. 10 New England Journal of Medicine.

About 10 to 15 percent of donor livers go to people whose liver disease stems directly from alcohol, says Robert Brown, a hepatologist at Columbia University in New York City who wasn’t involved in the new study. But the number is an estimate at best since many people who qualify for a transplant because of liver damage from other causes might also drink, he says.

In the United States, transplant guidelines require that alcoholics stay sober for six months before they can be placed on the waiting list for a liver transplant. Six months of abstinence and medication improve the health of many people with alcohol-related liver disease, but the delay can be fatal for those with a form of alcohol-induced liver inflammation that doesn’t respond to routine medication. About 70 to 80 percent of people with this condition, called severe alcoholic hepatitis, die within six months.

For the new study, physician Philippe Mathurin of the Claude Huriez Hospital in Lille, France, and his colleagues chose 26 patients to get a liver transplant within a few weeks of being diagnosed with alcoholic hepatitis. The patients had failed to respond to medication such as steroids, the typical treatment for the condition. The researchers also monitored 26 similar patients who didn’t receive transplants.

Six months after surgery, six of the 26 transplant patients had died, compared with 20 of the 26 who didn’t get a liver transplant.
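
As a quick illustration of what those counts imply, the sketch below, which is not the study's own analysis, converts them into six-month survival rates and runs a simple Fisher's exact test on the 2x2 table; the use of SciPy here is purely illustrative.

```python
# Illustrative arithmetic on the outcomes reported above (not the study's analysis).
from scipy.stats import fisher_exact

# Six-month outcomes: (died, survived) out of 26 patients in each group.
transplant_group = (6, 20)   # early liver transplant
control_group = (20, 6)      # matched patients without a transplant

for name, (died, survived) in (("early transplant", transplant_group),
                               ("no transplant", control_group)):
    total = died + survived
    print(f"{name}: {survived / total:.0%} alive at six months")

# Two-sided Fisher's exact test on the 2x2 contingency table.
_, p_value = fisher_exact([transplant_group, control_group])
print(f"Fisher's exact test p-value: {p_value:.4g}")
```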

These findings challenge the notion that transplant eligibility for all alcoholism-related liver patients must be linked to a prescribed abstinence period, Mathurin argues. But changing transplant guidelines takes years, he acknowledges, and the new data will need to be reproduced by other scientists. Ultimately, he says, patients on the waiting list for donor organs “need to be ranked by sickest-first and according to the severity of their disease, regardless of the cause.”

Underlying the U.S.’s six-months-sober policy is the assumption that alcohol-related liver disease is a self-inflicted problem, Brown says. “There is an inherent bias against alcohol as a reason for transplant.” He acknowledges that consuming alcohol is a deliberate act but notes that other factors that contribute to liver disease — obesity, smoking, sedentary lifestyle, hepatitis C — could also be seen as self-inflicted. In the study, three of the 26 transplant patients reported drinking at some point during the two years after the operation.

Surgeons have to make difficult assessments when determining how urgently a patient needs a transplant, Brown says, and the new data should be factored in. “When the likelihood of dying is 70 percent, the default has to be to transplant,” he says. Patients with severe alcoholic hepatitis who don’t respond to medication would constitute only a few percent of total liver transplants, he estimates.

The true prevalence of alcoholic hepatitis is unknown, but by some estimates it accounts for 10 to 35 percent of all alcohol-related liver disease.

Text 5 How the moon got its magnetism

External forces beating up the ancient moon may explain how it once maintained a magnetic field for more than 400 million years — longer than scientists had thought possible for such a small object.

Either wobbly rotation produced by Earth’s gravitational tug or asteroids smacking into the lunar surface may have triggered enough turbulence in the moon’s molten core to generate a long-lasting magnetic field, report two teams of scientists in the Nov. 10 Nature.

“This has been a very fundamental question for 40 years,” says study author Christina Dwyer, a graduate student at the University of California, Santa Cruz. Though absent today, this ancient field is recorded in rocks retrieved from the moon’s surface and in magnetized patches of crust spied by orbiting spacecraft. “The moon was magnetized. We don’t know how.”

Normally, heat escaping from a planet’s interior causes fluid in the core to slosh around, creating a magnetic field. But this explanation doesn’t work for the moon, which would have cooled off too quickly for the sloshing to be maintained. Instead, each research team points to a different spoon that would have mechanically stirred the early moon’s innards to create magnetism.

Dwyer’s team suggests that a slight, Earth-driven wobble in the moon’s axis of rotation mixed the liquid core. “The Earth is tugging on the moon and that tug — even though it’s quite small — keeps the moon wobbling,” says planetary scientist Francis Nimmo of UC Santa Cruz. The wobble is greater when the moon is closer to Earth. As the moon moves farther away — as it has, over the last four billion years or so — the wobble decreases. The magnetic field decreases, too, disappearing completely around 2.7 billion years ago, the team reports. If this model is right, says Nimmo, “younger [lunar] rocks ought to be magnetized, but they ought to be magnetized less strongly than older rocks.”

The second team proposes that large asteroids smacking into the moon messed up its rotation rate and perturbed the liquid core. Six large, ancient lunar impact basins with magnetized rocks at their centers support this idea, says study author and fluid dynamicist Michael Le Bars of the French national research agency and the University of Aix-Marseille. “A reasonably large impact can generate a magnetic field for about 10,000 years,” he says. If so, then lunar rocks might show spike after spike of magnetism as impacts pummeled the early moon.

Though different, the two theories are not mutually exclusive and together represent “a major, major advance in our understanding,” says Benjamin Weiss, a planetary scientist at MIT. Studying the magnetic record preserved in lunar rocks would be a good way to test the theories, he notes, since tracing magnetic intensities over time should produce a pattern that might match one or both of the predictions. Right now, scientists are limited to working with the lunar rocks already on Earth — some of which have already revealed magnetic fields, though the measurements are decades old. “It would be nice if somebody goes back and redoes those measurements with the modern techniques,” Nimmo says.

Ian Garrick-Bethell, a planetary scientist at UC Santa Cruz, says both papers are “clever and elegant,” but that more mathematical simulations need to be done. “They’ve done their homework,” he says. “But some of the detailed mechanics may not entirely be understood.”

