The National Ecological Observatory Network, funded by Congress for $434 million


Question    The National Ecological Observatory Network, funded by Congress for $434 million, will equip 106 U.S. sites with sensors to gather ecological data all day, every day, for 30 years after it goes operational in 2017. The Human Brain Project, supported by $1.6 billion from the European Union, intends to create a supercomputer simulation of a working human brain, including all 86 billion neurons and 100 trillion synapses. The International Cancer Genome Consortium, 74 research teams across 17 countries spending an estimated $1 billion, is compiling 25,000 tumor genome sequences from 50 types of cancers. Big data is big business in the life sciences, attracting lots of money and prestige. It’s relatively young: the move toward big data can be traced back to 1990, when researchers joined together to sequence all 3 billion letters in the human genome. That project concluded in 2003, and since then, the life sciences have become a data juggernaut, propelled forward by new sequencing and imaging technologies that accumulate data at astonishing speeds.
    But not all scientists think bigger is better. As of July 9, 2014, for example, more than 450 researchers had signed a public letter criticizing the Human Brain Project, citing a "significant risk" that the project will fail to meet its goal. One neuroscientist called the project "a waste of money," while another bluntly said the idea of simulating the human brain is downright "crazy." Other big data projects have also been criticized, especially for cost and lack of results. The core of recent criticisms against big data projects is the concern that expensive, massive data sets on biological phenomena—including the brain, the genome, the biosphere and more—won’t necessarily lead to scientific discoveries. "One of the problems with ideas about big data is the implicit notion that simply having lots of data will answer questions for you, and it won’t," says J. Anthony Movshon, a neuroscientist at New York University. Large data sets are useful only when combined with the right tools and theories to interpret them, he says, and those have largely been lacking in the life sciences. That’s one reason biological data is piling up far faster than it is being analyzed. "We have an inability to slow down and focus," says Kenneth Weiss, an evolutionary geneticist at Penn State University. "I wouldn’t say big data is bad, but it’s a fad, and we’re not learning a lesson from it."
    Other areas of science, such as physics and astronomy, have a rich history of big data, as well as the organization and infrastructure to use that data. Take the Hubble Space Telescope, which has made 1 million observations, amounting to over 100 terabytes of data, since its launch in 1990. More than 10,000 scientific articles have been published using that data, including the discovery of dark energy and the age of the universe. Or consider the Large Hadron Collider, a particle accelerator that produces tens of terabytes of data each night. In 2012, that data confirmed the existence of the Higgs boson, also called the "God particle," among other high-profile discoveries in particle physics. The life sciences, on the other hand, have barreled ahead in data collection without the ability to determine which types of data are most useful, how to share it, or how to reproduce results. Research and funding institutions recognize this limitation, says Philip Bourne, associate director for data science at the National Institutes of Health, and the NIH is working to set aside funding and manpower to find ways to make data usable. Bourne is optimistic: "Making full use of very large amounts of data takes time, but I think it will come," he says.
    Bourne and others, like David Van Essen, lead investigator of the $40 million NIH-funded Human Connectome Project (HCP), believe that gathering data first and asking questions second is a new, exciting way to make discoveries about the natural world. The HCP, a consortium of 36 investigators at 11 institutions, is a big data effort to map the connections in the brain using high-resolution brain scans and behavioral information from 1,200 adults. According to the project’s website, the HCP data set will "reveal much about what makes us uniquely human and what makes every person different from all others." On the other hand, there’s not a single hypothesis in sight. This is a fundamentally different way of doing science from hypothesis-driven experiments, the traditional bedrock of the scientific method, and many researchers have their doubts about it. "Science depends upon predictions being generated and those hypotheses being tested," says ecologist Robert Paine of the University of Washington. "Mega models won’t bring us to the promised land." Others say the effort required to gather the data simply doesn’t warrant the price tag: "The idea that you should collect a lot of information because somewhere in this chaff is a little bit of wheat is a poor case for using a lot of money," says Movshon.

It can be inferred from the last paragraph that______.

Options A. research should be carried out in a standard pattern
B. some researchers don’t feel a hypothesis is a priority
C. it’s hard to reach conclusions without a hypothesis
D. data collection costs the largest portion of money in research

Answer    B

Explanation    This is an inference question; the stem points to the last paragraph. The fourth sentence of that paragraph notes that the ongoing HCP project has not a single hypothesis in sight, while the first sentence says that its lead investigator regards gathering data first and asking questions second as a new way of making discoveries. In other words, some scientists do not consider a hypothesis a prerequisite for scientific research, so the answer is [B].
Please credit the original source when reposting: https://tihaiku.com/zcyy/3242727.html