The U.S. spends more than $400 billion a year on scientific research (about 2.7 percent of 2011 GDP), more than any other country. The result? A mountain of data bigger than any single researcher could possibly go through.
At the recent Conference on Knowledge Discovery and Data Mining, held in New York City, researchers pointed out that we are much better at generating new data than at analyzing the data we already have. That raises the question: how many pieces of vital data are buried in the avalanche of information produced by decades of experimentation? How many key discoveries are we missing?
Case in point: In two hours, KnIT, a text-mining system developed by researchers at IBM and Baylor College of Medicine, read 100,000 research papers looking for information on p53, a protein believed to help suppress tumors, and kinases, the enzymes that interact with it.
“Having analysed papers up until 2003, KnIT identified seven of the nine kinases discovered over the subsequent 10 years. More importantly, it also found what appeared to be two p53 kinases unknown to science,” Dr. Olivier Lichtarge, professor at Baylor College of Medicine in Houston, tells New Scientist.
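KnIT's actual reasoning pipeline is far more sophisticated, but the core idea behind this kind of literature mining can be sketched in a few lines: scan each paper's text for mentions of the target protein, count which candidate kinases co-occur with it, and rank the candidates. The function name, kinase list, and mock abstracts below are illustrative assumptions, not taken from the study:

```python
from collections import Counter

def rank_candidate_kinases(abstracts, target="p53",
                           kinases=("CHK2", "ATM", "AKT1")):
    """Rank candidate kinases by how often they co-occur with the
    target protein in the same abstract (a toy stand-in for KnIT's
    far richer text analysis)."""
    counts = Counter()
    for text in abstracts:
        lowered = text.lower()
        if target.lower() in lowered:
            for kinase in kinases:
                if kinase.lower() in lowered:
                    counts[kinase] += 1
    # most_common() sorts candidates by co-occurrence count, descending
    return counts.most_common()

# Mock abstracts for illustration only.
abstracts = [
    "We show that CHK2 phosphorylates p53 after DNA damage.",
    "ATM signalling stabilizes p53 via CHK2 activation.",
    "AKT1 regulates cell growth independently of p53 status.",
]
ranking = rank_candidate_kinases(abstracts)
print(ranking)  # CHK2 co-occurs with p53 twice, ATM and AKT1 once each
```

A real system would also weight evidence by context, publication date, and phrasing, which is how KnIT could be restricted to pre-2003 papers and then scored against kinases discovered later.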
The Baylor team hopes not only to accelerate the discovery of kinases and new cancer treatments, but also that this technology might someday yield algorithms tailored to an individual's genetics, identifying cures for what ails them.
At Carnegie Mellon University in Pittsburgh, Natasa Miskov-Zivanov is using KnIT to speed up pharmaceutical testing. With funding from the Defense Advanced Research Projects Agency, Miskov-Zivanov's computational cell models build themselves, a process that normally takes years and input from many different sources.
Meta-research is only in its infancy, but it has already produced significant scientific discoveries. Only time will tell how far this astonishing new technology will take us, and how much it could improve Americans' lives.