
Monday, June 05, 2006

How are Science Journals Like Google?

The answer: like web pages ranked by Google, they try to improve their ranking by increasing the number of papers that cite the journal.

Dr. West, the Distinguished Professor of Medicine and Physiology at the University of California, San Diego, School of Medicine, is one of the world's leading authorities on respiratory physiology and was a member of Sir Edmund Hillary's 1960 expedition to the Himalayas. After he submitted a paper on the design of the human lung to the American Journal of Respiratory and Critical Care Medicine, an editor emailed him that the paper was basically fine. There was just one thing: Dr. West should cite more studies that had appeared in the respiratory journal.

If that seems like a surprising request, in the world of scientific publishing it no longer is. Scientists and editors say scientific journals increasingly are manipulating rankings -- called "impact factors" -- that are based on how often papers they publish are cited by other researchers.

"I was appalled," says Dr. West of the request. "This was a clear abuse of the system because they were trying to rig their impact factor."

Just as television shows have Nielsen ratings and colleges have the U.S. News rankings, science journals have impact factors. Now there is mounting concern that attempts to manipulate impact factors are harming scientific research.


So here's a journal that wants to up its score, and the editor tells the author to cite more references from that journal. That's much like trying to raise a page's Google rank by engineering more links pointing at it. Granted, the comparison is a little loose, because Google scores a page based on the other pages that link to it, not on the links the page itself contains. Nonetheless, it's not too much of a stretch.
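To make the analogy concrete, here is a toy PageRank-style sketch in Python (purely illustrative, not Google's actual algorithm): a page's score flows in from the pages that link to it, much as a journal's impact factor flows in from the papers that cite it.

    # Toy sketch of the PageRank idea: a page's score comes from the
    # scores of the pages that point to it.
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                for target in targets:
                    new_rank[target] += damping * rank[page] / len(targets)
            rank = new_rank
        return rank

    # Toy "web": page C has the most incoming links, so it ends up ranked highest.
    toy_links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    print(pagerank(toy_links))

The sketch makes the blogger's caveat precise: the score rewards incoming links, not links a page contains, so an editor soliciting citations to his own journal is closer to link-farming than to stuffing a page with outbound links.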

Why do this? From the article:

Impact factors are calculated annually for some 5,900 science journals by Thomson Scientific, part of the Thomson Corp., of Stamford, Conn. Numbers less than 2 are considered low. Top journals, such as the Journal of the American Medical Association, score in the double digits. Researchers and editors say manipulating the score is more common among smaller, newer journals, which struggle for visibility against more established rivals.

Thomson Scientific is set to release the latest impact factors this month. Thomson has long advocated that journal editors respect the integrity of the rankings. "The energy that's put into efforts to game the system would be better spent publishing excellent papers," says Jim Testa, director of editorial development at the company.

Impact factors matter to publishers' bottom lines because librarians rely on them to make purchasing decisions. Annual subscriptions to some journals can cost upwards of $10,000.

The result, says Martin Frank, executive director of the American Physiological Society, which publishes 14 journals, is that "we have become whores to the impact factor." He adds that his society doesn't engage in these practices.
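For reference, the impact factor those excerpts describe is just a ratio: citations received in a given year to a journal's articles from the previous two years, divided by the number of articles it published in those two years. A quick sketch with made-up numbers shows why a few coerced citations move the needle:

    # Standard two-year impact factor: this year's citations to the last
    # two years' articles, divided by the number of those articles.
    # (Numbers below are invented for illustration.)
    def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
        return citations_to_prior_two_years / articles_in_prior_two_years

    # 400 articles over two years, 800 citations to them this year: score 2.0,
    # right at the "low" threshold mentioned in the article.
    print(impact_factor(800, 400))        # 2.0

    # Nudging each new batch of authors into a few extra self-citations
    # inflates the numerator without any new science being done.
    print(impact_factor(800 + 120, 400))  # 2.3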

What's the impact and future of these "impact factors"?

Scientists and publishers worry that the cult of the impact factor is skewing the direction of research. One concern, says Mary Ann Liebert, president and chief executive of her publishing company, is that scientists may jump on research bandwagons, because journals prefer popular, mainstream topics, and eschew less-popular approaches for fear that only a lesser-tier journal will take their papers. When scientists are discouraged from pursuing unpopular ideas, finding the correct explanation of a phenomenon or a disease takes longer.

"If you look at journals that have a high impact factor, they tend to be trendy," says immunologist David Woodland of the nonprofit Trudeau Institute, of Saranac Lake, N.Y., and the incoming editor of Viral Immunology. He recalls one journal that accepted immunology papers only if they focused on the development of thymus cells, a once-hot topic. "It's hard to get into them if you're ahead of the curve."

As examples of that, Ms. Liebert cites early research on AIDS, gene therapy and psychopharmacology, all of which had trouble finding homes in established journals. "How much that relates to impact factor is hard to know," she says. "But editors and publishers both know that papers related to cutting-edge and perhaps obscure research are not going to be highly cited."

Another concern is that impact factors, since they measure only how many times other scientists cite a paper, say nothing about whether journals publish studies that lead to something useful. As a result, there is pressure to publish studies that appeal to an academic audience oriented toward basic research.

Journals' "questionable" steps to raise their impact factors "affect the public," Ms. Liebert says. "Ultimately, funding is allocated to scientists and topics perceived to be of the greatest importance. If impact factor is being manipulated, then scientists and studies that seem important will be funded perhaps at the expense of those that seem less important."

This makes me wonder just how much science and research is done for publicity and simple ratings. Like television shows that pander to ratings and give audiences what sells rather than what's necessarily important, journals now chase what's popular instead of what's needed. This is one more data point showing how science is degrading itself and doing no one any favors.
