How our body's defences aid computers in distress
Did humans and chimps once interbreed?
Extrasolar-planet hunters find triple-Neptune system
A blog for topics in science, engineering, math, and physics. I seek to comment on and reference events, sites, and articles of interest to people of varied backgrounds.
A man died of H5N1 flu in Beijing in November 2003 - two full years before China admitted any human cases of H5N1. The death of the 24-year-old from bird flu came months before China even admitted H5N1 was circulating in its poultry. The man was tested for respiratory illness because of concern in the wake of the SARS epidemic.
It is not clear when the Chinese scientists who reported the finding discovered this, but they tried to withdraw their paper from the New England Journal of Medicine at the last minute on Wednesday. It was too late to prevent publication.
The case suggests that, as has long been suspected, many more people have caught H5N1 flu in China than have been reported, and for a longer time. The more human cases there are, the more chances the virus has to evolve into a human pandemic strain of flu.
"It's a very important issue that needs to be clarified urgently,'' Roy Wadia, a spokesman for the World Health Organization, said on Thursday in Beijing. "It raises questions as to how many other cases may not have been found at the time or may have been found retrospectively in testing."
Genesis spent 27 months in space, collecting solar wind ions thought to reflect the composition of the early solar system. But on 8 September 2004, a capsule containing the precious ions failed to release its parachutes and crashed into the Utah desert, destroying much of its contents.
[I]n a detailed report released on Tuesday, the board has confirmed the backwards design as the main cause of the crash. The problem originally stemmed from the fact that the sensor design was copied from NASA's comet-dust collecting Stardust mission, which began development about a year ahead of work on Genesis, says board chair Michael Ryschkewitsch, director of NASA's Applied Engineering and Technology Directorate.
But Genesis used additional electronic components, so it was forced to use two electronics boxes rather than the single one used by Stardust. In the process of making that change, "the person doing the packaging lost track of the [sensor] orientation", Ryschkewitsch told New Scientist.
So the engineer installed the sensors backwards. Well, mistakes happen, but what about testing the system? Surely tests would catch this error.
The mistake was never caught because the sensors were never put into a centrifuge and tested, as originally planned. Instead, an electrical engineer – not trained in reviewing complex mechanical drawings – compared drawings of the Stardust and Genesis sensors, and incorrectly concluded the designs were the same.
I spent years at the Navy's Operational Test and Evaluation Force, and we had a mantra: end-to-end testing in the operational environment. The truth is that rarely happened. Testing was frequently the last step in the development of a system, and by then there was little money left to pay for it. So the Navy often cut the same corner that was cut here. This episode with NASA shows what can happen when testing is shortchanged. Not a pretty sight.
More information is here.
From ScienceDaily:
Now [...] researchers in Carnegie Mellon University's School of Computer Science have found a way to help computers understand the geometric context of outdoor scenes and thus better comprehend what they see. The discovery promises to revive an area of computer vision research all but abandoned two decades ago because it seemed insoluble. It may ultimately find application in vision systems used to guide robotic vehicles, monitor security cameras and archive photos.
Using machine learning techniques, Robotics Institute researchers Alexei Efros and Martial Hebert, along with graduate student Derek Hoiem, have taught computers how to spot the visual cues that differentiate between vertical surfaces and horizontal surfaces in photographs of outdoor scenes. They've even developed a program that allows the computer to automatically generate 3-D reconstructions of scenes based on a single image.
This is a tremendous advance. On the surface, no pun intended, this reads like a small step: computers telling whether edges run vertically or horizontally. In actuality, as I see it (pun intended), it is a way for a computer to interpret what it sees in a very human-like way.
Let me explain.
When we see something, we don't see it the way a computer sees an image. A computer merely records the picture samples (called pixels) within its field of view. Pixels are just numbers arranged in a rectangular grid. That's all.
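To make the point concrete, here is a minimal sketch (with made-up brightness values) of what a computer actually receives: nothing but a rectangle of numbers, with no built-in notion of sky, wall, or ground.

```python
# A tiny grayscale "image" as a computer sees it: a rectangular grid of
# brightness values (0 = black, 255 = white). Nothing in the data itself
# says "sky" or "building" -- that meaning must be inferred.
image = [
    [200, 200, 210, 215],   # bright rows -- could be sky
    [190, 195, 205, 210],
    [ 90, 100,  95,  85],   # darker rows -- could be a building
    [ 30,  40,  35,  25],   # darkest rows -- could be ground
]

height = len(image)
width = len(image[0])
lo = min(min(row) for row in image)
hi = max(max(row) for row in image)
print(f"{width}x{height} pixels, values {lo}..{hi}")
```

Everything a vision system does, it does starting from data like this.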
When humans see a scene or image, our eyes take in those same pixels, but we also interpret the image in a familiar context. It is this last part that is vital to humans and now to computers. To see what I mean, recall that you can judge the distance to a building simply by looking at it. Strictly speaking, you have no direct way to measure that distance, because usually you don't know the building's actual height.

The way you intuit the distance is through prior knowledge of the usual sizes of buildings (say, the rows of windows that tell you how many floors there are). From that known size and the building's apparent size, your mind estimates the distance. That is, you judge distance based on image interpretation informed by earlier experience.
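The reasoning above can be sketched with the standard pinhole camera model: if you assume a real-world height for the building (from the floors you count), its apparent size in the image pins down the distance. All the numbers below are hypothetical, chosen purely for illustration.

```python
# Pinhole camera model: distance = real_height * focal_length / image_height.
# Knowing (or assuming) the object's true size is what makes the
# distance recoverable from a single image.
focal_length_px = 1000           # camera focal length in pixels (assumed)
storeys = 10                     # counted from rows of windows
real_height_m = storeys * 3.0    # assume roughly 3 m per storey
image_height_px = 150            # building's height measured in the photo

distance_m = real_height_m * focal_length_px / image_height_px
print(f"Estimated distance: {distance_m:.0f} m")
```

Change the assumed storey height and the estimate shifts accordingly; the geometry is trivial, the prior knowledge is the hard part, and that is exactly what the CMU work is teaching computers to supply.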
This step with computers being capable of similar reasoning and learning is truly amazing and one more step to computers seeing as we see.
Frink: It should be obvious to even the most dimwitted individual who holds an advanced degree in hyperbolic topology that Homer Simpson has stumbled into the third dimension. . . . (drawing on a blackboard) Here is an ordinary square.
Wiggum: Whoa, whoa—slow down, egghead!
Frink: But suppose we extend the square beyond the two dimensions of our universe, along the hypothetical z-axis, there. This forms a three-dimensional object known as a "cube," or "Frinkahedron" in honor of its discoverer.
"One of the themes we've harped on is Professor Frink trying to seize credit for something," Keeler says. "That should be very familiar to people in academia."
Just terrific.
Here's a link for the answer to the final question in the Science News article.
"Researchers at the Max Planck Institute for Plasma Physics and the Humboldt University, both in Berlin, have used underwater electrical discharges to generate luminous plasma clouds resembling ball lightning that last for nearly half a second and are up to 20 centimetres across.
They hope that these artificial entities will help them understand the bizarre phenomenon and perhaps even provide insights into the hot plasmas needed for fusion power plants.
You can watch a super-slow-motion video of the ball lightning here (3.7MB AVI)."
Article two:
"What happened to Titan's craters? NASA's Cassini mission should have seen hundreds of impact craters on Saturn's giant moon, but so far it has only spotted a handful.
The latest clues in the mystery of the missing craters suggest a conspiracy between volcanoes, rain and settling soot - perhaps aided by an eggshell-thin crust.
Cassini has aimed its radar at Titan five times, mapping five narrow strips of terrain. In a paper published in Nature, the radar team analyse the second strip in detail."
Dr. West, the Distinguished Professor of Medicine and Physiology at the University of California, San Diego, School of Medicine, is one of the world's leading authorities on respiratory physiology and was a member of Sir Edmund Hillary's 1960 expedition to the Himalayas. After he submitted a paper on the design of the human lung to the American Journal of Respiratory and Critical Care Medicine, an editor emailed him that the paper was basically fine. There was just one thing: Dr. West should cite more studies that had appeared in the respiratory journal.
If that seems like a surprising request, in the world of scientific publishing it no longer is. Scientists and editors say scientific journals increasingly are manipulating rankings -- called "impact factors" -- that are based on how often papers they publish are cited by other researchers.
"I was appalled," says Dr. West of the request. "This was a clear abuse of the system because they were trying to rig their impact factor."
Just as television shows have Nielsen ratings and colleges have the U.S. News rankings, science journals have impact factors. Now there is mounting concern that attempts to manipulate impact factors are harming scientific research.
So here's a journal that wants to raise its score, and the editor tells the author to cite more references from that journal. That's much like trying to raise a page's Google PageRank by planting more links. Granted, the comparison is a little weak, because Google scores a page based on the pages that point to it. Nonetheless, it's not too much of a stretch to make the comparison.
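For readers unfamiliar with the comparison, here is a toy version of the link-based scoring in question: a page's (or journal's) standing grows with the links or citations pointing to it. This is a bare-bones power-iteration sketch of simplified PageRank, not Google's actual algorithm.

```python
# Simplified PageRank by power iteration with the usual 0.85 damping
# factor. Page B receives two incoming links, C one, A none -- so B
# ends up with the highest score.
links = {            # who links to whom
    "A": ["B"],
    "B": ["C"],
    "C": ["B"],
}
pages = sorted(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores settle
    new = {p: (1 - 0.85) / len(pages) for p in pages}
    for src, targets in links.items():
        for t in targets:
            new[t] += 0.85 * rank[src] / len(targets)
    rank = new

print(max(rank, key=rank.get))  # the page with the most incoming weight
```

The gaming tactic in the article works on the same principle: manufacture more citations pointing at yourself and your score rises.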
Why do this? From the article:
Impact factors are calculated annually for some 5,900 science journals by Thomson Scientific, part of the Thomson Corp., of Stamford, Conn. Numbers less than 2 are considered low. Top journals, such as the Journal of the American Medical Association, score in the double digits. Researchers and editors say manipulating the score is more common among smaller, newer journals, which struggle for visibility against more established rivals.
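The arithmetic behind those numbers is simple. The standard two-year impact factor is citations received this year to articles from the previous two years, divided by the number of articles published in those two years. The journal counts below are made up for illustration.

```python
# Two-year impact factor: citations this year to the last two years'
# articles, divided by the number of articles in those two years.
def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    return citations_to_prev_two_years / articles_prev_two_years

small_journal = impact_factor(180, 120)   # 1.5 -- "low", under 2
top_journal = impact_factor(6000, 400)    # 15.0 -- double digits, like JAMA
gamed = impact_factor(180 + 60, 120)      # 60 coerced self-citations -> 2.0
print(small_journal, top_journal, gamed)
```

Note how cheaply a small journal crosses the "respectable" threshold of 2: sixty coerced self-citations spread across a hundred-odd papers is exactly the kind of request Dr. West received.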
Thomson Scientific is set to release the latest impact factors this month. Thomson has long advocated that journal editors respect the integrity of the rankings. "The energy that's put into efforts to game the system would be better spent publishing excellent papers," says Jim Testa, director of editorial development at the company.
Impact factors matter to publishers' bottom lines because librarians rely on them to make purchasing decisions. Annual subscriptions to some journals can cost upwards of $10,000.
The result, says Martin Frank, executive director of the American Physiological Society, which publishes 14 journals, is that "we have become whores to the impact factor." He adds that his society doesn't engage in these practices.

What's the impact and future of these "impact factors"?
This makes me wonder just how much science and research is done for publicity and simple ratings. Like television shows that pander to ratings and give audiences what sells but not necessarily what's important, journals now chase what's popular rather than what's needed. This is one more data point showing how science is degrading itself and doing no one any favors.

Scientists and publishers worry that the cult of the impact factor is skewing the direction of research. One concern, says Mary Ann Liebert, president and chief executive of her publishing company, is that scientists may jump on research bandwagons, because journals prefer popular, mainstream topics, and eschew less-popular approaches for fear that only a lesser-tier journal will take their papers. When scientists are discouraged from pursuing unpopular ideas, finding the correct explanation of a phenomenon or a disease takes longer.
"If you look at journals that have a high impact factor, they tend to be trendy," says immunologist David Woodland of the nonprofit Trudeau Institute, of Saranac Lake, N.Y., and the incoming editor of Viral Immunology. He recalls one journal that accepted immunology papers only if they focused on the development of thymus cells, a once-hot topic. "It's hard to get into them if you're ahead of the curve."
As examples of that, Ms. Liebert cites early research on AIDS, gene therapy and psychopharmacology, all of which had trouble finding homes in established journals. "How much that relates to impact factor is hard to know," she says. "But editors and publishers both know that papers related to cutting-edge and perhaps obscure research are not going to be highly cited."
Another concern is that impact factors, since they measure only how many times other scientists cite a paper, say nothing about whether journals publish studies that lead to something useful. As a result, there is pressure to publish studies that appeal to an academic audience oriented toward basic research.
Journals' "questionable" steps to raise their impact factors "affect the public," Ms. Liebert says. "Ultimately, funding is allocated to scientists and topics perceived to be of the greatest importance. If impact factor is being manipulated, then scientists and studies that seem important will be funded perhaps at the expense of those that seem less important."