The visual cortex in the human brain interprets visual input. A computer scientist from the University of Innsbruck has managed to simulate the workings of the visual cortex with high accuracy in a computational model.
Photo: An Innsbruck scientist has recreated the processing of optical signals in the visual cortex as a computational model. (Photo: flickr.com/orangeacid)
The human brain is a remarkable organ: It integrates and computes all the information the human body perceives. The information-processing properties of the brain are the main research subject of computational neuroscience, an interdisciplinary branch of science aimed at studying and understanding the inner workings of the human brain. Different parts of the brain process different signals; the part responsible for seeing is the visual cortex. Antonio Rodríguez-Sánchez, a computer scientist at the University of Innsbruck, recently published a computational model that aims to answer how neurons in the human brain interpret shapes and objects.
The visual cortex consists of millions of neurons. Modern understanding of this part of the brain dates back to the work of the neuroscientists Torsten N. Wiesel and David H. Hubel from 1962 onwards; both received the Nobel Prize in Medicine in 1981 for that work. "You can picture the neurons responsible for perceiving and interpreting objects as a pyramid," Antonio Rodríguez-Sánchez explains. Rodríguez-Sánchez has translated this hierarchy, in which lower levels interpret simple features such as corners and edges while higher levels recognize whole objects, into a computational model, with mathematical equations replacing neurons.
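The pyramid idea can be sketched in a strongly simplified form. The following Python snippet is an illustration only, not the model from the published paper: it assumes a gradient-based edge stage and plain max-pooling as stand-ins for the actual neural computations, just to show how units at each level summarize larger and larger regions of the visual field.

```python
import numpy as np

def edge_responses(image):
    """Lowest pyramid level: units tuned to local intensity changes (edges).
    np.gradient is an illustrative stand-in for biological edge detectors."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def pool(responses, k=2):
    """One step up the pyramid: each unit summarizes a k x k patch of the
    level below, so higher levels cover larger parts of the visual field."""
    h, w = responses.shape
    h2, w2 = h // k, w // k
    trimmed = responses[:h2 * k, :w2 * k]
    return trimmed.reshape(h2, k, w2, k).max(axis=(1, 3))

def visual_pyramid(image, levels=3):
    """Stack of response maps, from edge-like features at the bottom
    to coarse, object-scale units at the top."""
    layers = [edge_responses(image)]
    for _ in range(levels - 1):
        layers.append(pool(layers[-1]))
    return layers

# A bright square on a dark background: the bottom layer responds at the
# square's edges, and each higher layer is a coarser summary map.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
layers = visual_pyramid(img, levels=3)
```

Each call to `pool` halves the map's resolution, so a 16x16 input yields 16x16, 8x8, and 4x4 response maps; a single top-level unit effectively "sees" a 4x4 region of the original image.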
The results of the model were compared with existing data from medical research on the primate brain. The outcome is promising: "My model resulted in an 83% match, which is very high for a model of this kind," Rodríguez-Sánchez says. In practical terms, that means robots with near-human visual capabilities are no longer a matter of the far future.
Robots with these capabilities could, for example, help people with disabilities: "One example is Playbot, a project in my previous lab at York University in Canada: an advanced wheelchair equipped with sensors that recognize the direction in which a person is looking and that can initiate the appropriate actions," says Rodríguez-Sánchez. "The system recognizes objects - when you look at a door, the chair will open or close it for you and drive you through it." The scientist's new model could advance this research even further. Another possible use case for Rodríguez-Sánchez's findings is medicine: "There are people whose eyes work but who cannot see due to a damaged visual cortex," he explains. For the eyes, researchers are already contemplating and prototyping artificial retina implants. "Who knows, maybe we will even be able to replace damaged parts of the brain in the future. Current research already points in this direction."
Rodríguez-Sánchez, A. J., & Tsotsos, J. K. (2012). The roles of endstopped and curvature tuned computations in a hierarchical representation of 2D shape. PLoS ONE, 7(8), 1-13.