Friday, May 13, 2011

New Algorithm Offers Ability to Influence Systems Such as Living Cells or Social Networks

An MIT researcher has come up with a new computational model that can analyze any type of complex network -- biological, social or electronic -- and reveal the critical points that can be used to control the entire system.

Potential applications of this work, which appears as the cover story in the May 12 issue of Nature, include reprogramming adult cells and identifying new drug targets, says study author Jean-Jacques Slotine, an MIT professor of mechanical engineering and brain and cognitive sciences.

Slotine and his co-authors applied their model to dozens of real-life networks, including cell-phone networks, social networks, the networks that control gene expression in cells and the neuronal network of the C. elegans worm. For each, they calculated the percentage of points that need to be controlled in order to gain control of the entire system.

For sparse networks such as gene regulatory networks, they found the number is high, around 80 percent. For dense networks -- such as neuronal networks -- it's more like 10 percent.

The paper, a collaboration with Albert-Laszlo Barabasi and Yang-Yu Liu of Northeastern University, builds on more than half a century of research in the field of control theory.

Control theory -- the study of how to govern the behavior of dynamic systems -- has guided the development of airplanes, robots, cars and electronics. The principles of control theory allow engineers to design feedback loops that monitor input and output of a system and adjust accordingly. One example is the cruise control system in a car.
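
As a hedged illustration of that feedback idea, the toy loop below mimics a proportional cruise controller: it repeatedly measures the output (speed), compares it to a setpoint, and adjusts the input (throttle). The car model and the gain are invented for the sketch.

```python
# A minimal sketch of the feedback loop described above, in the spirit
# of cruise control. The toy car model and the gain are assumptions.
target = 100.0   # desired speed (km/h) -- the setpoint
speed = 80.0     # measured output of the system
gain = 0.5       # proportional gain, chosen by hand

for _ in range(20):
    error = target - speed          # monitor: compare output to setpoint
    throttle = gain * error         # adjust: input proportional to error
    speed += 0.2 * throttle - 0.5   # toy plant: throttle response minus drag

# A pure proportional loop settles just below the setpoint (here ~95 km/h);
# real cruise controllers add an integral term to remove that residual error.
print(f"speed after 20 steps: {speed:.1f} km/h")
```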

However, while commonly used in engineering, control theory has been applied only intermittently to complex, self-assembling networks such as living cells or the Internet, Slotine says. Control research on large networks has been concerned mostly with questions of synchronization, he says.

In the past 10 years, researchers have learned a great deal about the organization of such networks, in particular their topology -- the patterns of connections between different points, or nodes, in the network. Slotine and his colleagues applied traditional control theory to these recent advances, devising a new model for controlling complex, self-assembling networks.

"The area of control of networks is a very important one, and although much work has been done in this area, there are a number of open problems of outstanding practical significance," says Adilson Motter, associate professor of physics at Northwestern University. The biggest contribution of the paper by Slotine and his colleagues is to identify the type of nodes that need to be targeted in order to control complex networks, says Motter, who was not involved with this research.

The researchers started by devising a new computer algorithm to determine how many nodes in a particular network need to be controlled in order to gain control of the entire network. (Examples of nodes include members of a social network, or single neurons in the brain.)

"The obvious answer is to put input to all of the nodes of the network, and you can, but that's a silly answer," Slotine says."The question is how to find a much smaller set of nodes that allows you to do that."

There are other algorithms that can answer this question, but most of them take far too long -- years, even. The new algorithm quickly tells you both how many points need to be controlled, and where those points -- known as "driver nodes" -- are located.
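
The underlying Nature paper finds driver nodes via a maximum matching of the directed network. The sketch below is a minimal rendering of that idea using networkx's Hopcroft-Karp matching; it is not the authors' code, and the function name is ours.

```python
# Minimal sketch of driver-node identification via maximum matching,
# the approach used in the Nature paper; this rendering is ours.
import networkx as nx

def driver_nodes(G: nx.DiGraph):
    """Nodes whose 'in' copy is unmatched in a maximum matching of the
    bipartite representation of G; these must receive direct input."""
    B = nx.Graph()
    outs = [("out", v) for v in G]
    ins = [("in", v) for v in G]
    B.add_nodes_from(outs, bipartite=0)
    B.add_nodes_from(ins, bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges())

    # Hopcroft-Karp keeps this fast even on large networks.
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=outs)

    unmatched = {v for tag, v in ins if (tag, v) not in matching}
    # A perfectly matched network still needs one driver node.
    return unmatched if unmatched else {next(iter(G))}
```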

Next, the researchers figured out what determines the number of driver nodes, which is unique to each network. They found that the number depends on a property called "degree distribution," which describes the number of connections per node.

A higher average degree (meaning the points are densely connected) means fewer nodes are needed to control the entire network. Sparse networks, which have fewer connections, are more difficult to control, as are networks where the node degrees are highly variable.
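
Reusing the driver_nodes sketch above, the toy experiment below directionally reproduces that trend on random directed networks; it will not match the paper's exact figures for real datasets.

```python
# Denser random networks need a smaller fraction of driver nodes.
import random

def random_digraph(n, avg_degree):
    """Random directed graph with the given average (in + out) degree."""
    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    while G.number_of_edges() < n * avg_degree / 2:
        u, v = random.sample(range(n), 2)
        G.add_edge(u, v)
    return G

for k in (2, 4, 8, 16):
    G = random_digraph(500, k)
    frac = len(driver_nodes(G)) / 500
    print(f"average degree {k:>2}: {frac:.0%} of nodes must be driven")
```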

In future work, Slotine and his collaborators plan to delve further into biological networks, such as those governing metabolism. Figuring out how bacterial metabolic networks are controlled could help biologists identify new targets for antibiotics by determining which points in the network are the most vulnerable.


Source

Saturday, May 7, 2011

Robot Engages Novice Computer Scientists

A product of CMU's famed Robotics Institute, Finch was designed specifically to make introductory computer science classes an engaging experience once again.

A white plastic, two-wheeled robot with bird-like features, Finch can quickly be programmed by a novice to say "Hello, World," or do a little dance, or make its beak glow blue in response to cold temperature or some other stimulus. But the simple look of the tabletop robot is deceptive. Based on four years of educational research sponsored by the National Science Foundation, Finch includes a number of features that could keep students busy for a semester or more thinking up new things to do with it.

"Students are more interested and more motivated when they can work with something interactive and create programs that operate in the real world," said Tom Lauwers, who earned his Ph.D. in robotics at CMU in 2010 and is now an instructor in the Robotics Institute's CREATE Lab."We packed Finch with sensors and mechanisms that engage the eyes, the ears -- as many senses as possible."

Lauwers has launched a startup company, BirdBrain Technologies, to produce Finch and now sells them online at www.finchrobot.com for $99 each.

"Our vision is to make Finch affordable enough that every student can have one to take home for assignments," said Lauwers, who developed the robot with Illah Nourbakhsh, associate professor of robotics and director of the CREATE Lab. Less than a foot long, Finch easily fits in a backpack and is rugged enough to survive being hauled around and occasionally dropped.

Finch includes temperature and light sensors, a three-axis accelerometer and a bump sensor. It has color-programmable LED lights, a beeper and speakers. With a pencil inserted in its tail, Finch can be used to draw pictures. It can be programmed to be a moving, noise-making alarm clock. It even has uses beyond robotics: its accelerometer enables it to be used as a 3-D mouse to control a computer display.

Robot kits suitable for students as young as 12 are commercially available, but often cost more than the Finch, Lauwers said. What's more, the idea is to use the robot to make computer programming lessons more interesting, not to use precious instructional time to first build a robot.

Finch is a plug-and-play device, so no drivers or other software need to be installed beyond what is already used in typical computer science courses. Finch connects with and receives power from the computer over a 15-foot USB cable, eliminating batteries and off-loading its computation to the computer. Support for a wide range of programming languages and environments is coming, including graphical languages appropriate for young students. Finch currently can be programmed with Java and Python, languages widely used by educators.
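
To give a flavour of what a first Finch assignment might look like, here is a hedged Python sketch. The finch module name and every method, signature and unit below are illustrative assumptions, not the documented API; the real bindings are described at www.finchrobot.com.

```python
# Hypothetical first assignment for Finch. All names below are
# assumptions for illustration, not the documented Finch API.
import time

from finch import Finch   # hypothetical import

robot = Finch()

# Make the beak glow blue in response to cold, as in the example above.
if robot.temperature() < 15:      # assumed to return degrees Celsius
    robot.led(0, 0, 255)          # assumed (red, green, blue), 0-255
else:
    robot.led(255, 100, 0)

# A little dance: spin one way, then the other, then stop.
robot.wheels(1.0, -1.0)           # assumed left/right wheel speeds
time.sleep(1)
robot.wheels(-1.0, 1.0)
time.sleep(1)
robot.wheels(0, 0)
robot.close()                     # assumed cleanup call
```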

A number of assignments are available on the Finch Robot website to help teachers drop Finch into their lesson plans, and the website allows instructors to upload their own assignments or ideas in return for company-provided incentives. The robot has been classroom-tested at the Community College of Allegheny County, Pa., and by instructors in high school, university and after-school programs.

"Computer science now touches virtually every scientific discipline and is a critical part of most new technologies, yet U.S. universities saw declining enrollments in computer science through most of the past decade," Nourbakhsh said."If Finch can help motivate students to give computer science a try, we think many more students will realize that this is a field that they would enjoy exploring."


Source

Friday, May 6, 2011

Scientists Afflict Computers With 'Schizophrenia' to Better Understand the Human Brain

The researchers used a virtual computer model, or "neural network," to simulate the excessive release of dopamine in the brain. They found that the network recalled memories in a distinctly schizophrenic-like fashion.

Their results were published in April in Biological Psychiatry.

"The hypothesis is that dopamine encodes the importance-the salience-of experience," says Uli Grasemann, a graduate student in the Department of Computer Science at The University of Texas at Austin."When there's too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn't be learning from."

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that people suffering from schizophrenia have brains that lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what's meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren't real, or drowning in a sea of so many connections they lose the ability to stitch together any kind of coherent story.

The neural network used by Grasemann and his adviser, Professor Risto Miikkulainen, is called DISCERN. Designed by Miikkulainen, DISCERN is able to learn natural language. In this study it was used to simulate what happens to language as the result of eight different types of neurological dysfunction. The results of the simulations were compared by Ralph Hoffman, professor of psychiatry at the Yale School of Medicine, to what he saw when studying human schizophrenics.

In order to model the process, Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information -- not as distinct units, but as statistical relationships of words, sentences, scripts and stories.

"With neural networks, you basically train them by showing them examples, over and over and over again," says Grasemann."Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned."

In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.
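
DISCERN itself is an elaborate language-processing network, but the knob the researchers turned is the generic one shown in this toy sketch: the learning rate of example-by-example gradient descent. The data and the tiny model here are invented for illustration.

```python
# Toy illustration of the 'hyperlearning' manipulation: the same
# training loop run twice, with only the learning rate changed.
# All data are synthetic; DISCERN is far more elaborate than this.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                      # toy input examples
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(scale=2.0, size=200)    # noisy target outputs

def train(learning_rate, epochs=50):
    w = np.zeros(10)
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):        # show examples over and over
            grad = (x_i @ w - y_i) * x_i  # squared-error gradient
            w -= learning_rate * grad     # adjust a little bit each time
    return np.linalg.norm(w - w_true)     # distance from the true weights

print("normal rate  :", round(train(0.005), 2))  # settles near the truth
print("elevated rate:", round(train(0.1), 2))    # keeps chasing the noise
```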

"It's an important mechanism to be able to ignore things," says Grasemann."What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia."

After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.

In another instance, DISCERN began showing evidence of "derailment" -- replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.

"Information processing in neural networks tends to be like information processing in the human brain in many ways," says Grasemann."So the hope was that it would also break down in similar ways. And it did."

The parallel between their modified neural network and human schizophrenia isn't absolute proof the hyperlearning hypothesis is correct, says Grasemann. It is, however, support for the hypothesis, and also evidence of how useful neural networks can be in understanding the human brain.

"We have so much more control over neural networks than we could ever have over human subjects," he says."The hope is that this kind of modeling will help clinical research."


Source

Wednesday, May 4, 2011

Evolutionary Lessons for Wind Farm Efficiency

Senior Lecturer Dr Frank Neumann, from the School of Computer Science, is using a "selection of the fittest" step-by-step approach called "evolutionary algorithms" to optimise wind turbine placement. This takes into account wake effects, the minimum amount of land needed, wind factors and the complex aerodynamics of wind turbines.

"Renewable energy is playing an increasing role in the supply of energy worldwide and will help mitigate climate change," says Dr Neumann."To further increase the productivity of wind farms, we need to exploit methods that help to optimise their performance."

Dr Neumann says the question of exactly where wind turbines should be placed to gain maximum efficiency is highly complex. "An evolutionary algorithm is a mathematical process where potential solutions keep being improved a step at a time until the optimum is reached," he says.

"You can think of it like parents producing a number of offspring, each with differing characteristics," he says."As with evolution, each population or 'set of solutions' from a new generation should get better. These solutions can be evaluated in parallel to speed up the computation."

Other biology-inspired algorithms to solve complex problems are based on ant colonies.

"Ant colony optimisation" uses the principle of ants finding the shortest way to a source of food from their nest.

"You can observe them in nature, they do it very efficiently communicating between each other using pheromone trails," says Dr Neumann."After a certain amount of time, they will have found the best route to the food -- problem solved. We can also solve human problems using the same principles through computer algorithms."

Dr Neumann has come to the University of Adelaide this year from Germany where he worked at the Max Planck Institute. He is working on wind turbine placement optimisation in collaboration with researchers at the Massachusetts Institute of Technology.

"Current approaches to solving this placement optimisation can only deal with a small number of turbines," Dr Neumann says."We have demonstrated an accurate and efficient algorithm for as many as 1000 turbines."

The researchers are now looking to fine-tune the algorithms even further using different models of wake effect and complex aerodynamic factors.


Source

Tuesday, April 12, 2011

Artificial Intelligence for Improving Data Processing

Within this framework, five leading scientists presented the latest advances in their research on different aspects of AI. The speakers tackled issues ranging from the more theoretical, such as algorithms capable of solving combinatorial problems, to robots that can reason about emotions, systems that use vision to monitor activities, and automated players that learn how to win in a given situation. "Inviting speakers from leading research groups allows us to offer a panoramic view of the main open problems and techniques in the area, including advances in video and multi-sensor systems, task planning, automated learning, games, and artificial consciousness or reasoning," the experts noted.

The participants from the AVIRES (Artificial Vision and Real Time Systems) research group at the University of Udine gave a seminar introducing data fusion techniques and distributed artificial vision. In particular, they dealt with automated surveillance systems based on visual sensor networks, covering basic techniques for image processing and object recognition, Bayesian reasoning for understanding activities, and automated learning and data fusion for building high-performance systems. Dr. Simon Lucas, professor at the University of Essex, editor-in-chief of the IEEE Transactions on Computational Intelligence and AI in Games, and a researcher focusing on the application of AI techniques to games, presented the latest trends in algorithms for generating game strategies. During his presentation, he pointed out the strength of UC3M in this area, citing its victory in two of the international competitions held at the most recent edition of the Conference on Computational Intelligence and Games.

In addition, Enrico Giunchiglia, professor at the University of Genoa and former president of the Council of the International Conference on Automated Planning and Scheduling (ICAPS), described the most recent work in the area of logical satisfiability, which is growing rapidly thanks to its applications in circuit design and task planning.

Artificial Intelligence (AI) is as old as computer science and has generated ideas, techniques and applications that make it possible to solve difficult problems. The field is very active and offers solutions to very diverse sectors. The number of industrial applications that include an AI technique is very high, and from the scientific point of view, there are many specialized journals and conferences. Furthermore, new lines of research are constantly being opened, and there is still great room for improvement in knowledge transfer between researchers and industry. These are some of the main ideas gathered at the 4th International Seminar on New Issues in Artificial Intelligence, organized by the SCALAB group in the UC3M Computer Engineering Department at the Leganés campus of this Madrid university.

The future of Artificial Intelligence

This seminar also included a talk on the promising future of AI. "The tremendous surge in the number of devices capable of capturing and processing information, together with the growth of computing capacity and the advances in algorithms, enormously boosts the possibilities for practical application," the researchers from the SCALAB group pointed out. "Among them we can cite the construction of computer programs that make life easier, that take decisions in complex environments or that allow problems to be solved in environments which are difficult for people to access," they noted. From the point of view of these research trends, more and more emphasis is being placed on developing systems capable of learning and demonstrating intelligent behavior without being tied to replicating a human model.

AI will allow advances in the development of systems capable of automatically understanding a situation and its context from sensor data and information systems, as well as establishing plans of action, from support applications to decision making in dynamic situations. According to the researchers, this is due to rapid advances in, and the growing availability of, sensor technology, which provides a continuous flow of data about the environment -- information that must be handled appropriately in a data- and information-fusion node. Likewise, the development of sophisticated task-planning techniques allows plans of action to be composed, executed, checked for correct execution, rectified in case of failure and, finally, learned from when mistakes are made.

This technology has enabled a wide range of applications, such as integrated systems for surveillance, monitoring and anomaly detection, activity recognition, teleassistance systems, transport logistics planning, and so on. According to Antonio Chella, Full Professor at the University of Palermo and an expert in Artificial Consciousness, the future of AI will involve discovering a new meaning of the word "intelligence." Until now, it has been equated with automated reasoning in software systems, but in the future AI will tackle more daring concepts such as the embodiment of intelligence in robots, as well as emotions and, above all, consciousness.


Source

Monday, April 11, 2011

Mapping the Brain: New Technique Poised to Untangle the Complexity of the Brain

A new area of research known as 'connectomics' is emerging in neuroscience. With parallels to genomics, which maps our genetic make-up, connectomics aims to map the brain's connections (known as 'synapses'). By mapping these connections -- and hence how information flows through the circuits of the brain -- scientists hope to understand how perceptions, sensations and thoughts are generated in the brain and how these functions go wrong in diseases such as Alzheimer's disease, schizophrenia and stroke.

Mapping the brain's connections is no trivial task, however: there are estimated to be one hundred billion nerve cells ('neurons') in the brain, each connected to thousands of other nerve cells -- making an estimated 150 trillion synapses. Dr Tom Mrsic-Flogel, a Wellcome Trust Research Career Development Fellow at UCL (University College London), has been leading a team of researchers trying to make sense of this complexity.

"How do we figure out how the brain's neural circuitry works?" he asks."We first need to understand the function of each neuron and find out to which other brain cells it connects. If we can find a way of mapping the connections between nerve cells of certain functions, we will then be in a position to begin developing a computer model to explain how the complex dynamics of neural networks generate thoughts, sensations and movements."

Nerve cells in different areas of the brain perform different functions. Dr Mrsic-Flogel and colleagues focus on the visual cortex, which processes information from the eye. For example, some neurons in this part of the brain specialise in detecting the edges in images; some will activate upon detection of a horizontal edge, others by a vertical edge. Higher up in the visual hierarchy, some neurons respond to more complex visual features such as faces: lesions to this area of the brain can prevent people from being able to recognise faces, even though they can recognise individual features such as the eyes and the nose, as was famously described in the book The Man Who Mistook His Wife for a Hat by Oliver Sacks.

In a study published online April 10 in the journal Nature, the team at UCL describe a technique developed in mice which enables them to combine information about the function of neurons together with details of their synaptic connections.

The researchers looked into the visual cortex of the mouse brain, which contains thousands of neurons and millions of different connections. Using high resolution imaging, they were able to detect which of these neurons responded to a particular stimulus, for example a horizontal edge.

Taking a slice of the same tissue, the researchers then applied small currents to a subset of neurons in turn to see which other neurons responded -- and hence which of these were synaptically connected. By repeating this technique many times, the researchers were able to trace the function and connectivity of hundreds of nerve cells in visual cortex.

The study has resolved the debate about whether local connections between neurons are random -- in other words, whether nerve cells connect sporadically, independent of function -- or whether they are ordered, for example constrained by the properties of the neuron in terms of how it responds to particular stimuli. The researchers showed that neurons which responded very similarly to visual stimuli, such as those which respond to edges of the same orientation, tend to connect to each other much more than those that prefer different orientations.
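
In code, the paper's central comparison looks roughly like the sketch below. The "data" here are synthetic stand-ins, with the like-to-like bias deliberately built in; they are not the actual imaging and paired-recording measurements.

```python
# Hedged analysis sketch: do neurons with similar stimulus preferences
# connect more often? All data below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n = 300
pref = rng.uniform(0, 180, size=n)          # preferred edge orientation (deg)

# Synthetic connectivity with a built-in like-to-like bias (an assumption
# of this toy, standing in for the real measurements).
diff = np.abs(pref[:, None] - pref[None, :])
diff = np.minimum(diff, 180 - diff)         # orientation is circular (0 == 180)
p_connect = np.where(diff < 30, 0.25, 0.05)
connected = rng.random((n, n)) < p_connect
np.fill_diagonal(connected, False)

# The comparison itself: connection rates for similar vs dissimilar pairs.
similar = diff < 30
off_diag = ~np.eye(n, dtype=bool)
rate_similar = connected[similar & off_diag].mean()
rate_dissimilar = connected[~similar & off_diag].mean()
print(f"similarly tuned pairs connected:   {rate_similar:.1%}")
print(f"differently tuned pairs connected: {rate_dissimilar:.1%}")
```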

Using this technique, the researchers hope to begin generating a wiring diagram of a brain area with a particular behavioural function, such as the visual cortex. This knowledge is important for understanding the repertoire of computations carried out by neurons embedded in these highly complex circuits. The technique should also help reveal the functional circuit wiring of regions that underpin touch, hearing and movement.

"We are beginning to untangle the complexity of the brain," says Dr Mrsic-Flogel."Once we understand the function and connectivity of nerve cells spanning different layers of the brain, we can begin to develop a computer simulation of how this remarkable organ works. But it will take many years of concerted efforts amongst scientists and massive computer processing power before it can be realised."

The research was supported by the Wellcome Trust, the European Research Council, the European Molecular Biology Organisation, the Medical Research Council, the Overseas Research Students Award Scheme and UCL.

"The brain is an immensely complex organ and understanding its inner workings is one of science's ultimate goals," says Dr John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust."This important study presents neuroscientists with one of the key tools that will help them begin to navigate and survey the landscape of the brain."


Source

Wednesday, March 9, 2011

Extremely Fast Magnetic Random Access Memory (MRAM) Computer Data Storage Within Reach

An invention made by the Physikalisch-Technische Bundesanstalt (PTB) changes this situation: a special chip connection, in association with dynamic triggering of the component, reduces the response time from the previous 2 ns to below 500 ps. This corresponds to a data rate of up to 2 Gbit/s (instead of approx. 400 Mbit/s so far). Power consumption and the thermal load are reduced, as is the bit error rate. The European patent is being granted this spring; the US patent was already granted in 2010. An industrial partner to further develop and manufacture such MRAMs under licence is still being sought.

Fast computer storage chips like DRAM and SRAM (Dynamic and Static Random Access Memory), which are commonly used today, have one decisive disadvantage: if the power supply is interrupted, the information stored on them is irrevocably lost. MRAM promises to put an end to this. In MRAM, digital information is stored not as an electric charge, but via the magnetic alignment of storage cells (magnetic spins). MRAMs are very versatile storage chips because, in addition to non-volatile information storage, they also allow faster access, a high integration density and an unlimited number of write and read cycles.

However, the current MRAM models are not yet fast enough to outperform the best competitors. Programming a magnetic bit takes approx. 2 ns, and anyone who wants to speed this up runs into limits rooted in the fundamental physical properties of magnetic storage cells: during the programming process, not only the desired storage cell is magnetically excited, but also a large number of other cells. These excitations -- the so-called magnetic ringing -- are only slightly attenuated; their decay can take up to approx. 2 ns, and during this time no other cell of the MRAM chip can be programmed. As a result, the maximum clock rate of MRAM has so far been limited to approx. 400 MHz.

Until now, all attempts to increase the speed have led to intolerable write errors. Now, PTB scientists have optimized the MRAM design and integrated the so-called ballistic bit triggering, which was also developed at PTB. Here, the magnetic pulses which serve for the programming are selected in such a skilful way that the other cells in the MRAM are hardly magnetically excited at all. The pulse ensures that the magnetization of a cell which is to be switched performs half a precession rotation (180°), while a cell whose storage state is to remain unchanged performs a complete precession rotation (360°). In both cases, the magnetization is in the state of equilibrium after the magnetic pulse has decayed, and no further magnetic excitations occur.

This ballistic bit triggering also works with ultra-short switching pulses lasting less than 500 ps, so the maximum clock rates of the MRAM are above 2 GHz. In addition, several bits can be programmed at the same time, which would allow the effective write rate per bit to be increased by more than an order of magnitude. This invention allows MRAM to achieve clock rates that can compete with those of the fastest volatile storage components.
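
The arithmetic behind the quoted rates is straightforward, as the back-of-the-envelope sketch below shows. (The article's approx. 400 MHz figure for conventional MRAM also budgets the write pulse itself, not just the ringing decay, so the first estimate comes out slightly high.)

```python
# Rough arithmetic behind the quoted rates: a cell stays 'busy' for one
# ringing time (conventional MRAM) or one pulse width (ballistic
# triggering), and the clock rate is roughly the inverse of that time.
ringing_time = 2e-9     # approx. 2 ns of magnetic ringing per write
pulse_width = 500e-12   # sub-500 ps ballistic pulse, no ringing left

print(f"conventional: ~{1 / ringing_time / 1e6:.0f} MHz ceiling")  # ~500 MHz
print(f"ballistic:    ~{1 / pulse_width / 1e9:.1f} GHz ceiling")   # ~2 GHz
```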


Source