Friday, May 13, 2011

New Algorithm Offers Ability to Influence Systems Such as Living Cells or Social Networks

An MIT researcher has come up with a new computational model that can analyze any type of complex network -- biological, social or electronic -- and reveal the critical points that can be used to control the entire system.

Potential applications of this work, which appears as the cover story in the May 12 issue of Nature, include reprogramming adult cells and identifying new drug targets, says study author Jean-Jacques Slotine, an MIT professor of mechanical engineering and brain and cognitive sciences.

Slotine and his co-authors applied their model to dozens of real-life networks, including cell-phone networks, social networks, the networks that control gene expression in cells and the neuronal network of the C. elegans worm. For each, they calculated the percentage of points that need to be controlled in order to gain control of the entire system.

For sparse networks such as gene regulatory networks, they found the number is high, around 80 percent. For dense networks -- such as neuronal networks -- it's more like 10 percent.

The paper, a collaboration with Albert-Laszlo Barabasi and Yang-Yu Liu of Northeastern University, builds on more than half a century of research in the field of control theory.

Control theory -- the study of how to govern the behavior of dynamic systems -- has guided the development of airplanes, robots, cars and electronics. The principles of control theory allow engineers to design feedback loops that monitor input and output of a system and adjust accordingly. One example is the cruise control system in a car.

However, while commonly used in engineering, control theory has been applied only intermittently to complex, self-assembling networks such as living cells or the Internet, Slotine says. Control research on large networks has been concerned mostly with questions of synchronization, he says.

In the past 10 years, researchers have learned a great deal about the organization of such networks, in particular their topology -- the patterns of connections between different points, or nodes, in the network. Slotine and his colleagues applied traditional control theory to these recent advances, devising a new model for controlling complex, self-assembling networks.

"The area of control of networks is a very important one, and although much work has been done in this area, there are a number of open problems of outstanding practical significance," says Adilson Motter, associate professor of physics at Northwestern University. The biggest contribution of the paper by Slotine and his colleagues is to identify the type of nodes that need to be targeted in order to control complex networks, says Motter, who was not involved with this research.

The researchers started by devising a new computer algorithm to determine how many nodes in a particular network need to be controlled in order to gain control of the entire network. (Examples of nodes include members of a social network, or single neurons in the brain.)

"The obvious answer is to put input to all of the nodes of the network, and you can, but that's a silly answer," Slotine says."The question is how to find a much smaller set of nodes that allows you to do that."

There are other algorithms that can answer this question, but most of them take far too long -- years, even. The new algorithm quickly tells you both how many points need to be controlled, and where those points -- known as "driver nodes" -- are located.

Next, the researchers figured out what determines the number of driver nodes, which is unique to each network. They found that the number depends on a property called "degree distribution," which describes the number of connections per node.

A higher average degree (meaning the points are densely connected) means fewer nodes are needed to control the entire network. Sparse networks, which have fewer connections, are more difficult to control, as are networks where the node degrees are highly variable.
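
The underlying computation in the Nature paper is a maximum matching of the directed network: nodes whose inputs are left unmatched cannot be reached through the network and become the driver nodes. The snippet below is a rough illustrative sketch of that idea using the networkx library on a made-up four-node graph; it is not the authors' own code.

```python
# Rough sketch (not the authors' code) of finding driver nodes in a directed
# network via maximum matching; the four-node example graph is made up.
import networkx as nx

def driver_nodes(nodes, edges):
    # Bipartite representation used in structural controllability: every node
    # gets an "out" copy and an "in" copy, and each directed edge u -> v
    # becomes an undirected edge between out-u and in-v.
    B = nx.Graph()
    out_copy = {n: ("out", n) for n in nodes}
    in_copy = {n: ("in", n) for n in nodes}
    B.add_nodes_from(out_copy.values(), bipartite=0)
    B.add_nodes_from(in_copy.values(), bipartite=1)
    B.add_edges_from((out_copy[u], in_copy[v]) for u, v in edges)

    # Nodes whose "in" copy is left unmatched by a maximum matching cannot be
    # reached through the network and must be driven directly.
    matching = nx.bipartite.maximum_matching(B, top_nodes=set(out_copy.values()))
    matched = {n for side, n in matching if side == "in"}
    return [n for n in nodes if n not in matched]

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("b", "d")]
print(driver_nodes(nodes, edges))   # two of the four nodes must be driven
```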

In future work, Slotine and his collaborators plan to delve further into biological networks, such as those governing metabolism. Figuring out how bacterial metabolic networks are controlled could help biologists identify new targets for antibiotics by determining which points in the network are the most vulnerable.


Source

Saturday, May 7, 2011

Robot Engages Novice Computer Scientists

A product of CMU's famed Robotics Institute, Finch was designed specifically to make introductory computer science classes an engaging experience once again.

A white plastic, two-wheeled robot with bird-like features, Finch can quickly be programmed by a novice to say "Hello, World," or do a little dance, or make its beak glow blue in response to cold temperature or some other stimulus. But the simple look of the tabletop robot is deceptive. Based on four years of educational research sponsored by the National Science Foundation, Finch includes a number of features that could keep students busy for a semester or more thinking up new things to do with it.

"Students are more interested and more motivated when they can work with something interactive and create programs that operate in the real world," said Tom Lauwers, who earned his Ph.D. in robotics at CMU in 2010 and is now an instructor in the Robotics Institute's CREATE Lab."We packed Finch with sensors and mechanisms that engage the eyes, the ears -- as many senses as possible."

Lauwers has launched a startup company, BirdBrain Technologies, to produce Finch and now sells them online at www.finchrobot.com for $99 each.

"Our vision is to make Finch affordable enough that every student can have one to take home for assignments," said Lauwers, who developed the robot with Illah Nourbakhsh, associate professor of robotics and director of the CREATE Lab. Less than a foot long, Finch easily fits in a backpack and is rugged enough to survive being hauled around and occasionally dropped.

Finch includes temperature and light sensors, a three-axis accelerometer and a bump sensor. It has color-programmable LED lights, a beeper and speakers. With a pencil inserted in its tail, Finch can be used to draw pictures. It can be programmed to be a moving, noise-making alarm clock. It even has uses beyond robotics: its accelerometer enables it to be used as a 3-D mouse to control a computer display.

Robot kits suitable for students as young as 12 are commercially available, but often cost more than the Finch, Lauwers said. What's more, the idea is to use the robot to make computer programming lessons more interesting, not to use precious instructional time to first build a robot.

Finch is a plug-and-play device, so no drivers or other software need to be installed beyond what is used in typical computer science courses. Finch connects with and receives power from the computer over a 15-foot USB cable, eliminating batteries and off-loading its computation to the computer. Support for a wide range of programming languages and environments is coming, including graphical languages appropriate for young students. Finch currently can be programmed with the Java and Python languages widely used by educators.
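
As a taste of what a first Finch assignment might look like in Python, here is a hypothetical sketch; the finch module, the Finch class and every method name below are illustrative assumptions, not the documented BirdBrain API.

```python
# Hypothetical first assignment with the Finch; the module name and every
# method below are illustrative assumptions, not the documented BirdBrain API.
import time

from finch import Finch   # assumed wrapper module

robot = Finch()
robot.say("Hello, World")            # speak through the computer's speakers
robot.set_beak_led(0, 0, 255)        # make the beak glow blue...
if robot.get_temperature() < 15:     # ...and dance only if it is cold (deg C)
    robot.wheels(0.5, -0.5)          # spin in place: a very small dance
    time.sleep(2)
    robot.wheels(0, 0)
robot.close()
```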

A number of assignments are available on the Finch Robot website to help teachers drop Finch into their lesson plans, and the website allows instructors to upload their own assignments or ideas in return for company-provided incentives. The robot has been classroom-tested at the Community College of Allegheny County, Pa., and by instructors in high school, university and after-school programs.

"Computer science now touches virtually every scientific discipline and is a critical part of most new technologies, yet U.S. universities saw declining enrollments in computer science through most of the past decade," Nourbakhsh said."If Finch can help motivate students to give computer science a try, we think many more students will realize that this is a field that they would enjoy exploring."


Source

Friday, May 6, 2011

Scientists Afflict Computers With 'Schizophrenia' to Better Understand the Human Brain

The researchers used a virtual computer model, or "neural network," to simulate the excessive release of dopamine in the brain. They found that the network recalled memories in a distinctly schizophrenic-like fashion.

Their results were published in April in Biological Psychiatry.

"The hypothesis is that dopamine encodes the importance-the salience-of experience," says Uli Grasemann, a graduate student in the Department of Computer Science at The University of Texas at Austin."When there's too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn't be learning from."

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that people suffering from schizophrenia have brains that lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what's meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren't real, or drowning in a sea of so many connections they lose the ability to stitch together any kind of coherent story.

The neural network used by Grasemann and his adviser, Professor Risto Miikkulainen, is called DISCERN. Designed by Miikkulainen, DISCERN is able to learn natural language. In this study it was used to simulate what happens to language as the result of eight different types of neurological dysfunction. The results of the simulations were compared by Ralph Hoffman, professor of psychiatry at the Yale School of Medicine, to what he saw when studying human schizophrenics.

In order to model the process, Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information -- not as distinct units, but as statistical relationships of words, sentences, scripts and stories.

"With neural networks, you basically train them by showing them examples, over and over and over again," says Grasemann."Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned."

In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.

"It's an important mechanism to be able to ignore things," says Grasemann."What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia."

After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.

In another instance, DISCERN began showing evidence of "derailment" -- replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.

"Information processing in neural networks tends to be like information processing in the human brain in many ways," says Grasemann."So the hope was that it would also break down in similar ways. And it did."

The parallel between their modified neural network and human schizophrenia isn't absolute proof the hyperlearning hypothesis is correct, says Grasemann. It is, however, support for the hypothesis, and also evidence of how useful neural networks can be in understanding the human brain.

"We have so much more control over neural networks than we could ever have over human subjects," he says."The hope is that this kind of modeling will help clinical research."


Source

Wednesday, May 4, 2011

Evolutionary Lessons for Wind Farm Efficiency

Senior Lecturer Dr Frank Neumann, from the School of Computer Science, is using a "selection of the fittest" step-by-step approach called "evolutionary algorithms" to optimise wind turbine placement. This takes into account wake effects, the minimum amount of land needed, wind factors and the complex aerodynamics of wind turbines.

"Renewable energy is playing an increasing role in the supply of energy worldwide and will help mitigate climate change," says Dr Neumann."To further increase the productivity of wind farms, we need to exploit methods that help to optimise their performance."

Dr Neumann says the question of exactly where wind turbines should be placed to gain maximum efficiency is highly complex. "An evolutionary algorithm is a mathematical process where potential solutions keep being improved a step at a time until the optimum is reached," he says.

"You can think of it like parents producing a number of offspring, each with differing characteristics," he says."As with evolution, each population or 'set of solutions' from a new generation should get better. These solutions can be evaluated in parallel to speed up the computation."

Other biology-inspired algorithms to solve complex problems are based on ant colonies.

"Ant colony optimisation" uses the principle of ants finding the shortest way to a source of food from their nest.

"You can observe them in nature, they do it very efficiently communicating between each other using pheromone trails," says Dr Neumann."After a certain amount of time, they will have found the best route to the food -- problem solved. We can also solve human problems using the same principles through computer algorithms."

Dr Neumann has come to the University of Adelaide this year from Germany where he worked at the Max Planck Institute. He is working on wind turbine placement optimisation in collaboration with researchers at the Massachusetts Institute of Technology.

"Current approaches to solving this placement optimisation can only deal with a small number of turbines," Dr Neumann says."We have demonstrated an accurate and efficient algorithm for as many as 1000 turbines."

The researchers are now looking to fine-tune the algorithms even further using different models of wake effect and complex aerodynamic factors.


Source

Tuesday, April 12, 2011

Artificial Intelligence for Improving Data Processing

Within this framework, five leading scientists presented the latest advances in their research work on different aspects of AI. The speakers tackled issues ranging from the more theoretical, such as algorithms capable of solving combinatorial problems, to robots that can reason about emotions, systems that use vision to monitor activities, and automated players that learn how to win in a given situation. "Inviting speakers from leading groups allows us to offer a panoramic view of the main problems and open techniques in the area, including advances in video and multi-sensor systems, task planning, automated learning, games, and artificial consciousness or reasoning," the experts noted.

The participants from the AVIRES (Artificial Vision and Real Time Systems) research group at the University of Udine gave a seminar introducing data fusion techniques and distributed artificial vision. In particular, they dealt with automated surveillance systems based on visual sensor networks, from basic techniques for image processing and object recognition to Bayesian reasoning for understanding activities, automated learning and data fusion to build high-performance systems. Dr. Simon Lucas, professor at the University of Essex, editor in chief of IEEE Transactions on Computational Intelligence and AI in Games and a researcher focusing on the application of AI techniques to games, presented the latest trends in algorithms for generating game strategies. During his presentation, he pointed out the strength of UC3M in this area, citing its victory in two of the competitions held at the international level during the most recent edition of the Conference on Computational Intelligence and Games.

In addition, Enrico Giunchiglia, professor at the University of Genoa and former president of the Council of the International Conference on Automated Planning and Scheduling (ICAPS), described the most recent work in the area of logical satisfiability, which is rapidly growing due to its applications in circuit design and in task planning.

Artificial Intelligence (AI) is as old as computer science and has generated ideas, techniques and applications that make it possible to solve difficult problems. The field is very active and offers solutions to very diverse sectors. The number of industrial applications that use an AI technique is very high, and from the scientific point of view, there are many specialized journals and congresses. Furthermore, new lines of research are constantly being opened and there is still great room for improvement in knowledge transfer between researchers and industry. These are some of the main ideas gathered at the 4th International Seminar on New Issues on Artificial Intelligence, organized by the SCALAB group in the UC3M Computer Engineering Department at the Leganés campus of this Madrid university.

The future of Artificial Intelligence

This seminar also included a talk on the promising future of AI. "The tremendous surge in the number of devices capable of capturing and processing information, together with the growth of computing capacity and the advances in algorithms, enormously boosts the possibilities for practical application," the researchers from the SCALAB group pointed out. "Among them we can cite the construction of computer programs that make life easier, which take decisions in complex environments or which allow problems to be solved in environments which are difficult for people to access," they noted. From the point of view of these research trends, more and more emphasis is being placed on developing systems capable of learning and demonstrating intelligent behavior without being tied to replicating a human model.

AI will allow advances in the development of systems capable of automatically understanding a situation and its context using sensor data and information systems, as well as establishing plans of action, from support applications to decision making within dynamic situations. According to the researchers, this is due to the rapid advances in and availability of sensor technology, which provides a continuous flow of data about the environment, information that must be dealt with appropriately in a data and information fusion node. Likewise, the development of sophisticated techniques for task planning allows plans of action to be composed, executed, checked for correct execution, and rectified in case of failure, and finally enables learning from mistakes made.

This technology has allowed a wide range of applications such as integrated systems for surveillance, monitoring and detecting anomalies, activity recognition, teleassistance systems, transport logistics planning, etc. According to Antonio Chella, Full Professor at the University of Palermo and expert in Artificial Consciousness, the future of AI will imply discovering a new meaning of the word "intelligence." Until now, it has been equated with automated reasoning in software systems, but in the future AI will tackle more daring concepts such as the embodiment of intelligence in robots, as well as emotions, and above all consciousness.


Source

Monday, April 11, 2011

Mapping the Brain: New Technique Poised to Untangle the Complexity of the Brain

A new area of research known as 'connectomics' is emerging in neuroscience. With parallels to genomics, which maps our genetic make-up, connectomics aims to map the brain's connections (known as 'synapses'). By mapping these connections -- and hence how information flows through the circuits of the brain -- scientists hope to understand how perceptions, sensations and thoughts are generated in the brain and how these functions go wrong in diseases such as Alzheimer's disease, schizophrenia and stroke.

Mapping the brain's connections is no trivial task, however: there are estimated to be one hundred billion nerve cells ('neurons') in the brain, each connected to thousands of other nerve cells -- making an estimated 150 trillion synapses. Dr Tom Mrsic-Flogel, a Wellcome Trust Research Career Development Fellow at UCL (University College London), has been leading a team of researchers trying to make sense of this complexity.

"How do we figure out how the brain's neural circuitry works?" he asks."We first need to understand the function of each neuron and find out to which other brain cells it connects. If we can find a way of mapping the connections between nerve cells of certain functions, we will then be in a position to begin developing a computer model to explain how the complex dynamics of neural networks generate thoughts, sensations and movements."

Nerve cells in different areas of the brain perform different functions. Dr Mrsic-Flogel and colleagues focus on the visual cortex, which processes information from the eye. For example, some neurons in this part of the brain specialise in detecting the edges in images; some will activate upon detection of a horizontal edge, others upon detection of a vertical edge. Higher up in the visual hierarchy, some neurons respond to more complex visual features such as faces: lesions to this area of the brain can prevent people from being able to recognise faces, even though they can recognise individual features such as eyes and the nose, as was famously described in the book The Man Who Mistook His Wife for a Hat by Oliver Sacks.

In a study published online April 10 in the journal Nature, the team at UCL describe a technique developed in mice which enables them to combine information about the function of neurons together with details of their synaptic connections.

The researchers looked into the visual cortex of the mouse brain, which contains thousands of neurons and millions of different connections. Using high resolution imaging, they were able to detect which of these neurons responded to a particular stimulus, for example a horizontal edge.

Taking a slice of the same tissue, the researchers then applied small currents to a subset of neurons in turn to see which other neurons responded -- and hence which of these were synaptically connected. By repeating this technique many times, the researchers were able to trace the function and connectivity of hundreds of nerve cells in visual cortex.

The study has resolved the debate about whether local connections between neurons are random -- in other words, whether nerve cells connect sporadically, independent of function -- or whether they are ordered, for example constrained by the properties of the neuron in terms of how it responds to particular stimuli. The researchers showed that neurons which responded very similarly to visual stimuli, such as those which respond to edges of the same orientation, tend to connect to each other much more than those that prefer different orientations.

Using this technique, the researchers hope to begin generating a wiring diagram of a brain area with a particular behavioural function, such as the visual cortex. This knowledge is important for understanding the repertoire of computations carried out by neurons embedded in these highly complex circuits. The technique should also help reveal the functional circuit wiring of regions that underpin touch, hearing and movement.

"We are beginning to untangle the complexity of the brain," says Dr Mrsic-Flogel."Once we understand the function and connectivity of nerve cells spanning different layers of the brain, we can begin to develop a computer simulation of how this remarkable organ works. But it will take many years of concerted efforts amongst scientists and massive computer processing power before it can be realised."

The research was supported by the Wellcome Trust, the European Research Council, the European Molecular Biology Organisation, the Medical Research Council, the Overseas Research Students Award Scheme and UCL.

"The brain is an immensely complex organ and understanding its inner workings is one of science's ultimate goals," says Dr John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust."This important study presents neuroscientists with one of the key tools that will help them begin to navigate and survey the landscape of the brain."


Source

Wednesday, March 9, 2011

Extremely Fast Magnetic Random Access Memory (MRAM) Computer Data Storage Within Reach

An invention made by the Physikalisch-Technische Bundesanstalt (PTB) changes this situation: A special chip connection, in association with dynamic triggering of the component, reduces the response time from -- so far -- 2 ns to below 500 ps. This corresponds to a data rate of up to 2 GBit (instead of the approx. 400 MBit so far). Power consumption and the thermal load will be reduced, as well as the bit error rate. The European patent is just being granted this spring; the US patent was already granted in 2010. An industrial partner for further development and for manufacturing such MRAMs under licence is still being sought.

Fast computer storage chips like DRAM and SRAM (Dynamic and Static Random Access Memory), which are commonly used today, have one decisive disadvantage: in the case of an interruption of the power supply, the information stored on them is irrevocably lost. The MRAM promises to put an end to this. In the MRAM, the digital information is not stored in the form of an electric charge, but via the magnetic alignment of storage cells (magnetic spins). MRAMs are very universal storage chips because they allow -- in addition to the non-volatile information storage -- also faster access, a high integration density and an unlimited number of writing and reading cycles.

However, the current MRAM models are not yet fast enough to outperform the best competitors. The time for programming a magnetic bit amounts to approx. 2 ns. Anyone who wants to speed this up reaches certain limits which have to do with the fundamental physical properties of magnetic storage cells: during the programming process, not only the desired storage cell is magnetically excited, but also a large number of other cells. These excitations -- the so-called magnetic ringing -- are only slightly attenuated; their decay can take up to approx. 2 ns, and during this time, no other cell of the MRAM chip can be programmed. As a result, the maximum clock rate of MRAM is, so far, limited to approx. 400 MHz.

Until now, all attempts to increase the speed have led to intolerable write errors. Now, PTB scientists have optimized the MRAM design and integrated the so-called ballistic bit triggering which has also been developed at PTB. Here, the magnetic pulses which serve for the programming are selected in such a skilful way that the other cells in the MRAM are hardly magnetically excited at all. The pulse ensures that the magnetization of a cell which is to be switched performs half a precession rotation (180°), while a cell whose storage state is to remain unchanged performs a complete precession rotation (360°). In both cases, the magnetization is in the state of equilibrium after the magnetic pulse has decayed, and magnetic excitations do not occur any more.

This optimal bit triggering also works with ultra-short switching pulses with a duration below 500 ps. The maximum clock rates of the MRAM are, therefore, above 2 GHz. In addition, several bits can be programmed at the same time, which would allow the effective write rate per bit to be increased again by more than one order of magnitude. This invention allows clock rates to be achieved with MRAM which can compete with those of the fastest volatile storage components.
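
As a rough back-of-the-envelope check -- our illustration, not a formula from the PTB work -- the quoted clock rates follow from taking the reciprocal of the time a single bit occupies the write circuitry:

```latex
f_{\mathrm{clock}} \lesssim \frac{1}{t_{\mathrm{bit}}}:\qquad
t_{\mathrm{bit}} \gtrsim 2\,\mathrm{ns} \;\Rightarrow\; f_{\mathrm{clock}} \lesssim 500\,\mathrm{MHz}
\;(\approx 400\,\mathrm{MHz}\ \text{with overhead}),\qquad
t_{\mathrm{bit}} < 500\,\mathrm{ps} \;\Rightarrow\; f_{\mathrm{clock}} > 2\,\mathrm{GHz}.
```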


Source

Saturday, March 5, 2011

Human Cues Used to Improve Computer User-Friendliness

"Our research in computer graphics and computer vision tries to make using computers easier," says the Binghamton University computer scientist."Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does."

Yin's team has developed ways to provide information to the computer based on where a user is looking as well as through gestures or speech. One of the basic challenges in this area is "computer vision." That is, how can a simple webcam work more like the human eye? Can camera-captured data understand a real-world object? Can this data be used to "see" the user and "understand" what the user wants to do?

To some extent, that's already possible. Witness one of Yin's graduate students giving a PowerPoint presentation and using only his eyes to highlight content on various slides. When Yin demonstrated this technology for Air Force experts last year, the only hardware he brought was a webcam attached to a laptop computer.

Yin says the next step would be enabling the computer to recognize a user's emotional state. He works with a well-established set of six basic emotions -- anger, disgust, fear, joy, sadness, and surprise -- and is experimenting with different ways to allow the computer to distinguish among them. Is there enough data in the way the lines around the eyes change? Could focusing on the user's mouth provide sufficient clues? What happens if the user's face is only partially visible, perhaps turned to one side?

"Computers only understand zeroes and ones," Yin says."Everything is about patterns. We want to find out how to recognize each emotion using only the most important features."

He's partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Many people with autism have difficulty interpreting others' emotions; therapists sometimes use photographs of people to teach children how to understand when someone is happy or sad and so forth. Yin could produce not just photographs, but three-dimensional avatars that are able to display a range of emotions. Given the right pictures, Yin could even produce avatars of people from a child's family for use in this type of therapy.

Yin and Gerhardstein's previous collaboration led to the creation of a 3D facial expression database, which includes 100 subjects with 2,500 facial expression models. The database is available at no cost to the nonprofit research community and has become a worldwide test bed for those working on related projects in fields such as biomedicine, law enforcement and computer science.

Once Yin became interested in human-computer interaction, he naturally grew more excited about the possibilities for artificial intelligence.

"We want not only to create a virtual-person model, we want to understand a real person's emotions and feelings," Yin says."We want the computer to be able to understand how you feel, too. That's hard, even harder than my other work."

Imagine if a computer could understand when people are in pain. Some may ask a doctor for help. But others -- young children, for instance -- cannot express themselves or are unable to speak for some reason. Yin wants to develop an algorithm that would enable a computer to determine when someone is in pain based just on a photograph.

Yin describes that health-care application and, almost in the next breath, points out that the same system that could identify pain might also be used to figure out when someone is lying. Perhaps a computer could offer insights like the ones provided by Tim Roth's character, Dr. Cal Lightman, on the television show Lie to Me. The fictional character is a psychologist with an expertise in tracking deception who often partners with law-enforcement agencies.

"This technology," Yin says,"could help us to train the computer to do facial-recognition analysis in place of experts."


Source

Friday, March 4, 2011

Method Developed to Match Police Sketch, Mug Shot: Algorithms and Software Will Match Sketches With Mugshots in Police Databases

A team led by MSU University Distinguished Professor of Computer Science and Engineering Anil Jain and doctoral student Brendan Klare has developed a set of algorithms and created software that will automatically match hand-drawn facial sketches to mug shots that are stored in law enforcement databases.

Once in use, Klare said, the implications are huge.

"We're dealing with the worst of the worst here," he said."Police sketch artists aren't called in because someone stole a pack of gum. A lot of time is spent generating these facial sketches so it only makes sense that they are matched with the available technology to catch these criminals."

Typically, sketches are drawn by forensic artists from information obtained from a witness. Unfortunately, Klare said, "often the facial sketch is not an accurate depiction of what the person looks like."

There also are a few commercial software programs available that produce sketches based on a witness' description. Those programs, however, tend to be less accurate than sketches drawn by a trained forensic artist.

The MSU project is being conducted in the Pattern Recognition and Image Processing lab in the Department of Computer Science and Engineering. It is the first large-scale experiment matching operational forensic sketches with photographs and, so far, results have been promising.

"We improved significantly on one of the top commercial face-recognition systems," Klare said."Using a database of more than 10,000 mug shot photos, 45 percent of the time we had the correct person."

All of the sketches used were from real crimes where the criminal was later identified.

"We don't match them pixel by pixel," said Jain, director of the PRIP lab."We match them up by finding high-level features from both the sketch and the photo; features such as the structural distribution and the shape of the eyes, nose and chin."

This project and its results appear in the March 2011 issue of the journal IEEE Transactions on Pattern Analysis and Machine Intelligence.

The MSU team plans to field test the system in about a year.

The sketches used in this research were provided by forensic artists Lois Gibson and Karen Taylor, and forensic sketch artists working for the Michigan State Police.


Source

Wednesday, March 2, 2011

New Software 'Lowers the Stress' on Materials Problems

The software package, OOF (Object-Oriented Finite element analysis) is a specialized tool to help materials designers understand how stress and other factors act on a material with a complex internal structure, as is the case with many alloys and ceramics. As its starting point, OOF uses micrographs -- images of a material taken by a microscope. At the simplest level, OOF is designed to answer questions like, "I know what this material looks like and what it's made of, but I wonder what would happen if I pull on it in different ways?" or "I have a picture of this stuff and I know that different parts expand more than others as temperature increases -- I wonder where the stresses are greatest?"

OOF has been available in previous versions since 1998, but the new version (2.1) that the NIST team released on Feb. 16, 2011, adds a number of improvements. According to team member Stephen Langer, version 2.1 is the first dramatic extension of the original capabilities of the software.

"Version 2.1 greatly improves OOF's ability to envision 'non-linear' behavior, such as large-scale deformation, which plays a significant role in many types of stress response," says Langer."It also allows users to analyze a material's performance over time, not just under static conditions as was the case previously."

Jet turbine blades, for example, can spin more efficiently with a layer of ceramic material sprayed onto their surfaces, but the ceramic layers are brittle. Knowing how these ceramic layers will respond as the metal blades heat up and expand over time is one of the many problems OOF 2.1 is designed to help solve.

"We've also included templates programmers can use to plug in their own details and formulas describing a particular substance," Langer says."We're trying to make it easy for users to test anything -- we're not concentrating on any particular type of material."

Later this year, the team expects to enable users to analyze three-dimensional micrographs of a material, rather than the 2-D "slices" that can be analyzed at this point.

* OOF is available for free download at http://www.ctcms.nist.gov/oof/oof2/. The package runs on Unix™-like systems, including Linux, OS X and Linux-like environments within Windows.


Source

Thursday, February 24, 2011

A Semantic Sommelier: Wine Application Highlights the Power of Web 3.0

Web scientist and Rensselaer Polytechnic Institute Tetherless World Research Constellation Professor Deborah McGuinness has been developing a family of applications for the most tech-savvy wine connoisseurs since her days as a graduate student in the 1980s -- before what we now know as the World Wide Web had even been envisioned.

Today, McGuinness is among the world's foremost experts in Web ontology languages. These languages are used to encode meaning in a form that computers can understand. The most recent version of her wine application serves as an exceptional example of what the future of the World Wide Web, often called Web 3.0, might in fact look like. It is also an exceptional tool for teaching future Web Scientists about ontologies.

"The wine agent came about because I had to demonstrate the new technology that I was developing," McGuinness said."I had sophisticated applications that used cutting-edge artificial intelligence technology in domains, such as telecommunications equipment, that were difficult for anyone other than well-trained engineers to understand." McGuinness took the technology into the domain of wines and foods to create a program that she uses as a semantic tutorial, an"Ontologies 101" as she calls it. And students throughout the years have done many things with the wine agent including, most recently, experimentation with social media and mobile phone applications.

Today, the semantic sommelier is set to provide even the most novice of foodies some exciting new tools to expand their wine knowledge and food-pairing abilities on everything from their home PC to their smart phone. Evan Patton, a graduate student in computer science at Rensselaer, is the most recent student to tinker with the wine agent and is working with McGuinness to bring it into the mobile space on both the iPhone and Droid platforms.

The agent uses the Web Ontology Language (OWL), the formal language for the Semantic Web. Like the English language, which uses an agreed upon alphabet to form words and sentences that all English-speaking people can recognize, OWL uses a formalized set of symbols to create a code or language that a wide variety of applications can "read." This allows your computer to operate more efficiently and more intelligently with your cell phone or your Facebook page, or any other webpage or web-enabled device. These semantics also allow for an entirely new generation in smart search technologies.

Thanks to its semantic technology, the sommelier is loaded with basic background knowledge about wine and food. For wine, that includes its body, color (red versus white or blush), sweetness, and flavor. For food, this includes the course (e.g. appetizer versus entrée), ingredient type (e.g. fish versus meat), and its heat (mild versus spicy). The semantic technologies beneath the application then encode that knowledge and apply reasoning to search and share that information. This semantic functionality can now be exploited for a variety of culinary purposes, all of which McGuinness, a personal lover of fine wines, and Patton are working together on.
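
The real application encodes this knowledge in OWL and hands it to a description-logic reasoner; as a loose, non-OWL illustration of the same pairing logic, here is a toy rule-based matcher with invented wines and rules.

```python
# Loose, non-OWL illustration of pairing by matching the properties described
# above (wine body, colour, sweetness vs. course, ingredient, heat).  The
# wines and the rules themselves are invented for illustration.

WINES = [
    {"name": "Riesling",           "colour": "white", "body": "light",  "sweet": True},
    {"name": "Pinot Noir",         "colour": "red",   "body": "medium", "sweet": False},
    {"name": "Cabernet Sauvignon", "colour": "red",   "body": "full",   "sweet": False},
]

def pair(dish):
    # Toy rules: fish prefers white, spicy dishes prefer some sweetness,
    # heavier courses prefer fuller-bodied wines.
    def score(wine):
        s = 0
        s += wine["colour"] == ("white" if dish["ingredient"] == "fish" else "red")
        s += wine["sweet"] == (dish["heat"] == "spicy")
        s += wine["body"] == ("full" if dish["course"] == "entree" else "light")
        return s
    return max(WINES, key=score)["name"]

print(pair({"course": "entree", "ingredient": "fish", "heat": "spicy"}))  # Riesling
```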

Having a spicy fish dish for dinner? Search within the system and it will arrive at a good wine pairing for the meal. Beyond basic pairings, the application has strong possibilities for use in individual restaurants, according to McGuinness, who envisions teaming up with restaurant owners to input their specific menus and wine lists. Thus, a diner could check menus and wine holdings before going out for dinner or they could enter a restaurant, pull out their smart phone, and instantly know what is in the wine cellar and goes best with that chef's clams casino. Beyond pairings, diners could rate different wines, providing fellow diners with personal reviews and the restaurateur with valuable information on what to stock up on next week. Is it a dry restaurant? The application could also be loaded up with the inventory within the liquor store down the street.

Beyond the table, the application can also be used to make personal wine suggestions and build virtual wine cellars that you could share with your friends via Facebook or other social media platforms. It could also be used to manage a personal wine cellar, providing information on what is at peak flavor at the moment or what in your cellar would go best with your famous steak au poivre.

"Today we have 10 gadgets with us at any given time," McGuinness said."We live and breathe social media. With semantic technologies, we can offload more of the searching and reasoning required to locate and share information to the computer while still maintaining personal control over our information and how we use it. We also increase the ability of our technologies to interact with each other and decrease the need for as many gadgets or as many interactions with them since the applications do more work for us."


Source

Wednesday, February 23, 2011

Toward Computers That Fit on a Pen Tip: New Technologies Usher in the Millimeter-Scale Computing Era

And a compact radio that needs no tuning to find the right frequency could be a key enabler to organizing millimeter-scale systems into wireless sensor networks. These networks could one day track pollution, monitor structural integrity, perform surveillance, or make virtually any object smart and trackable.

Both developments at the University of Michigan are significant milestones in the march toward millimeter-scale computing, believed to be the next electronics frontier.

Researchers are presenting papers on each at the International Solid-State Circuits Conference (ISSCC) in San Francisco. The work is being led by three faculty members in the U-M Department of Electrical Engineering and Computer Science: professors Dennis Sylvester and David Blaauw, and assistant professor David Wentzloff.

Bell's Law and the promise of pervasive computing

Nearly invisible millimeter-scale systems could enable ubiquitous computing, and the researchers say that's the future of the industry. They point to Bell's Law, a corollary to Moore's Law. (Moore's says that the number of transistors on an integrated circuit doubles every two years, roughly doubling processing power.)

Bell's Law says there's a new class of smaller, cheaper computers about every decade. With each new class, the volume shrinks by two orders of magnitude and the number of systems per person increases. The law has held from 1960s' mainframes through the '80s' personal computers, the '90s' notebooks and the new millennium's smart phones.

"When you get smaller than hand-held devices, you turn to these monitoring devices," Blaauw said."The next big challenge is to achieve millimeter-scale systems, which have a host of new applications for monitoring our bodies, our environment and our buildings. Because they're so small, you could manufacture hundreds of thousands on one wafer. There could be 10s to 100s of them per person and it's this per capita increase that fuels the semiconductor industry's growth."

The first complete millimeter-scale system

Blaauw and Sylvester's new system is targeted toward medical applications. The work they present at ISSCC focuses on a pressure monitor designed to be implanted in the eye to conveniently and continuously track the progress of glaucoma, a potentially blinding disease. (The device is expected to be commercially available several years from now.)

In a package that's just over 1 cubic millimeter, the system fits an ultra low-power microprocessor, a pressure sensor, memory, a thin-film battery, a solar cell and a wireless radio with an antenna that can transmit data to an external reader device that would be held near the eye.

"This is the first true millimeter-scale complete computing system," Sylvester said.

"Our work is unique in the sense that we're thinking about complete systems in which all the components are low-power and fit on the chip. We can collect data, store it and transmit it. The applications for systems of this size are endless."

The processor in the eye pressure monitor is the third generation of the researchers' Phoenix chip, which uses a unique power gating architecture and an extreme sleep mode to achieve ultra-low power consumption. The newest system wakes every 15 minutes to take measurements and consumes an average of 5.3 nanowatts. To keep the battery charged, it requires exposure to 10 hours of indoor light each day or 1.5 hours of sunlight. It can store up to a week's worth of information.

While this system is miniscule and complete, its radio doesn't equip it to talk to other devices like it. That's an important feature for any system targeted toward wireless sensor networks.

A unique compact radio to enable wireless sensor networks

Wentzloff and doctoral student Kuo-Ken Huang have taken a step toward enabling such node-to-node communication. They've developed a consolidated radio with an on-chip antenna that doesn't need the bulky external crystal that engineers rely on today when two isolated devices need to talk to each other. The crystal reference keeps time and selects a radio frequency band. Integrating the antenna and eliminating this crystal significantly shrinks the radio system. Wentzloff's is less than 1 cubic millimeter in size.

Wentzloff and Huang's key innovation is to engineer the new antenna to keep time on its own and serve as its own reference. By integrating the antenna through an advanced CMOS process, they can precisely control its shape and size and therefore how it oscillates in response to electrical signals.

"Antennas have a natural resonant frequency for electrical signals that is defined by their geometry, much like a pure audio tone on a tuning fork," Wentzloff said."By designing a circuit to monitor the signal on the antenna and measure how close it is to the antenna's natural resonance, we can lock the transmitted signal to the antenna's resonant frequency."

"This is the first integrated antenna that also serves as its own reference. The radio on our chip doesn't need external tuning. Once you deploy a network of these, they'll automatically align at the same frequency."

The researchers are now working on lowering the radio's power consumption so that it's compatible with millimeter-scale batteries.

Greg Chen, a doctoral student in the Department of Electrical Engineering and Computer Science, presents "A Cubic-Millimeter Energy-Autonomous Wireless Intraocular Pressure Monitor." The researchers are collaborating with Ken Wise, the William Gould Dow Distinguished University Professor of Electrical Engineering and Computer Science, on the packaging of the sensor, and with Paul Lichter, chair of the Department of Ophthalmology and Visual Sciences at the U-M Medical School, for the implantation studies. Huang presents "A 60GHz Antenna-Referenced Frequency-Locked Loop in 0.13μm CMOS for Wireless Sensor Networks." This research is funded by the National Science Foundation. The university is pursuing patent protection for the intellectual property, and is seeking commercialization partners to help bring the technology to market.


Source

Thursday, February 17, 2011

Running on a Faster Track: Researchers Develop Scheduling Tool to Save Time on Public Transport

Dr. Tal Raviv and his graduate student Mor Kaspi of Tel Aviv University's Department of Industrial Engineering in the Iby and Aladar Fleischman Faculty of Engineering have developed a tool that makes passenger train journeys shorter, especially when transfers are involved -- a computer-based system to shave precious travel minutes off a passenger's journey.

Dr. Raviv's solution, the "Service Oriented Timetable," relies on computers and complicated algorithms to do the scheduling. "Our solution is useful for any metropolitan region where passengers are transferring from one train to another, and where train service providers need to ensure that the highest number of travellers can make it from Point A to Point B as quickly as possible," says Dr. Raviv.

Saves time and resources

In the recent economic downturn, more people are seeking to scale back their monthly transportation costs. Public transportation is a win-win -- good for both the bank account and the environment. But when travel routes are complicated by transfers, it becomes a hard job to manage who can wait -- and who can't -- between trains.

Another factor is consumer preference. Ideally, each passenger would like a direct train to his destination, with no stops en route. But passengers with different itineraries must compete for the system's resources. Adding a stop at a certain station will improve service for passengers for whom the station is the final destination, but will cause a delay for passengers who are only passing through it. The question is how to devise a schedule which is fair for everyone. What are the decisions that will improve the overall condition of passengers in the train system?

It's not about adding more resources to the system, but more intelligently managing what's already there, Dr. Raviv explains.

More time on the train, less time on the platform

In their train timetabling system, Dr. Raviv and Kaspi study the timetables to find places in the train scheduling system that can be optimized so passengers make it to their final destination faster.

Traditionally, train planners looked for solutions based on the frequency of trains passing through certain stops. Dr. Raviv and Kaspi, however, are developing a high-tech solution for scheduling trains that considers the total travel time of passengers, including their waiting time at transfer stations.
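
A minimal sketch of the quantity being optimised (not Dr. Raviv's model): given candidate departure times, score a timetable by the average journey time over a set of passenger itineraries, transfer waits included. All times below are invented.

```python
# Toy evaluation of a timetable by the average passenger journey time,
# including waiting at a transfer station.  All numbers are invented.

def journey_time(start, legs, timetable):
    """start: minute the passenger reaches the first platform.
    legs: (line, riding minutes) pairs taken in order.
    timetable: line -> sorted list of departure minutes."""
    clock = start
    for line, ride in legs:
        dep = next(t for t in timetable[line] if t >= clock)  # wait for a train
        clock = dep + ride
    return clock - start

timetable = {"A": [0, 20, 40], "B": [5, 25, 45]}             # departure minutes
passengers = [(2, [("A", 12), ("B", 10)]),                   # transfers A -> B
              (18, [("B", 15)])]                             # direct on line B

average = sum(journey_time(s, legs, timetable)
              for s, legs in passengers) / len(passengers)
print(f"average journey time: {average:.1f} minutes")
# Shifting line B's departures changes the transfer wait and hence this
# average; the optimisation searches for the offsets that minimise it.
```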

"Let's say you commute to Manhattan from New Jersey every day. We can find a way to synchronize trains to minimize the average travel time of passengers," says Dr. Raviv."That will make people working in New York a lot happier."

The project has already been simulated on the Israel Railway, reducing the average travel time per commuter from 60 to 48 minutes. The tool can be most useful in countries and cities, he notes, where train schedules are robust and very complicated.

The researchers won a competition of the Railway Application Section of the International Institute for Operation Research and Management Science (INFORMS) last November for their computer program that optimizes a refuelling schedule for freight trains. Dr. Raviv also works on optimizing other forms of public transport, including the bike-sharing programs found in over 400 cities around the world today.


Source

Tuesday, February 15, 2011

Scientists Develop Control System to Allow Spacecraft to Think for Themselves

Professor Sandor Veres and his team of engineers have developed an artificially intelligent control system called 'sysbrain'.

Using natural language programming (NLP), the software agents can read special English language technical documents on control methods. This gives the vehicles advanced guidance, navigation and feedback capabilities to stop them crashing into other objects and the ability to adapt during missions, identify problems, carry out repairs and make their own decisions about how best to carry out a task.

Professor Veres, who is leading the EPSRC-funded project, says: "This is the world's first publishing system of technical knowledge for machines and opens the way for engineers to publish control instructions to machines directly. As well as spacecraft and satellites, this innovative technology is transferable to other types of autonomous vehicles, such as autonomous underwater, ground and aerial vehicles."

To test the control systems that could be applied in a space environment, Professor Veres and his team constructed a unique test facility and a fleet of satellite models, which are controlled by the sysbrain cognitive agent control system.

The 'Autonomous Systems Testbed' consists of a glass covered precision level table, surrounded by a metal framework, which is used to mount overhead visual markers, observation cameras and isolation curtains to prevent any external light sources interfering with experimentation. Visual navigation is performed using onboard cameras to observe the overhead marker system located above the test area. This replicates how spacecraft would use points in the solar system to determine their orientation.

The perfectly-balanced model satellites, which rotate around a pivot point with mechanical properties similar to real satellites, are placed on the table and glide across it on roller bearings almost without friction to mimic the zero-gravity properties of space. Each model has eight propellers to control movement, a set of inertia sensors and additional cameras to be 'spatially aware' and to 'see' each other. The model's skeletal robot frame also allows various forms of hardware to be fitted and experimented with.

Professor Veres adds: "We have invented sysbrains to control intelligent machines. Sysbrain is a special breed of software agents with unique features such as natural language programming to create them, human-like reasoning, and most importantly they can read special English language documents in 'system English' or 'sEnglish'. Human authors of sEnglish documents can put them on the web as publications and sysbrain can read them to enhance their physical and problem solving skills. This allows engineers to write technical papers directly for sysbrain that control the machines."

Further information is available at http://www.sesnet.soton.ac.uk/people/smv/avs_lab/index.htm.


Source

Monday, February 14, 2011

New Wireless Technology Developed for Faster, More Efficient Networks

"Wireless communication is a one-way street. Over."

Radio traffic can flow in only one direction at a time on a specific frequency, hence the frequent use of "over" by pilots and air traffic controllers, walkie-talkie users and emergency personnel as they take turns speaking.

But now, Stanford researchers have developed the first wireless radios that can send and receive signals at the same time.

This immediately makes them twice as fast as existing technology, and with further tweaking will likely lead to even faster and more efficient networks in the future.

"Textbooks say you can't do it," said Philip Levis, assistant professor of computer science and of electrical engineering."The new system completely reworks our assumptions about how wireless networks can be designed," he said.

Cell phone networks allow users to talk and listen simultaneously, but they use a work-around that is expensive and requires careful planning, making the technique less feasible for other wireless networks, including Wi-Fi.

Sparked from a simple idea

A trio of electrical engineering graduate students, Jung Il Choi, Mayank Jain and Kannan Srinivasan, began working on a new approach when they came up with a seemingly simple idea. What if radios could do the same thing our brains do when we listen and talk simultaneously: screen out the sound of our own voice?

In most wireless networks, each device has to take turns speaking or listening. "It's like two people shouting messages to each other at the same time," said Levis. "If both people are shouting at the same time, neither of them will hear the other."

It took the students several months to figure out how to build the new radio, with help from Levis and Sachin Katti, assistant professor of computer science and of electrical engineering.

Their main roadblock to two-way simultaneous conversation was this: Incoming signals are overwhelmed by the radio's own transmissions, making it impossible to talk and listen at the same time.

"When a radio is transmitting, its own transmission is millions, billions of times stronger than anything else it might hear {from another radio}," Levis said."It's trying to hear a whisper while you yourself are shouting."

But, the researchers realized, if a radio receiver could filter out the signal from its own transmitter, weak incoming signals could be heard. "You can make it so you don't hear your own shout and you can hear someone else's whisper," Levis said.

Their setup takes advantage of the fact that each radio knows exactly what it's transmitting, and hence what its receiver should filter out. The process is analogous to noise-canceling headphones.
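
In software terms the cancellation step is just "estimate how much of my own known waveform is in what I heard, then subtract it". The toy NumPy sketch below illustrates only that principle -- the signal levels, the single-tap channel estimate and all names are illustrative, not the Stanford group's actual radio design:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Signal this radio is transmitting -- known exactly to its own receiver.
own_tx = rng.choice([-1.0, 1.0], size=n)

# Weak incoming signal from the far radio, buried under the self-interference
# (the "shout" is roughly a thousand times stronger than the "whisper").
far_signal = 0.001 * rng.choice([-1.0, 1.0], size=n)
received = 1.0 * own_tx + far_signal + 1e-4 * rng.standard_normal(n)

# Estimate how strongly the known transmission appears in the received mixture,
# then subtract that reconstruction -- the "noise-cancelling headphones" step.
h_est = np.dot(received, own_tx) / np.dot(own_tx, own_tx)
residual = received - h_est * own_tx

# The far radio's symbols are now recoverable from the residual.
error_rate = np.mean(np.sign(residual) != np.sign(far_signal))
print(f"estimated self-interference gain: {h_est:.4f}, symbol error rate: {error_rate:.4f}")
```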

When the researchers demonstrated their device last fall at MobiCom 2010, an international gathering of more than 500 of the world's top experts in mobile networking, they won the prize for best demonstration. Until then, people didn't believe sending and receiving signals simultaneously could be done, Jain said. Levis said a researcher even told the students their idea was "so simple and effective, it won't work," because something that obvious must have already been tried unsuccessfully.

Breakthrough for communications technology

But work it did, with major implications for future communications networks. The most obvious effect of sending and receiving signals simultaneously is that it instantly doubles the amount of information you can send, Levis said. That means much-improved home and office networks that are faster and less congested.

But Levis also sees the technology having larger impacts, such as overcoming a major problem with air traffic control communications. With current systems, if two aircraft try to call the control tower at the same time on the same frequency, neither will get through. Levis says these blocked transmissions have caused aircraft collisions, which the new system would help prevent.

The group has a provisional patent on the technology and is working to commercialize it. They are currently trying to increase both the strength of the transmissions and the distances over which they work. These improvements are necessary before the technology is practical for use in Wi-Fi networks.

But even more promising are the system's implications for future networks. Once hardware and software are built to take advantage of simultaneous two-way transmission, "there's no predicting the scope of the results," Levis said.


Source

Thursday, February 3, 2011

Engineers Work to Increase the Speed and Accessibility of Future Wireless Systems

"The U.S. government has noted that broadband wireless access technologies are a key foundation for economic growth, job creation, global competitiveness, and a better way of life," explained Claudio da Silva, an assistant professor in Virginia Tech's Bradley Department of Electrical and Computer Engineering. He was referring to a recent report by the Federal Communications Commission on the need to ensure all Americans have access to broadband capability.

These spectrum-sensing technologies are envisioned to support high-speed internet in rural areas, enable the creation of super Wi-Fi networks, and support the implementation of smart grid technologies. However, implementation of these technologies is seen as "the greatest infrastructure challenge of the 21st century," according to the commission's report.

A major key to solving this challenge is in the design of wireless systems that more efficiently use the limited radio spectrum resources, said da Silva. "As a means to achieve this goal, the U.S. government, through the Federal Communications Commission, has recently finalized rules to make the unused spectrum in the television band available to unlicensed broadband wireless systems. In these systems, devices first identify underutilized spectrum with the use of spectrum databases and/or spectrum sensing and then, following pre-defined rules, dynamically access the 'best' frequency bands on an opportunistic and non-interfering basis."

"The U.S. government has plans to release even more spectrum for unlicensed broadband wireless access," added da Silva."While sensing is not a requirement for television band access, the Federal Communications Commission is encouraging the continued development of spectrum sensing techniques for potential use in these new bands."

"InterDigital's advanced wireless technology development efforts compliment this work at Virginia Tech," added James J. Nolan, InterDigital's executive vice-president of research and development."We see the evolution of wireless systems to dynamic spectrum management technologies as being key to solving the looming bandwidth supply-demand gap by more efficiently leveraging lightly used spectrum. These cognitive radio technologies are an integral part of our holistic bandwidth management strategy, and we have invested significantly in this area of research."

During the first phase of the study, "by exploiting location-dependent signal propagation characteristics, we have developed efficient sensing algorithms that enable a set of devices to work together to determine spectrum opportunities," said William Headley, of Ringgold, Va., one of the Ph.D. students working on this project.

For the second year of the study, the focus is changing to the design of spectrum sensing algorithms that are robust to both man-made noise and severe multipath fading. "The vast majority of sensing algorithms were developed for channels in which the noise is a Gaussian process," said Gautham Chavali, of Blacksburg, Va., the second Ph.D. student working on this project. "However, experimental studies have shown that the noise that appears in most radio channels is highly non-Gaussian," Chavali added.

"Man-made noise, which arises from incidental radiation of a wide range of electrical devices, for example, is partially responsible for this occurrence," Chavali said. In addition, the algorithms to be designed will not rely on the common, but impractical, assumption of perfect synchronization and equalization by the radio front-end, which is an important concern when dealing with realistic multipath fading channels, such as indoor environments.

InterDigital develops advanced wireless technologies that are at the core of mobile devices, networks, and services worldwide. Using a holistic approach to addressing the bandwidth crunch, the company is developing innovations in spectrum optimization, cross-network connectivity and mobility, and intelligent data. InterDigital has provided funding for this 30-month research project, including the donation of state-of-the-art laboratory equipment that will support different wireless projects at Virginia Tech.


Source

Wednesday, February 2, 2011

Mathematical Model Could Help Predict and Prevent Future Extinctions

"Our study provides a theoretical basis for management efforts that would aim to mitigate extinction cascades in food web networks. There is evidence that a significant fraction of all extinctions are caused not by a primary perturbation but instead by the propagation of a cascade," said Motter.

Extinction cascades are often observed following the loss of a key species within an ecosystem. As the system changes to compensate for the loss, availability of food, territory and other resources to each of the remaining members can fluctuate wildly, creating a boom-or-bust environment that can lead to even more extinctions. According to the study, more than 70 percent of these extinctions are preventable, assuming that the system can be brought into balance using only available resources--no new factors may be introduced.
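
The propagation step can be pictured with a purely topological toy model: remove one species, then repeatedly remove any consumer whose prey have all disappeared. The sketch below is only that simplification (the food web is invented); Motter's analysis rests on dynamical population models, not this bare rule.

```python
# Invented toy food web: each consumer lists its prey; basal species have none.
food_web = {
    "grass": [],
    "grasshopper": ["grass"],
    "beetle": ["grass"],
    "bird": ["grasshopper", "beetle"],
    "fox": ["bird"],
}

def secondary_extinctions(web, primary_loss):
    """Remove one species, then repeatedly remove any consumer left with no surviving prey."""
    alive = set(web) - {primary_loss}
    changed = True
    while changed:
        changed = False
        for species in list(alive):
            prey = web[species]
            if prey and not any(p in alive for p in prey):
                alive.remove(species)          # cascade step: consumer starves
                changed = True
    return set(web) - alive - {primary_loss}

print(secondary_extinctions(food_web, "grass"))   # everything above grass collapses
print(secondary_extinctions(food_web, "beetle"))  # the bird still has grasshoppers; no cascade
```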

Motter explained further, "We find that extinction cascades can often be mitigated by suppressing--rather than enhancing--the populations of specific species. In numerous cases, it is predicted that even the proactive removal of a species that would otherwise be extinct by a cascade can prevent the extinction of other species."

The finding may seem counterintuitive to conservationists because the compensatory actions seem to inflict further damage to the system. However, when the entire ecosystem is considered, the effect is beneficial. This news holds promise for those charged with maintaining Earth's biodiversity and natural resources--the health of which can counteract many of the causes of climate change, and some man-made disasters such as the Gulf of Mexico oil spill.

The dodo bird, Raphus cucullatus, is one example of extinction due to human activity. The dodo was a large, flightless bird that became extinct in the 1600s. It is likely that a combination of factors including hunting, loss of habitat, and perhaps even a flash flood, stressed the ecosystem on the island of Mauritius, home of the dodo. Some researchers think that human introduction of non-native species, such as dogs, pigs, cats and rats to the island, is what ultimately led to the demise of the dodo.

In any case, in the future, it may be possible to avoid extinction of some species in stressed ecosystems by applying the new method of analysis developed by Motter.

The goal of this project, funded by the National Science Foundation's Division of Mathematical Sciences, is to develop mathematical methods to study dynamical processes in complex networks. Although the specific application mentioned here may be useful in management of ecosystems, the mathematical foundation underlying the analysis is much more universal. The broad concept is innovative in the area of complex networks because it concludes that large-scale failures can be avoided by focusing on preventing the waves of failure that follow the initial event.

This approach could be used to stabilize a wide array of complex networks. It can apply to biochemical networks in order to slow or stop progression of diseases caused by variations inside individual cells. It can also be used to manage technological networks such as the smart grid to prevent blackouts. It can even apply to regulation of complicated financial networks by identifying key factors in the early stages of a financial downturn, which, when met with human intervention, could potentially save billions of dollars.

The world is a complicated place that gets even trickier when trying to mathematically explain a complex network, especially when the network evolves within an environment that is itself changing. But, Motter says his mathematical model is promising for the study of changing environments.

"Uncertainty itself is not a problem," he quipped."The problem comes when you cannot estimate uncertainty."


Source

Tuesday, February 1, 2011

Computer-Assisted Diagnosis Tools to Aid Pathologists

"The advent of digital whole-slide scanners in recent years has spurred a revolution in imaging technology for histopathology," according to Metin N. Gurcan, Ph.D., an associate professor of Biomedical Informatics at The Ohio State University Medical Center."The large multi-gigapixel images produced by these scanners contain a wealth of information potentially useful for computer-assisted disease diagnosis, grading and prognosis."

Follicular Lymphoma (FL) is one of the most common forms of non-Hodgkin Lymphoma occurring in the United States. FL is a cancer of the human lymph system that usually spreads into the blood, bone marrow and, eventually, internal organs.

A World Health Organization pathological grading system is applied to biopsy samples; doctors usually avoid prescribing severe therapies for lower grades, while they usually recommend radiation and chemotherapy regimens for more aggressive grades.

Accurate grading of the pathological samples generally leads to a promising prognosis, but diagnosis depends solely upon a labor-intensive process that can be affected by human factors such as fatigue, reader variation and bias. Pathologists must visually examine and grade the specimens through high-powered microscopes.

Processing and analysis of such high-resolution images, Gurcan points out, remain non-trivial tasks, not just because of the sheer size of the images, but also due to complexities of underlying factors involving differences in staining, illumination, instrumentation and goals. To overcome many of these obstacles to automation, Gurcan and medical center colleagues, Dr. Gerard Lozanski and Dr. Arwa Shana'ah, turned to the Ohio Supercomputer Center.

Ashok Krishnamurthy, Ph.D., interim co-executive director of the center, and Siddharth Samsi, a computational science researcher there and an OSU graduate student in Electrical and Computer Engineering, put the power of a supercomputer behind the process.

"Our group has been developing tools for grading of follicular lymphoma with promising results," said Samsi."We developed a new automated method for detecting lymph follicles using stained tissue by analyzing the morphological and textural features of the images, mimicking the process that a human expert might use to identify follicle regions. Using these results, we developed models to describe tissue histology for classification of FL grades."

Histological grading of FL is based on the number of large malignant cells counted within tissue samples measuring just 0.159 square millimeters, taken from ten different locations. Based on these counts, FL is assigned to one of three increasing grades of malignancy: Grade I (0-5 cells), Grade II (6-15 cells) and Grade III (more than 15 cells).
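
Those thresholds reduce to a simple lookup once the cell count for a region is known; a minimal sketch with an illustrative function name:

```python
def follicular_lymphoma_grade(large_cell_count):
    """WHO grade from the number of large malignant cells counted in one
    0.159 mm^2 region, following the thresholds described above."""
    if large_cell_count <= 5:
        return "Grade I"
    if large_cell_count <= 15:
        return "Grade II"
    return "Grade III"

print([follicular_lymphoma_grade(n) for n in (3, 12, 22)])  # ['Grade I', 'Grade II', 'Grade III']
```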

"The first step involves identifying potentially malignant regions by combining color and texture features," Samsi explained."The second step applies an iterative watershed algorithm to separate merged regions and the final step involves eliminating false positives."

The large data sizes and complexity of the algorithms led Gurcan and Samsi to leverage the parallel computing resources of OSC's Glenn Cluster in order to reduce the time required to process the images. They used MATLAB® and the Parallel Computing Toolbox™ to achieve significant speed-ups. Speed is the goal of the National Cancer Institute-funded research project, but accuracy is essential. Gurcan and Samsi compared their computer segmentation results with manual segmentation and found an average similarity score of 87.11 percent.

"This algorithm is the first crucial step in a computer-aided grading system for Follicular Lymphoma," Gurcan said."By identifying all the follicles in a digitized image, we can use the entire tissue section for grading of the disease, thus providing experts with another tool that can help improve the accuracy and speed of the diagnosis."


Source

Friday, January 21, 2011

New Device May Revolutionize Computer Memory

Traditionally, there are two types of computer memory devices. Slow memory devices are used in persistent data storage technologies such as flash drives. They allow us to save information for extended periods of time, and are therefore called nonvolatile devices. Fast memory devices allow our computers to operate quickly, but aren't able to save data when the computers are turned off. The necessity for a constant source of power makes them volatile devices.

But now a research team from NC State has developed a single "unified" device that can perform both volatile and nonvolatile memory operation and may be used as main memory.

"We've invented a new device that may revolutionize computer memory," says Dr. Paul Franzon, a professor of electrical and computer engineering at NC State and co-author of a paper describing the research."Our device is called a double floating-gate field effect transistor (FET). Existing nonvolatile memory used in data storage devices utilizes a single floating gate, which stores charge in the floating gate to signify a 1 or 0 in the device -- or one 'bit' of information. By using two floating gates, the device can store a bit in a nonvolatile mode, and/or it can store a bit in a fast, volatile mode -- like the normal main memory on your computer."

The double floating-gate FET could have a significant impact on a number of computer problems. For example, it would allow computers to start immediately, because the computer wouldn't have to retrieve start-up data from its hard drive -- the data could be stored in its main memory.

The new device would also allow "power proportional computing." For example, Web server farms, such as those used by Google, consume an enormous amount of power -- even when there are low levels of user activity -- in part because the server farms can't turn off the power without affecting their main memory.

"The double floating-gate FET would help solve this problem," Franzon says,"because data could be stored quickly in nonvolatile memory -- and retrieved just as quickly. This would allow portions of the server memory to be turned off during periods of low use without affecting performance."

Franzon also notes that the research team has investigated questions about this technology's reliability, and that they think the device "can have a very long lifetime, when it comes to storing data in the volatile mode."

The paper,"Computing with Novel Floating-Gate Devices," will be published Feb. 10 in IEEE'sComputer. The paper was authored by Franzon; former NC State Ph.D. student Daniel Schinke; former NC State master's student Mihir Shiveshwarkar; and Dr. Neil Di Spigna, a research assistant professor at NC State. The research was funded by the National Science Foundation.

NC State's Department of Electrical and Computer Engineering is part of the university's College of Engineering.


Source

Thursday, January 20, 2011

Data Matrix Codes Used to Catalogue Archaeological Heritage

The marking of archaeological material, or coding, is the process in which archaeologists identify each of the artifacts discovered at a site through an identifier code, currently applied manually to each item, which contains the name of the site, the archaeological level at which it was found and an inventory number. This information is essential because it links each artifact to a complex network of data which contextualises it individually.

Manual coding is a routine process which requires much time and effort, and in which many errors occur -- in some cases up to 40%. Moreover, with the passage of time the coding becomes unclear, which often hinders subsequent studies. For this reason an important part of the work done in museums, especially with important artifacts or collection items, consists of recoding the objects.

The CEPAP team has managed to reduce coding errors to 1% by applying a new digital cataloguing system, used at several dig sites to register all types of collections.

To identify each object, DM codes are applied directly to it. The codes scale in proportion to the size of the artifact, down to a minimum of 3x3 millimetres. These codes offer many advantages over bar codes, a registry system tested in past years in different archaeological projects. Due to their size, in many cases bar codes cannot be applied directly to the objects and must instead be adhered to the bag containing the artifact, which can easily lead to errors during the handling of the objects.

DM codes are printed with a program CEPAP designed with the collaborating firm IWS (Internet Web Serveis); the program makes it possible to introduce alphanumeric sequences, forming series of up to 20 digits to identify each of the objects.

Printed on polypropylene labels, the codes are adhered to the artifacts by placing them between two layers of Paraloid B72, an acrylic resin widely used in the restoration of archaeological material because of its durability and the long-term protection it gives the label. Even if the label is damaged -- by up to 30% of the code -- the information can still be read in full.

Each archaeological object carries an identifier code (site, archaeological unit and sequential number). The information in each code can be read using standard readers, video and photo cameras, mobile phone readers, etc. The data include the georeferenced position of each artifact found at the site, taken with a laser theodolite, as well as several quantitative or qualitative variables which are stored in electronic notebooks or PDAs. Therefore, every day when data are transferred to the computer, archaeologists have access to an exhaustive and updated field inventory which includes all of the most recent findings. The program can design and modify quantitative and qualitative variables according to the precise needs of each research project.
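
The per-find record described here maps naturally onto a small data structure. The sketch below is a hypothetical rendering -- the field names, site code and coordinates are invented and it does not reproduce the actual CEPAP program:

```python
from dataclasses import dataclass, field

@dataclass
class ArtifactRecord:
    """One catalogued find: site, archaeological unit and sequential number,
    plus the theodolite position and any project-specific variables."""
    site: str
    unit: str
    sequence: int
    x: float
    y: float
    z: float
    variables: dict = field(default_factory=dict)

    def dm_payload(self):
        # Alphanumeric string that would be encoded into the Data Matrix label.
        return f"{self.site}-{self.unit}-{self.sequence:05d}"

find = ArtifactRecord(site="RDB", unit="N12", sequence=482,
                      x=337512.21, y=4640218.77, z=412.35,
                      variables={"material": "flint", "type": "flake"})
print(find.dm_payload())  # RDB-N12-00482
```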

In addition to representing a new technology application, the system offers other important advantages. The pilot project, carried out at Spanish sites (Roca dels Bous and Cova Gran de Santa Linya in Lleida) and African sites (Olduvai Gorge in Tanzania and Mieso in Ethiopia), was directed by Dr Rafael Mora, director of the Centre and lecturer of Prehistory at UAB, Dr Paloma González and Dr Jorge Martínez Moreno. The new system demonstrates substantial advantages over manual coding in terms of speed and reliability, as well as its easy incorporation into everyday archaeological research tasks.

That is why CEPAP researchers believe it is important for scientists and heritage managers in Spain to consider adopting a single automated registry and cataloguing system for archaeological material, one that is relatively easy to use and fairly economical, and which would make it possible to unify systems that are currently disparate. At the same time it would open the way to digital applications such as online consultation of databases combining DM code information with visual representations (drawings, photos or 3D scans), and online access to museum pieces, making it easier for both researchers and society in general to access cultural heritage.


Source

Saturday, January 15, 2011

Improving Plants: New Software Quantifies Leaf Venation Networks, Enables Plant Biology Advances

To help address the challenge of how to quickly examine a large quantity of leaves, researchers at the Georgia Institute of Technology have developed a user-assisted software tool that extracts macroscopic vein structures directly from leaf images.

"The software can be used to help identify genes responsible for key leaf venation network traits and to test ecological and evolutionary hypotheses regarding the structure and function of leaf venation networks," said Joshua Weitz, an assistant professor in the Georgia Tech School of Biology.

The program, called Leaf Extraction and Analysis Framework Graphical User Interface (LEAF GUI), enables scientists and breeders to measure the properties of thousands of veins much more quickly than manual image analysis tools.

Details of the LEAF GUI software program were published in the "Breakthrough Technologies" section of the January issue of the journal Plant Physiology. Development of the software, which is available for download at www.leafgui.org, was supported by the Defense Advanced Research Projects Agency (DARPA) and the Burroughs Wellcome Fund.

LEAF GUI is a user-assisted software tool that takes an image of a leaf and, following a series of interactive steps to clean up the image, returns information on the structure of that leaf's vein networks. Structural measurements include the dimensions, position and connectivity of all network veins, and the dimensions, shape and position of all non-vein areas, called areoles.

"The network extraction algorithms in LEAF GUI enable users with no technical expertise in image analysis to quantify the geometry of entire leaf networks -- overcoming what was previously a difficult task due to the size and complexity of leaf venation patterns," said the paper's lead author Charles Price, who worked on the project as a postdoctoral fellow at Georgia Tech. Price is now an assistant professor of plant biology at the University of Western Australia.

While the Georgia Tech research team is currently using the software to extract network and areole information from leaves imaged under a wide range of conditions, LEAF GUI could also be used for other purposes, such as leaf classification and description.

"Because the software and the underlying code are freely available, other investigators have the option of modifying methods as necessary to answer specific questions or improve upon current approaches," said Price.

LEAF GUI is not the only software program Weitz's group has developed to investigate the network characteristics of plants. In March 2010, Weitz's group co-authored another "Breakthrough Technologies" paper in Plant Physiology detailing a way to analyze the complex root network structure of crop plants, with a focus on rice.

This work was performed in collaboration with Anjali Iyer-Pascuzzi, John Harer and Philip Benfey at Duke University and was supported by DARPA, the National Science Foundation and the Burroughs Wellcome Fund.

"Both of these software programs are enabling tools in the growing field of 'plant phenomics,' which aims to correlate gene function, plant performance and response to the environment," noted Weitz."By identifying leaf vein characteristics and root structures that differ between plants, we are enabling advances in basic plant science and, in the case of crop plants, assisting researchers in identifying and potentially altering genes to improve plant health, yield and survival."

In addition to those already mentioned, Olga Symonova, Yuriy Mileyko and Troy Hilley also contributed to this work at Georgia Tech.

These projects were supported by the Defense Advanced Research Projects Agency (DARPA) (Award No. HR0011-05-1-0057), National Science Foundation (NSF Plant Genome Research Program Award Nos. 0606873 and 0820624) and Burroughs Wellcome Fund (BWF). The content is solely the responsibility of the principal investigator and does not necessarily represent the official views of DARPA, NSF or BWF.


Source

Friday, January 14, 2011

Fruit Fly Nervous System Provides New Solution to Fundamental Computer Network Problem

With a minimum of communication and without advance knowledge of how they are connected with each other, the cells in the fly's developing nervous system manage to organize themselves so that a small number of cells serve as leaders that provide direct connections with every other nerve cell, said author Ziv Bar-Joseph, associate professor of machine learning at Carnegie Mellon University.

The result, the researchers report in the Jan. 14 edition of the journal Science, is the same sort of scheme used to manage the distributed computer networks that perform such everyday tasks as searching the Web or controlling an airplane in flight. But the method used by the fly's nervous system to organize itself is much simpler and more robust than anything humans have concocted.

"It is such a simple and intuitive solution, I can't believe we did not think of this 25 years ago," said co-author Noga Alon, a mathematician and computer scientist at Tel Aviv University and the Institute for Advanced Study in Princeton, N.J.

Bar-Joseph, Alon and their co-authors -- Yehuda Afek of Tel Aviv University and Naama Barkai, Eran Hornstein and Omer Barad of the Weizmann Institute of Science in Rehovot, Israel -- used the insights gained from fruit flies to design a new distributed computing algorithm. They found it has qualities that make it particularly well suited for networks in which the number and position of the nodes is not completely certain. These include wireless sensor networks, such as those used for environmental monitoring, where sensors are dispersed in a lake or waterway, or systems for controlling swarms of robots.

"Computational and mathematical models have long been used by scientists to analyze biological systems," said Bar-Joseph, a member of the Lane Center for Computational Biology in Carnegie Mellon's School of Computer Science."Here we've reversed the strategy, studying a biological system to solve a long-standing computer science problem."

Today's large-scale computer systems and the nervous system of a fly both take a distributed approach to performing tasks. Though the thousands or even millions of processors in a computing system and the millions of cells in a fly's nervous system must work together to complete a task, none of the elements need to have complete knowledge of what's going on, and the systems must function despite failures by individual elements.

In the computing world, one step toward creating this kind of distributed system is to find a small set of processors that can be used to rapidly communicate with the rest of the processors in the network -- what graph theorists call a maximal independent set (MIS). Every processor in such a network is either a leader (a member of the MIS) or is connected to a leader, but the leaders are not interconnected.

A similar arrangement occurs in the fruit fly, which uses tiny bristles to sense the outside world. Each bristle develops from a nerve cell, called a sensory organ precursor (SOP), which connects to adjoining nerve cells, but does not connect with other SOPs.

For three decades, computer scientists have puzzled over how processors in a network can best elect an MIS. The common solutions use a probabilistic method -- similar to rolling dice -- in which some processors identify themselves as leaders, based in part on how many connections they have with other processors. Processors connected to these self-selected leaders take themselves out of the running and, in subsequent rounds, additional processors self-select and the processors connected to them drop out. At each round, the chances of any processor joining the MIS (becoming a leader) increase as a function of the number of its connections.

This selection process is rapid, Bar-Joseph said, but it entails lots of complicated messages being sent back and forth across the network, and it requires that all of the processors know in advance how they are connected in the network. That can be a problem for applications such as wireless sensor networks, where sensors might be distributed randomly and all might not be within communication range of each other.

During the larval and pupal stages of a fly's development, the nervous system also uses a probabilistic method to select the cells that will become SOPs. In the fly, however, the cells have no information about how they are connected to each other. As various cells self-select themselves as SOPs, they send out chemical signals to neighboring cells that inhibit those cells from also becoming SOPs. This process continues for three hours, until all of the cells are either SOPs or are neighbors to an SOP, and the fly emerges from the pupal stage.

In the fly, Bar-Joseph noted, the probability that any cell will self-select increases not as a function of connections, as in the typical MIS algorithm for computer networks, but as a function of time. The method does not require advance knowledge of how the cells are arranged. The communication between cells is as simple as can be.
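
A minimal sketch of that time-driven rule (the growth schedule, cap and example graph are illustrative, and this is a simplification of the published algorithm): every undecided node volunteers with a probability that rises over time, volunteers with volunteering neighbours back off, and a newly chosen leader silences its neighbours.

```python
import random

def fly_inspired_mis(adjacency, seed=0, max_rounds=1000):
    """Elect a maximal independent set, loosely following the SOP selection
    described above: self-selection probability grows with time, and a chosen
    leader inhibits its neighbours."""
    rng = random.Random(seed)
    undecided = set(adjacency)
    leaders = set()
    t = 0
    while undecided and t < max_rounds:
        t += 1
        p = min(0.9, t / 10.0)                 # selection probability grows with time
        volunteers = {v for v in undecided if rng.random() < p}
        chosen = {v for v in volunteers
                  if not any(u in volunteers for u in adjacency[v])}
        leaders |= chosen
        inhibited = {u for v in chosen for u in adjacency[v]}
        undecided -= chosen | inhibited        # neighbours of a leader drop out
    return leaders

# Tiny example: every node ends up a leader or adjacent to one, and no two leaders touch.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(fly_inspired_mis(graph))
```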

The researchers created a computer algorithm based on the fly's approach and proved that it provides a fast solution to the MIS problem. "The run time was slightly greater than current approaches, but the biological approach is efficient and more robust because it doesn't require so many assumptions," Bar-Joseph said. "This makes the solution applicable to many more applications."

This research was supported in part by grants from the National Institutes of Health and the National Science Foundation.


Source

Wednesday, January 12, 2011

Quantum Quirk Contained

"We have demonstrated, for the first time, that a crystal can store information encoded into entangled quantum states of photons," says paper co-author Dr. Wolfgang Tittel of the University of Calgary's Institute for Quantum Information Science."This discovery constitutes an important milestone on the path toward quantum networks, and will hopefully enable building quantum networks in a few years."

In current communication networks, information is sent through pulses of light moving through optical fibre. The information can be stored on computer hard disks for future use.

Quantum networks operate differently than the networks we use daily.

"What we have is similar but it does not use pulses of light," says Tittel, who is a professor in the Department of Physics and Astronomy at the University of Calgary."In quantum communication, we also have to store and retrieve information. But in our case, the information is encoded into entangled states of photons."

In this state, photons are "entangled," and remain so even when they fly apart. In a way, they communicate with each other even when they are very far apart. The difficulty is getting them to stay put without breaking this fragile quantum link.

To achieve this task, the researchers used a crystal doped with rare-earth ions and cooled it to -270 degrees Celsius. At these temperatures, the material's properties change in a way that allowed the researchers to store and retrieve the photons without measurable degradation.

An important feature is that this memory device uses almost entirely standard fabrication technologies. "The resulting robustness, and the possibility to integrate the memory with current technology such as fibre-optic cables, is important when moving the currently fundamental research towards applications."

Quantum networks will allow information to be sent without fear of anybody listening in.

"The results show that entanglement, a quantum physical property that has puzzled philosophers and physicists since almost hundred years, is not as fragile as is generally believed," says Tittel.


Source

Tuesday, January 11, 2011

Played by Humans, Scored by Nature, Online Game Helps Unravel Secrets of RNA

The game, called EteRNA, harnesses game play to uncover principles for designing molecules of RNA, which biologists believe may be the key regulator of everything that happens in living cells. But the game doesn't end with the highest computer score. Rather, players are scored and ranked based on how well their virtual designs can be rendered as real, physical molecules. Each week's top designs are synthesized in a biochemistry laboratory so researchers can see if the resulting molecules fold themselves into the three-dimensional shapes predicted by computer models.

"Putting a ball through a hoop or drawing a better poker hand is the way we're used to winning games, but in EteRNA you score when the molecule you've designed can assemble itself," said Adrien Treuille, an assistant professor of computer science at Carnegie Mellon, who leads the EteRNA project with Rhiju Das, an assistant professor of biochemistry at Stanford."Nature provides the final score -- and nature is one tough umpire."

Because EteRNA is crowdsourcing the scientific method -- enlisting non-experts to uncover still-mysterious RNA design principles -- it is essential that scoring be rigorous.

"Nature confounds even our best computer models," said Jeehyung Lee, a computer science Ph.D. student at Carnegie Mellon who led the game's development."We knew that if we were to truly tap the wisdom of crowds, our game would have to expose players to every aspect of the scientific process: design, yes, but also experimentation, analysis of results and incorporation of those results into future designs."

The complex, three-dimensional shape of an RNA molecule is critical to its function. The goal of the EteRNA project is to design RNA knots, polyhedra and other shapes never seen before.

"We want to understand how RNA folds in a test tube and eventually in viruses and living cells," Das said."We also want to create a toolkit of basic building blocks that could be used to construct sensors, therapeutic agents and tiny machines."

By synthesizing a design generated by game play, researchers will learn quickly whether the resulting molecule folds into the predicted shape, or something close to it, or if it even folds at all. Even designs that are not synthesized will be scored by nature, in that their scores will be based on the performance of similar designs previously synthesized.

"These experiments are the first-line strategy for validating a design and a crucial part of the scientific method," said Das, whose lab at Stanford synthesizes the molecules."This makes EteRNA similar to what goes on in my lab on a daily basis: You make a prediction, do an experiment, make adjustments and start again." Initially, Das' lab is synthesizing eight designs each week, but is ramping up to synthesize about 100 a week.

RNA, or ribonucleic acid, long has been recognized as a messenger for genetic information, yet its role usually was overshadowed by DNA, which encodes genes, and by proteins, which do the work of the cell. But biologists now suspect RNA plays a much broader role as the regulator of cells, acting much like the operating system of a computer. Understanding RNA design could prove useful for treating or controlling such diseases as HIV, for creating RNA-based sensors and even for building computers out of RNA.

The game employs state-of-the-art simulation software that players use to generate designs. It includes training exercises and challenge puzzles for honing skills, as well as challenges for designing molecules that will be synthesized.

In its use of game play to generate results of scientific interest, EteRNA is similar to other online games such as Foldit, an online protein-folding game that Treuille helped create while at the University of Washington. In fact, Treuille and Das met when they sat at adjacent desks in the Washington biochemistry lab of David Baker, where Treuille was working on Foldit and Das was studying RNA and protein folding and occasionally offering advice.

Both men recognized that the lack of real-world feedback was a limitation of these games. They realized an RNA design game could solve this problem because RNA, unlike many biological molecules, can be readily synthesized in a matter of hours.

RNA consists of long strands of four bases -- adenine, guanine, cytosine and uracil -- with the shape determined by the sequence of the bases. The rules controlling shape are relatively simple, but the sheer size of the molecules greatly complicates the design process.
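
Those simple rules are essentially Watson-Crick pairing (A with U, G with C, plus the G-U wobble). A classic textbook way to see both the simplicity and the combinatorial difficulty is the Nussinov-style dynamic program below, which just counts the maximum number of nested pairs a sequence can form; it is a generic illustration, not EteRNA's scoring model.

```python
def max_base_pairs(seq, min_loop=3):
    """Nussinov-style dynamic program: the maximum number of nested base pairs
    (A-U, G-C and the G-U wobble) with a minimum hairpin loop length.
    A textbook simplification, not EteRNA's scoring model."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                       # base j left unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in pairs:         # pair base k with base j
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(max_base_pairs("GGGAAAUCCC"))  # 3: the G-C stem of a small hairpin
```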

"We've already found it's better not to use regularly repeating sequences of bases because they prove unstable," Treuille said, based on play by beta testers."We're trying to build things that work in nature, and nature favors solutions that are robust."

The game is integrated with Facebook, so players can post accomplishments to their Facebook wall automatically and can create groups that talk about play and compete with each other.

The first challenges are relatively simple, arbitrary shapes, Das said, but will soon begin to incorporate designs of scientific relevance, such as RNA switches that could be used to sense and respond to other molecules in living cells.

Ultimately, players may end up creating designs and making discoveries of their own. "They're already beginning to act like a scientific community," Treuille said. "One player solved a puzzle that a widely used algorithm could not. Another player has written a strategy guide that proposes an algorithm for solving design problems that is different and simpler than anything in the scientific literature."

The EteRNA project is funded by a grant from the National Science Foundation.



Source