Wednesday, March 9, 2011

Extremely Fast Magnetic Random Access Memory (MRAM) Computer Data Storage Within Reach

An invention by the Physikalisch-Technische Bundesanstalt (PTB) promises to change this: a special chip interconnect, combined with dynamic triggering of the component, cuts the response time from the previous 2 ns to below 500 ps. This corresponds to a data rate of up to 2 GBit/s (instead of the approx. 400 MBit/s achieved so far). Power consumption, thermal load and the bit error rate are all reduced. The European patent is to be granted this spring; the US patent was already granted in 2010. An industrial partner to further develop and manufacture such MRAMs under licence is still being sought.

Fast computer memory chips such as the DRAM and SRAM (Dynamic and Static Random Access Memory) in common use today have one decisive disadvantage: if the power supply is interrupted, the information stored on them is irrevocably lost. MRAM promises to put an end to this. In an MRAM, digital information is stored not as an electric charge but via the magnetic alignment of storage cells (magnetic spins). MRAMs are very versatile memory chips because, in addition to non-volatile storage, they offer fast access, a high integration density and an unlimited number of write and read cycles.

However, current MRAM designs are not yet fast enough to outperform the best competitors. Programming a magnetic bit takes approx. 2 ns. Attempts to speed this up run into limits rooted in the fundamental physics of magnetic storage cells: during programming, not only the addressed cell is magnetically excited, but also a large number of other cells. These excitations -- the so-called magnetic ringing -- are only weakly damped; their decay can take up to approx. 2 ns, and during this time no other cell on the MRAM chip can be programmed. As a result, the maximum clock rate of MRAM has so far been limited to approx. 400 MHz.

Until now, every attempt to increase the speed has led to intolerable write errors. PTB scientists have now optimized the MRAM design and integrated so-called ballistic bit triggering, which was also developed at PTB. Here, the magnetic pulses used for programming are shaped so skilfully that the other cells in the MRAM are hardly magnetically excited at all. The pulse ensures that the magnetization of a cell to be switched performs half a precession rotation (180°), while a cell whose stored state is to remain unchanged performs a complete precession rotation (360°). In both cases, the magnetization is in equilibrium once the magnetic pulse has decayed, and no further magnetic excitations occur.
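The geometry of this timing trick can be illustrated with a toy calculation. The numbers below (e.g. a 1 ns precession period) are hypothetical, and real cells follow damped magnetization dynamics rather than this idealized constant-rate rotation; the sketch only shows why both a 180° and a 360° rotation leave no residual excitation:

```python
def residual_excitation(pulse_ps, period_ps):
    """Toy model: during the field pulse the magnetization precesses at a
    constant rate of 360 degrees per precession period. The returned value
    is the angular distance (in degrees) of the final orientation from the
    nearest equilibrium (0 or 180 degrees), a stand-in for the amplitude
    of the leftover magnetic 'ringing'."""
    angle = (360.0 * pulse_ps / period_ps) % 180.0
    return min(angle, 180.0 - angle)

period = 1000.0  # ps; hypothetical precession period of a storage cell

print(residual_excitation(250.0, period))   # 90.0 -> mistimed pulse, strong ringing
print(residual_excitation(500.0, period))   # 0.0  -> 180 deg: cell switched, no ringing
print(residual_excitation(1000.0, period))  # 0.0  -> 360 deg: cell unchanged, no ringing
```

A pulse whose duration is a half-integer or integer multiple of the precession period parks the magnetization in an equilibrium state, so no decay time has to be waited out before the next cell is programmed.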

This optimized bit triggering also works with ultra-short switching pulses shorter than 500 ps, so the maximum clock rate of the MRAM lies above 2 GHz. In addition, several bits can be programmed at the same time, which would increase the effective write rate per bit by more than another order of magnitude. The invention thus allows MRAM to achieve clock rates that can compete with those of the fastest volatile memory components.


Source

Saturday, March 5, 2011

Human Cues Used to Improve Computer User-Friendliness

"Our research in computer graphics and computer vision tries to make using computers easier," says Binghamton University computer scientist Lijun Yin. "Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does."

Yin's team has developed ways to provide information to the computer based on where a user is looking, as well as through gestures or speech. One of the basic challenges in this area is "computer vision": How can a simple webcam work more like the human eye? Can the computer make sense of camera-captured data about a real-world object? Can this data be used to "see" the user and "understand" what the user wants to do?

To some extent, that's already possible. Witness one of Yin's graduate students giving a PowerPoint presentation and using only his eyes to highlight content on various slides. When Yin demonstrated this technology for Air Force experts last year, the only hardware he brought was a webcam attached to a laptop computer.

Yin says the next step would be enabling the computer to recognize a user's emotional state. He works with a well-established set of six basic emotions -- anger, disgust, fear, joy, sadness, and surprise -- and is experimenting with different ways to allow the computer to distinguish among them. Is there enough data in the way the lines around the eyes change? Could focusing on the user's mouth provide sufficient clues? What happens if the user's face is only partially visible, perhaps turned to one side?

"Computers only understand zeroes and ones," Yin says. "Everything is about patterns. We want to find out how to recognize each emotion using only the most important features."
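The pattern-recognition idea Yin describes can be sketched as a minimal nearest-prototype classifier. The feature names and numeric prototypes below are invented for illustration only; they are not data or methods from Yin's lab:

```python
import math

# Hypothetical feature vectors (eye-wrinkle change, mouth-corner lift,
# brow raise), each scaled 0..1. The prototype values are illustrative
# assumptions, not measured expression data.
PROTOTYPES = {
    "joy":      (0.8, 0.9, 0.2),
    "surprise": (0.3, 0.4, 0.9),
    "sadness":  (0.2, 0.1, 0.1),
}

def classify(features):
    """Return the emotion whose prototype is nearest in Euclidean distance."""
    return min(PROTOTYPES, key=lambda emo: math.dist(features, PROTOTYPES[emo]))

print(classify((0.7, 0.8, 0.3)))    # joy
print(classify((0.3, 0.35, 0.95)))  # surprise
```

Real systems use far richer features and learned models, but the core question is the same: which few measurements separate the six basic emotions most reliably.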

He's partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Many people with autism have difficulty interpreting others' emotions; therapists sometimes use photographs of people to teach children how to understand when someone is happy or sad and so forth. Yin could produce not just photographs, but three-dimensional avatars that are able to display a range of emotions. Given the right pictures, Yin could even produce avatars of people from a child's family for use in this type of therapy.

Yin and Gerhardstein's previous collaboration led to the creation of a 3D facial expression database, which includes 100 subjects with 2,500 facial expression models. The database is available at no cost to the nonprofit research community and has become a worldwide test bed for those working on related projects in fields such as biomedicine, law enforcement and computer science.

Once Yin became interested in human-computer interaction, he naturally grew more excited about the possibilities for artificial intelligence.

"We want not only to create a virtual-person model, we want to understand a real person's emotions and feelings," Yin says. "We want the computer to be able to understand how you feel, too. That's hard, even harder than my other work."

Imagine if a computer could understand when people are in pain. Some may ask a doctor for help. But others -- young children, for instance -- cannot express themselves or are unable to speak for some reason. Yin wants to develop an algorithm that would enable a computer to determine when someone is in pain based just on a photograph.

Yin describes that health-care application and, almost in the next breath, points out that the same system that could identify pain might also be used to figure out when someone is lying. Perhaps a computer could offer insights like the ones provided by Tim Roth's character, Dr. Cal Lightman, on the television show Lie to Me. The fictional character is a psychologist with an expertise in tracking deception who often partners with law-enforcement agencies.

"This technology," Yin says, "could help us to train the computer to do facial-recognition analysis in place of experts."


Source

Friday, March 4, 2011

Method Developed to Match Police Sketch, Mug Shot: Algorithms and Software Will Match Sketches With Mugshots in Police Databases

A team led by MSU University Distinguished Professor of Computer Science and Engineering Anil Jain and doctoral student Brendan Klare has developed a set of algorithms and created software that will automatically match hand-drawn facial sketches to mug shots that are stored in law enforcement databases.

Once in use, Klare said, the implications are huge.

"We're dealing with the worst of the worst here," he said. "Police sketch artists aren't called in because someone stole a pack of gum. A lot of time is spent generating these facial sketches so it only makes sense that they are matched with the available technology to catch these criminals."

Typically, the sketches are drawn by forensic artists from information provided by a witness. Unfortunately, Klare said, "often the facial sketch is not an accurate depiction of what the person looks like."

A few commercial software programs that produce sketches based on a witness's description are also available. Those programs, however, tend to be less accurate than sketches drawn by a trained forensic artist.

The MSU project is being conducted in the Pattern Recognition and Image Processing lab in the Department of Computer Science and Engineering. It is the first large-scale experiment matching operational forensic sketches with photographs and, so far, results have been promising.

"We improved significantly on one of the top commercial face-recognition systems," Klare said. "Using a database of more than 10,000 mug shot photos, 45 percent of the time we had the correct person."

All of the sketches used were from real crimes where the criminal was later identified.

"We don't match them pixel by pixel," said Jain, director of the PRIP lab."We match them up by finding high-level features from both the sketch and the photo; features such as the structural distribution and the shape of the eyes, nose and chin."
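Jain's description of matching high-level features rather than pixels can be sketched as ranking a gallery by feature-vector similarity. The vectors and subject IDs below are invented placeholders; the MSU system's actual descriptors of eye, nose and chin shape are far richer:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_gallery(sketch_features, gallery):
    """Rank mug shots by how similar their feature vectors are to the sketch's.
    `gallery` maps a subject id to a hypothetical feature vector."""
    return sorted(gallery,
                  key=lambda sid: cosine_similarity(sketch_features, gallery[sid]),
                  reverse=True)

# Illustrative 3-component vectors standing in for shape descriptors.
gallery = {
    "subject_17": (0.9, 0.1, 0.4),
    "subject_42": (0.2, 0.8, 0.7),
    "subject_99": (0.5, 0.5, 0.5),
}
sketch = (0.85, 0.15, 0.35)

print(rank_gallery(sketch, gallery)[0])  # subject_17
```

Ranking rather than hard matching mirrors how such systems are used in practice: investigators review the top candidates instead of trusting a single automated answer.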

This project and its results appear in the March 2011 issue of the journal IEEE Transactions on Pattern Analysis and Machine Intelligence.

The MSU team plans to field test the system in about a year.

The sketches used in this research were provided by forensic artists Lois Gibson and Karen Taylor, and forensic sketch artists working for the Michigan State Police.


Source

Wednesday, March 2, 2011

New Software 'Lowers the Stress' on Materials Problems

The software package, OOF (Object-Oriented Finite element analysis), is a specialized tool to help materials designers understand how stress and other factors act on a material with a complex internal structure, as is the case with many alloys and ceramics. As its starting point, OOF uses micrographs -- images of a material taken by a microscope. At the simplest level, OOF is designed to answer questions like, "I know what this material looks like and what it's made of, but I wonder what would happen if I pull on it in different ways?" or "I have a picture of this stuff and I know that different parts expand more than others as temperature increases -- I wonder where the stresses are greatest?"

OOF has been available in previous versions since 1998, but the new version (2.1) that the NIST team released on Feb. 16, 2011, adds a number of improvements. According to team member Stephen Langer, version 2.1 is the first dramatic extension of the original capabilities of the software.

"Version 2.1 greatly improves OOF's ability to envision 'non-linear' behavior, such as large-scale deformation, which plays a significant role in many types of stress response," says Langer. "It also allows users to analyze a material's performance over time, not just under static conditions as was the case previously."

Jet turbine blades, for example, can spin more efficiently with a layer of ceramic material sprayed onto their surfaces, but the ceramic layers are brittle. Knowing how these ceramic layers will respond as the metal blades heat up and expand over time is one of the many problems OOF 2.1 is designed to help solve.
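The kind of question OOF helps answer can be illustrated with a back-of-the-envelope estimate. The formula below is the standard thin-film biaxial stress approximation for a fully constrained coating, and the material numbers are rough illustrative values, not OOF output; OOF's finite element analysis is needed precisely because real microstructures are far less uniform than this:

```python
def coating_stress(E_coat, nu_coat, alpha_coat, alpha_sub, delta_T):
    """Biaxial thermal-mismatch stress (Pa) in a thin coating constrained by
    a thick substrate: sigma = E * (alpha_sub - alpha_coat) * dT / (1 - nu)."""
    return E_coat * (alpha_sub - alpha_coat) * delta_T / (1.0 - nu_coat)

# Hypothetical values: a zirconia-like ceramic layer (E = 200 GPa,
# nu = 0.25, alpha = 10e-6 /K) on a nickel-alloy-like blade
# (alpha = 15e-6 /K) heated by 800 K.
sigma = coating_stress(E_coat=200e9, nu_coat=0.25,
                       alpha_coat=10e-6, alpha_sub=15e-6, delta_T=800.0)
print(f"{sigma / 1e6:.0f} MPa")  # 1067 MPa (toy estimate)
```

A uniform-layer estimate like this gives only an average; a tool like OOF works from the actual micrograph to find where, in a heterogeneous microstructure, the stress concentrates.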

"We've also included templates programmers can use to plug in their own details and formulas describing a particular substance," Langer says. "We're trying to make it easy for users to test anything -- we're not concentrating on any particular type of material."

Later this year, the team expects to enable users to analyze three-dimensional micrographs of a material, rather than the 2-D "slices" that can be analyzed at this point.

* OOF is available for free download at http://www.ctcms.nist.gov/oof/oof2/. The package runs on Unix™-like systems, including Linux, OS X and Linux-like environments within Windows.


Source