Friday, December 31, 2010

New Cognitive Robotics Lab Tests Theories of Human Thought

"The real world has a lot of inconsistency that humans handle almost without noticing -- for example, we walk on uneven terrain, we see in shifting light," said Professor Vladislav Daniel Veksler, who is currently teaching Cognitive Robotics."With robots, we can see the problems humans face when navigating their environment."

Cognitive Robotics marries the study of cognitive science -- how the brain represents and transforms information -- with the challenges of a physical environment. Advances in cognitive robotics transfer to artificial intelligence, which seeks to develop more efficient computer systems patterned on the versatility of human thought.

Professor Bram Van Heuveln, who organized the lab, said cognitive scientists have developed a suite of elements -- perception/action, planning, reasoning, memory, decision-making -- that are believed to constitute human thought. When properly modeled and connected, those elements are capable of solving complex problems without the raw power required by precise mathematical computations.

"Suppose we wanted to build a robot to catch fly balls in an outfield. There are two approaches: one uses a lot of calculations -- Newton's law, mechanics, trigonometry, calculus -- to get the robot to be in the right spot at the right time," said Van Heuveln."But that's not the way humans do it. We just keep moving toward the ball. It's a very simple solution that doesn't involve a lot of computation but it gets the job done."

Robotics is an ideal testing ground for that principle because robots act in the real world, and a correct cognitive solution will withstand the unexpected variables presented by the real world.

"The physical world can help us to drive science because it's different from any simulated world we could come up with -- the camera shakes, the motors slip, there's friction, the light changes," Veksler said."This platform -- robotics -- allows us to see that you can't rely on calculations. You have to be adaptive."

The lab is open to all students at Rensselaer. In its first semester, the lab has largely attracted computer science and cognitive science students enrolled in a Cognitive Robotics course taught by Veksler, but Veksler and Van Heuveln hope it will attract more engineering and art students as word of the facility spreads.

"We want different students together in one space -- a place where we can bring the different disciplines and perspectives together," said Van Heuveln."I would like students to use this space for independent research: they come up with the research project, they say 'let's look at this.'"

The lab is equipped with five "Create" robots -- essentially a Roomba robotic vacuum cleaner paired with a laptop; three hand-eye systems; one Chiara (which looks like a large metal crab); and 10 LEGO robots paired with the Sony Handy Board robotic controller.

On a recent day, Jacqui Brunelli and Benno Lee were working on their robot "cat" and "mouse" pair, which try to chase and evade each other respectively; Shane Reilly was improving the computer "vision" of his robotic arm; and Ben Ball was programming his robot to maintain a fixed distance from a pink object waved in front of its "eye."

"The thing that I've learned is that the sensor data isn't exact -- what it 'sees' constantly changes by a few pixels -- and to try to go by that isn't going to work," said Ball, a junior and student of computer science and physics.

Ball said he is trying to pattern his robot on a more human approach.

"We don't just look at an object and walk toward it. We check our position, adjusting our course," Ball said."I need to devise an iterative approach where the robot looks at something, then moves, then looks again to check its results."

The work of the students, who program their robots with the Tekkotsu open-source software, could be applied in future projects, said Van Heuveln.

"As a cognitive scientist, I want this to be built on elements that are cognitively plausible and that are recyclable -- parts of cognition that I can apply to other solutions as well," said Van Heuveln."To me, that's a heck of a lot more interesting than the computational solution."

Early investigations in a generic domain already show how a more cognitive approach employing limited resources can outpace more powerful computers using a brute-force approach, said Veksler.

"We look to humans not just because we want to simulate what we do, which is an interesting problem in itself, but also because we're smart," said Veksler."Some of the things we have, like limited working memory -- which may seem like a bad thing -- are actually optimal for solving problems in our environment. If you remembered everything, how would you know what's important?"


Source

Thursday, November 25, 2010

Updated Software Uses Combination Testing to Catch Bugs Fast and Easy

Catching software"bugs" before a program is released enhances computer security because hackers often exploit these flaws to introduce malware, including viruses, to disrupt or take control of computer systems. But it's difficult. A widely cited 2002 study prepared for NIST* reported that even though 50 percent of software development budgets go to testing, flaws in software still cost the U.S. economy$59.5 billion annually.

Exhaustive checking of all possible combinations of input actions that could cause software failure is not practical, explained NIST's Raghu Kacker, because of the huge number of possibilities, but it's also not necessary. Based on studies of software crashes in applications, including medical devices and Web browsers, NIST's Rick Kuhn and other researchers determined that between 70 and 95 percent of software failures are triggered by only two variables interacting and practically 100 percent of software failures are triggered by no more than six. "Testing every combination up to six variables can be as good as exhaustive testing," said Kacker.

Working with researcher Jeff Yu Lei and his students from the University of Texas at Arlington, NIST designed Advanced Combinatorial Testing System (ACTS), a freely distributed software tool to generate plans for efficiently testing combinations of two to six interacting variables. The method goes beyond the commonly used "pairwise" approach to software testing, which tests combinations of two variables, so it can detect more obscure flaws.

Kuhn describes the process "as packing as many combinations into a set of tests as efficiently as we know how." For example, testing all possible interactions for a product with 34 on and off switches would require 17 billion tests. Using ACTS, all three-way interactions can be evaluated using only 33 tests and all six-way combinations with just 522 tests, instead of 17 billion.
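
The arithmetic behind that example is straightforward: 34 two-valued switches have 2^34, roughly 17 billion, possible settings, but a covering array only needs to guarantee that every small group of switches appears together in every combination of values. The Python sketch below builds such an array with a naive greedy strategy on a scaled-down example (12 switches, 3-way coverage). It illustrates the covering-array idea only; it is not the algorithm ACTS itself uses, and the switch counts here are placeholders chosen to keep the run fast.

```python
# Naive greedy covering-array sketch (illustration only, not the ACTS method).
from itertools import combinations, product
import random

random.seed(0)
NUM_SWITCHES = 12   # scaled down from the article's 34 to keep this quick
STRENGTH = 3        # cover every 3-way interaction

# Every (column triple, value triple) that must appear in at least one test.
targets = {(cols, vals)
           for cols in combinations(range(NUM_SWITCHES), STRENGTH)
           for vals in product((0, 1), repeat=STRENGTH)}

def covers(test, interaction):
    cols, vals = interaction
    return all(test[c] == v for c, v in zip(cols, vals))

tests = []
while targets:
    # Greedy step: try some random candidate tests, keep the one that
    # knocks out the most still-uncovered interactions.
    candidates = [[random.randint(0, 1) for _ in range(NUM_SWITCHES)]
                  for _ in range(50)]
    best = max(candidates, key=lambda t: sum(covers(t, i) for i in targets))
    covered = {i for i in targets if covers(best, i)}
    if not covered:
        # Guarantee progress: force the candidate to hit one remaining target.
        cols, vals = next(iter(targets))
        for c, v in zip(cols, vals):
            best[c] = v
        covered = {i for i in targets if covers(best, i)}
    tests.append(best)
    targets -= covered

print(f"{len(tests)} tests cover all 3-way interactions of {NUM_SWITCHES} switches")
print(f"(exhaustive testing would need {2 ** NUM_SWITCHES} tests)")
```

Even this crude greedy approach needs far fewer tests than exhaustive enumeration; tools like ACTS use much more sophisticated constructions to pack the combinations tighter still.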

The first version of ACTS was released in 2008. Since then, it has been distributed at no cost to 465 organizations and individuals in industry, academia and government. "About half of our users are in IT, but other heavy users are in the financial, defense and telecommunications sectors," said Kuhn. In August, NIST and Lockheed Martin initiated a Cooperative Research and Development Agreement to study the application of ACTS in the company's large and complex software applications. The two groups will jointly publish the results.

NIST released the latest update of ACTS in October. The new version includes an improved user interface and a better method of specifying relationships between parameters for testing. This can eliminate the problem, for example, of spending time on tests for invalid combinations, such as using Internet Explorer on a Linux system.

A new tutorial, Practical Combinatorial Testing, has just been released; it introduces key concepts and methods, explains the use of software tools for generating combinatorial tests, and addresses cost and other practical considerations. The tutorial is designed to be accessible to undergraduate students in computer science or engineering and includes extensive references. NIST Special Publication 800-142 is available at http://csrc.nist.gov/groups/SNS/acts/documents/SP800-142-101006.pdf

* Research Triangle Institute, The Economic Impacts of Inadequate Infrastructure for Software Testing, NIST Planning Report 02-3, May 2002.


Source

Wednesday, November 24, 2010

New Standard Proposed for Supercomputing

The rating system, Graph500, tests supercomputers for their skill in analyzing large, graph-based structures that link the huge numbers of data points present in biological, social and security problems, among other areas.

"By creating this test, we hope to influence computer makers to build computers with the architecture to deal with these increasingly complex problems," Sandia researcher Richard Murphy said.

Rob Leland, director of Sandia's Computations, Computers, and Math Center, said, "The thoughtful definition of this new competitive standard is both subtle and important, as it may heavily influence computer architecture for decades to come."

The group isn't trying to compete with Linpack, the current standard test of supercomputer speed, Murphy said. "There have been lots of attempts to supplant it, and our philosophy is simply that it doesn't measure performance for the applications we need, so we need another, hopefully complementary, test," he said.

Many scientists view Linpack as a "plain vanilla" test mechanism that tells how fast a computer can perform basic calculations, but has little relationship to the actual problems the machines must solve.

The impetus to achieve a supplemental test code came about at "an exciting dinner conversation at Supercomputing 2009," said Murphy. "A core group of us recruited other professional colleagues, and the effort grew into an international steering committee of over 30 people."

Many large computer makers have indicated interest, said Murphy, adding there's been buy-in from Intel, IBM, AMD, NVIDIA, and Oracle corporations. "Whether or not they submit test results remains to be seen, but their representatives are on our steering committee."

Each organization has donated time and expertise of committee members, he said.

While some computer makers and their architects may prefer to ignore a new test for fear their machine will not do well, the hope is that large-scale demand for a more complex test will be a natural outgrowth of the greater complexity of problems.

Studies show that moving data around (not simple computations) will be the dominant energy problem on exascale machines, the next frontier in supercomputing, and the subject of a nascent U.S. Department of Energy initiative to achieve this next level of operations within a decade, Leland said. (Petascale and exascale machines perform on the order of 10 to the 15th and 10 to the 18th operations per second, respectively.)

Part of the goal of the Graph500 list is to point out that, in addition to the growing expense of data movement, any shift in application base from physics to large-scale data problems is likely to further increase the requirements for data movement, because memory and computational capability increase proportionally. That is, an exascale computer requires an exascale memory.

"In short, we're going to have to rethink how we build computers to solve these problems, and the Graph500 is meant as an early stake in the ground for these application requirements," said Murphy.

How does it work?

Large data problems are very different from ordinary physics problems.

Unlike a typical computation-oriented application, large-data analysis often involves searching large, sparse data sets while performing very simple computational operations.

To deal with this, the Graph500 benchmark specifies two computational kernels: one constructs a large graph that inscribes and links huge numbers of participants, and the other performs a parallel search of that graph.
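
As an illustration of that two-kernel structure (not the official reference code, which uses a skewed, scale-free graph generator and runs the search in parallel), a minimal single-threaded Python sketch might look like this: kernel 1 builds a sparse graph from a generated edge list, and kernel 2 performs a breadth-first search from a random root, counting how many edges it traverses along the way. The graph sizes are placeholder values.

```python
# Single-threaded sketch of the Graph500 idea: build a big sparse graph,
# then do a breadth-first search through it. Uniform random edges stand in
# for the benchmark's scale-free generator purely to keep the sketch short.
import random
from collections import defaultdict, deque

random.seed(1)
NUM_VERTICES = 100_000
NUM_EDGES = 1_600_000      # 16 edges per vertex in this toy example

# Kernel 1: construct the graph from an edge list.
adjacency = defaultdict(list)
for _ in range(NUM_EDGES):
    u = random.randrange(NUM_VERTICES)
    v = random.randrange(NUM_VERTICES)
    adjacency[u].append(v)
    adjacency[v].append(u)

# Kernel 2: breadth-first search from a random root.
root = random.randrange(NUM_VERTICES)
parent = {root: root}
frontier = deque([root])
traversed_edges = 0
while frontier:
    u = frontier.popleft()
    for v in adjacency[u]:       # each neighbor lookup is a near-random access
        traversed_edges += 1
        if v not in parent:
            parent[v] = u
            frontier.append(v)

print(f"reached {len(parent):,} vertices, traversed {traversed_edges:,} edges")
```

The inner loop does almost no arithmetic; its cost is dominated by scattered lookups into the adjacency structure, which is exactly the memory-access behavior the benchmark is designed to stress.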

"We want to look at the results of ensembles of simulations, or the outputs of big simulations in an automated fashion," Murphy said."The Graph500 is a methodology for doing just that. You can think of them being complementary in that way -- graph problems can be used to figure out what the simulation actually told us."

Performance for these applications is dominated by the ability of the machine to sustain a large number of small, nearly random remote data accesses across its memory system and interconnects, as well as the parallelism available in the machine.

Five problems for these computational kernels could be cybersecurity, medical informatics, data enrichment, social networks and symbolic networks:

  • Cybersecurity: Large enterprises may create 15 billion log entries per day and require a full scan.
  • Medical informatics: There are an estimated 50 million patient records, with 20 to 200 records per patient, resulting in billions of individual pieces of information, all of which need entity resolution: in other words, which records belong to her, him or somebody else.
  • Data enrichment: Petascale data sets include maritime domain awareness with hundreds of millions of individual transponders, tens of thousands of ships, and tens of millions of pieces of individual bulk cargo. These problems also have different types of input data.
  • Social networks: Almost unbounded, like Facebook.
  • Symbolic networks: Often petabytes in size. One example is the human cortex, with 25 billion neurons and approximately 7,000 connections each.

"Many of us on the steering committee believe that these kinds of problems have the potential to eclipse traditional physics-based HPC {high performance computing} over the next decade," Murphy said.

While general agreement exists that complex simulations work well for the physical sciences, where lab work and simulations play off each other, there is some doubt they can solve social problems that have essentially infinite numbers of components. These include terrorism, war, epidemics and societal problems.

"These are exactly the areas that concern me," Murphy said."There's been good graph-based analysis of pandemic flu. Facebook shows tremendous social science implications. Economic modeling this way shows promise.

"We're all engineers and we don't want to over-hype or over-promise, but there's real excitement about these kinds of big data problems right now," he said."We see them as an integral part of science, and the community as a whole is slowly embracing that concept.

"However, it's so new we don't want to sound as if we're hyping the cure to all scientific ills. We're asking, 'What could a computer provide us?' and we know we're ignoring the human factors in problems that may stump the fastest computer. That'll have to be worked out."


Source

Tuesday, November 23, 2010

Supercomputing Center Breaks the Petaflops Barrier

NERSC's newest supercomputer, a 153,408 processor-core Cray XE6 system, posted a performance of 1.05 petaflops (quadrillions of calculations per second) running the Linpack benchmark. In keeping with NERSC's tradition of naming computers for renowned scientists, the system is named Hopper in honor of Admiral Grace Hopper, a pioneer in software development and programming languages.

NERSC serves one of the largest research communities of all supercomputing centers in the United States. The center's supercomputers are used to tackle a wide range of scientific challenges, including global climate change, combustion, clean energy, new materials, astrophysics, genomics, particle physics and chemistry. The more than 400 projects being addressed by NERSC users represent the research mission areas of DOE's Office of Science.

The increasing power of supercomputers helps scientists study problems in greater detail and with greater accuracy, such as increasing the resolution of climate models and creating models of new materials with thousands of atoms. Supercomputers are increasingly used to complement scientific experimentation by allowing researchers to test theories using computational models and analyze large scientific data sets. NERSC is also home to Franklin, a 38,128 core Cray XT4 supercomputer with a Linpack performance of 266 teraflops (trillions of calculations per second). Franklin is ranked number 27 on the newest TOP500 list.

The system, installed in September 2010, is funded by DOE's Office of Advanced Scientific Computing Research.


Source

Physicists Demonstrate a Four-Fold Quantum Memory

Their work, described in the November 18 issue of the journal Nature, also demonstrated a quantum interface between the atomic memories -- which represent something akin to a computer "hard drive" for entanglement -- and four beams of light, thereby enabling the four-fold entanglement to be distributed by photons across quantum networks. The research represents an important achievement in quantum information science by extending the coherent control of entanglement from two to multiple (four) spatially separated physical systems of matter and light.

The proof-of-principle experiment, led by William L. Valentine Professor and professor of physics H. Jeff Kimble, helps to pave the way toward quantum networks. Similar to the Internet in our daily life, a quantum network is a quantum "web" composed of many interconnected quantum nodes, each of which is capable of rudimentary quantum logic operations (similar to the "AND" and "OR" gates in computers) utilizing "quantum transistors" and of storing the resulting quantum states in quantum memories. The quantum nodes are "wired" together by quantum channels that carry, for example, beams of photons to deliver quantum information from node to node. Such an interconnected quantum system could function as a quantum computer, or, as proposed by the late Caltech physicist Richard Feynman in the 1980s, as a "quantum simulator" for studying complex problems in physics.

Quantum entanglement is a quintessential feature of the quantum realm and involves correlations among components of the overall physical system that cannot be described by classical physics. Strangely, for an entangled quantum system, there exists no objective physical reality for the system's properties. Instead, an entangled system contains simultaneously multiple possibilities for its properties. Such an entangled system has been created and stored by the Caltech researchers.

Previously, Kimble's group entangled a pair of atomic quantum memories and coherently transferred the entangled photons into and out of the quantum memories. For such two-component -- or bipartite -- entanglement, the subsystems are either entangled or not. But for multi-component entanglement with more than two subsystems -- or multipartite entanglement -- there are many possible ways to entangle the subsystems. For example, with four subsystems, all of the possible pair combinations could be bipartite entangled but not be entangled over all four components; alternatively, they could share a "global" quadripartite (four-part) entanglement.
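
As one concrete picture of what such "global" entanglement can look like (an illustration only; the article does not specify the exact state prepared in the experiment), a standard form of quadripartite entanglement is the four-mode W state, in which a single excitation is shared coherently among all four ensembles:

```latex
% Illustrative only: a four-mode W state, one common form of global
% quadripartite entanglement. |1> marks a single spin-wave excitation in an
% ensemble, |0> marks no excitation.
\[
  \lvert W \rangle \;=\; \tfrac{1}{2}\bigl(
      \lvert 1000 \rangle + \lvert 0100 \rangle +
      \lvert 0010 \rangle + \lvert 0001 \rangle \bigr)
\]
```

In a state of this kind, no single pair of ensembles carries the entanglement on its own; the excitation is shared across all four components at once, which is the sense in which the entanglement is global rather than pairwise.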

Hence, multipartite entanglement is accompanied by increased complexity in the system. While this makes the creation and characterization of these quantum states substantially more difficult, it also makes the entangled states more valuable for tasks in quantum information science.

To achieve multipartite entanglement, the Caltech team used lasers to cool four collections (or ensembles) of about one million cesium atoms, separated by 1 millimeter and trapped in a magnetic field, to within a few hundred millionths of a degree above absolute zero. Each ensemble can have atoms with internal spins that are "up" or "down" (analogous to spinning tops) and that are collectively described by a "spin wave" for the respective ensemble. It is these spin waves that the Caltech researchers succeeded in entangling among the four atomic ensembles.

The technique employed by the Caltech team for creating quadripartite entanglement is an extension of the theoretical work of Luming Duan, Mikhail Lukin, Ignacio Cirac, and Peter Zoller in 2001 for the generation of bipartite entanglement by the act of quantum measurement. This kind of "measurement-induced" entanglement for two atomic ensembles was first achieved by the Caltech group in 2005.

In the current experiment, entanglement was "stored" in the four atomic ensembles for a variable time, and then "read out" -- essentially, transferred -- to four beams of light. To do this, the researchers shot four "read" lasers into the four, now-entangled, ensembles. The coherent arrangement of excitation amplitudes for the atoms in the ensembles, described by spin waves, enhances the matter-light interaction through a phenomenon known as superradiant emission.

"The emitted light from each atom in an ensemble constructively interferes with the light from other atoms in the forward direction, allowing us to transfer the spin wave excitations of the ensembles to single photons," says Akihisa Goban, a Caltech graduate student and coauthor of the paper. The researchers were therefore able to coherently move the quantum information from the individual sets of multipartite entangled atoms to four entangled beams of light, forming the bridge between matter and light that is necessary for quantum networks.

The Caltech team investigated the dynamics by which the multipartite entanglement decayed while stored in the atomic memories. "In the zoology of entangled states, our experiment illustrates how multipartite entangled spin waves can evolve into various subsets of the entangled systems over time, and sheds light on the intricacy and fragility of quantum entanglement in open quantum systems," says Caltech graduate student Kyung Soo Choi, the lead author of the Nature paper. The researchers suggest that the theoretical tools developed for their studies of the dynamics of entanglement decay could be applied for studying the entangled spin waves in quantum magnets.

Further possibilities of their experiment include the expansion of multipartite entanglement across quantum networks and quantum metrology. "Our work introduces new sets of experimental capabilities to generate, store, and transfer multipartite entanglement from matter to light in quantum networks," Choi explains. "It signifies the ever-increasing degree of exquisite quantum control to study and manipulate entangled states of matter and light."

In addition to Kimble, Choi, and Goban, the other authors of the paper are Scott Papp, a former postdoctoral scholar in the Caltech Center for the Physics of Information now at the National Institute of Standards and Technology in Boulder, Colorado, and Steven van Enk, a theoretical collaborator and professor of physics at the University of Oregon, and an associate of the Institute for Quantum Information at Caltech.

This research was funded by the National Science Foundation, the National Security Science and Engineering Faculty Fellowship program at the U.S. Department of Defense (DOD), the Northrop Grumman Corporation, and the Intelligence Advanced Research Projects Activity.


Source