Thursday, February 24, 2011

A Semantic Sommelier: Wine Application Highlights the Power of Web 3.0

Web scientist and Rensselaer Polytechnic Institute Tetherless World Research Constellation Professor Deborah McGuinness has been developing a family of applications for the most tech-savvy wine connoisseurs since her days as a graduate student in the 1980s -- before what we now know as the World Wide Web had even been envisioned.

Today, McGuinness is among the world's foremost experts in Web ontology languages. These languages are used to encode meanings in a language that computers can understand. The most recent version of her wine application serves as an exceptional example of what the future of the World Wide Web, often called Web 3.0, might in fact look like. It is also an exceptional tool for teaching future Web Scientists about ontologies.

"The wine agent came about because I had to demonstrate the new technology that I was developing," McGuinness said."I had sophisticated applications that used cutting-edge artificial intelligence technology in domains, such as telecommunications equipment, that were difficult for anyone other than well-trained engineers to understand." McGuinness took the technology into the domain of wines and foods to create a program that she uses as a semantic tutorial, an"Ontologies 101" as she calls it. And students throughout the years have done many things with the wine agent including, most recently, experimentation with social media and mobile phone applications.

Today, the semantic sommelier is set to provide even the most novice of foodies some exciting new tools to expand their wine knowledge and food-pairing abilities on everything from their home PC to their smart phone. Evan Patton, a graduate student in computer science at Rensselaer, is the most recent student to tinker with the wine agent and is working with McGuinness to bring it into the mobile space on both the iPhone and Droid platforms.

The agent uses the Web Ontology Language (OWL), the formal language for the Semantic Web. Like the English language, which uses an agreed upon alphabet to form words and sentences that all English-speaking people can recognize, OWL uses a formalized set of symbols to create a code or language that a wide variety of applications can "read." This allows your computer to operate more efficiently and more intelligently with your cell phone or your Facebook page, or any other webpage or web-enabled device. These semantics also allow for an entirely new generation in smart search technologies.

Thanks to its semantic technology, the sommelier is loaded with basic background knowledge about wine and food. For wine, that includes body, color (red versus white or blush), sweetness, and flavor. For food, this includes the course (e.g., appetizer versus entrée), ingredient type (e.g., fish versus meat), and heat (mild versus spicy). The semantic technologies beneath the application then encode that knowledge and apply reasoning to search and share that information. This semantic functionality can now be exploited for a variety of culinary purposes, all of which McGuinness, herself a lover of fine wines, and Patton are pursuing together.

Having a spicy fish dish for dinner? Search within the system and it will arrive at a good wine pairing for the meal. Beyond basic pairings, the application has strong possibilities for use in individual restaurants, according to McGuinness, who envisions teaming up with restaurant owners to input their specific menus and wine lists. Thus, a diner could check menus and wine holdings before going out for dinner, or they could enter a restaurant, pull out their smart phone, and instantly know what is in the wine cellar and what goes best with that chef's clams casino. Beyond pairings, diners could rate different wines, providing fellow diners with personal reviews and the restaurateur with valuable information on what to stock up on next week. Is it a dry restaurant? The application could also be loaded up with the inventory of the liquor store down the street.
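The pairing logic described above can be sketched as a small rule-based lookup. This is a purely hypothetical Python illustration, not McGuinness's actual OWL ontology; the wine descriptors and pairing rules here are invented for the example.

```python
# Hypothetical sketch of ontology-style wine pairing. The descriptors
# and rules below are illustrative assumptions, not real OWL axioms.

WINES = {
    "Riesling":           {"color": "white", "body": "light",  "sweetness": "sweet"},
    "Cabernet Sauvignon": {"color": "red",   "body": "full",   "sweetness": "dry"},
    "Pinot Noir":         {"color": "red",   "body": "medium", "sweetness": "dry"},
}

def pair(ingredient: str, heat: str) -> list[str]:
    """Return wines whose properties satisfy simple pairing rules."""
    matches = []
    for name, props in WINES.items():
        if ingredient == "fish" and props["color"] != "white":
            continue  # toy rule: fish pairs with white wine
        if heat == "spicy" and props["sweetness"] != "sweet":
            continue  # toy rule: spicy dishes pair with sweeter wines
        matches.append(name)
    return matches

print(pair("fish", "spicy"))   # a spicy fish dish
print(pair("meat", "mild"))
```

A real semantic agent would express rules like these as OWL axioms and let a reasoner derive the matches, but the flavor of the inference is similar.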

Beyond the table, the application can also be used to make personal wine suggestions and virtual wine cellars that you could share with your friends via Facebook or other social media platforms. It could also be used to manage a personal wine cellar, providing information on what is at peak flavor at the moment or what in your cellar would go best with your famous steak au poivre.

"Today we have 10 gadgets with us at any given time," McGuinness said."We live and breathe social media. With semantic technologies, we can offload more of the searching and reasoning required to locate and share information to the computer while still maintaining personal control over our information and how we use it. We also increase the ability of our technologies to interact with each other and decrease the need for as many gadgets or as many interactions with them since the applications do more work for us."


Source

Wednesday, February 23, 2011

Toward Computers That Fit on a Pen Tip: New Technologies Usher in the Millimeter-Scale Computing Era

A compact radio that needs no tuning to find the right frequency could be a key enabler to organizing millimeter-scale systems into wireless sensor networks. These networks could one day track pollution, monitor structural integrity, perform surveillance, or make virtually any object smart and trackable.

Both developments at the University of Michigan are significant milestones in the march toward millimeter-scale computing, believed to be the next electronics frontier.

Researchers are presenting papers on each at the International Solid-State Circuits Conference (ISSCC) in San Francisco. The work is being led by three faculty members in the U-M Department of Electrical Engineering and Computer Science: professors Dennis Sylvester and David Blaauw, and assistant professor David Wentzloff.

Bell's Law and the promise of pervasive computing

Nearly invisible millimeter-scale systems could enable ubiquitous computing, and the researchers say that's the future of the industry. They point to Bell's Law, a corollary to Moore's Law. (Moore's says that the number of transistors on an integrated circuit doubles every two years, roughly doubling processing power.)

Bell's Law says there's a new class of smaller, cheaper computers about every decade. With each new class, the volume shrinks by two orders of magnitude and the number of systems per person increases. The law has held from 1960s' mainframes through the '80s' personal computers, the '90s' notebooks and the new millennium's smart phones.
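Bell's Law can be made concrete with some back-of-the-envelope arithmetic. The starting volume below is a rough assumption (about one cubic meter for a 1960s mainframe), and the decade-per-class pacing is the law's idealization rather than exact product history:

```python
# Idealized Bell's Law arithmetic: each new computer class is roughly
# 100x (two orders of magnitude) smaller, about once per decade.
# The mainframe volume is an assumed round figure for illustration.

mainframe_volume_mm3 = 10 ** 9   # ~1 cubic meter, in cubic millimeters

classes = ["mainframe", "minicomputer", "personal computer",
           "notebook", "smartphone", "millimeter-scale system"]
volumes = [mainframe_volume_mm3 / 100 ** i for i in range(len(classes))]

for decade, (label, v) in enumerate(zip(classes, volumes)):
    print(f"{1960 + 10 * decade}s {label}: ~{v:g} mm^3")
```

Five 100x shrinks take the idealized volume from a cubic meter down past one cubic millimeter, which is why millimeter-scale systems are seen as the next class in the sequence.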

"When you get smaller than hand-held devices, you turn to these monitoring devices," Blaauw said."The next big challenge is to achieve millimeter-scale systems, which have a host of new applications for monitoring our bodies, our environment and our buildings. Because they're so small, you could manufacture hundreds of thousands on one wafer. There could be 10s to 100s of them per person and it's this per capita increase that fuels the semiconductor industry's growth."

The first complete millimeter-scale system

Blaauw and Sylvester's new system is targeted toward medical applications. The work they present at ISSCC focuses on a pressure monitor designed to be implanted in the eye to conveniently and continuously track the progress of glaucoma, a potentially blinding disease. (The device is expected to be commercially available several years from now.)

In a package that's just over 1 cubic millimeter, the system fits an ultra low-power microprocessor, a pressure sensor, memory, a thin-film battery, a solar cell and a wireless radio with an antenna that can transmit data to an external reader device that would be held near the eye.

"This is the first true millimeter-scale complete computing system," Sylvester said.

"Our work is unique in the sense that we're thinking about complete systems in which all the components are low-power and fit on the chip. We can collect data, store it and transmit it. The applications for systems of this size are endless."

The processor in the eye pressure monitor is the third generation of the researchers' Phoenix chip, which uses a unique power gating architecture and an extreme sleep mode to achieve ultra-low power consumption. The newest system wakes every 15 minutes to take measurements and consumes an average of 5.3 nanowatts. To keep the battery charged, it requires exposure to 10 hours of indoor light each day or 1.5 hours of sunlight. It can store up to a week's worth of information.
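The power figures above translate into a strikingly small daily energy budget. A quick sanity check of the arithmetic (using only the numbers stated in the article):

```python
# Energy arithmetic for the eye-pressure monitor described above:
# 5.3 nW average consumption, one measurement wake-up every 15 minutes.

avg_power_w = 5.3e-9          # 5.3 nanowatts
seconds_per_day = 24 * 3600

energy_per_day_j = avg_power_w * seconds_per_day
wakeups_per_day = seconds_per_day // (15 * 60)

print(f"energy per day: {energy_per_day_j * 1e6:.0f} microjoules")
print(f"measurement wake-ups per day: {wakeups_per_day}")
```

Roughly 458 microjoules per day is why a few hours of indoor light on a tiny solar cell is enough to keep the thin-film battery topped up.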

While this system is minuscule and complete, its radio doesn't equip it to talk to other devices like it. That's an important feature for any system targeted toward wireless sensor networks.

A unique compact radio to enable wireless sensor networks

Wentzloff and doctoral student Kuo-Ken Huang have taken a step toward enabling such node-to-node communication. They've developed a consolidated radio with an on-chip antenna that doesn't need the bulky external crystal that engineers rely on today when two isolated devices need to talk to each other. The crystal reference keeps time and selects a radio frequency band. Integrating the antenna and eliminating this crystal significantly shrinks the radio system. Wentzloff's is less than 1 cubic millimeter in size.

The key innovation from Wentzloff and Huang is engineering the new antenna to keep time on its own and serve as its own reference. By integrating the antenna through an advanced CMOS process, they can precisely control its shape and size and therefore how it oscillates in response to electrical signals.

"Antennas have a natural resonant frequency for electrical signals that is defined by their geometry, much like a pure audio tone on a tuning fork," Wentzloff said."By designing a circuit to monitor the signal on the antenna and measure how close it is to the antenna's natural resonance, we can lock the transmitted signal to the antenna's resonant frequency."

"This is the first integrated antenna that also serves as its own reference. The radio on our chip doesn't need external tuning. Once you deploy a network of these, they'll automatically align at the same frequency."

The researchers are now working on lowering the radio's power consumption so that it's compatible with millimeter-scale batteries.

Greg Chen, a doctoral student in the Department of Electrical Engineering and Computer Science, presents "A Cubic-Millimeter Energy-Autonomous Wireless Intraocular Pressure Monitor." The researchers are collaborating with Ken Wise, the William Gould Dow Distinguished University Professor of Electrical Engineering and Computer Science, on the packaging of the sensor, and with Paul Lichter, chair of the Department of Ophthalmology and Visual Sciences at the U-M Medical School, on the implantation studies. Huang presents "A 60GHz Antenna-Referenced Frequency-Locked Loop in 0.13μm CMOS for Wireless Sensor Networks." This research is funded by the National Science Foundation. The university is pursuing patent protection for the intellectual property and is seeking commercialization partners to help bring the technology to market.


Source

Thursday, February 17, 2011

Running on a Faster Track: Researchers Develop Scheduling Tool to Save Time on Public Transport

Dr. Tal Raviv and his graduate student Mor Kaspi of Tel Aviv University's Department of Industrial Engineering in the Iby and Aladar Fleischman Faculty of Engineering have developed a tool that makes passenger train journeys shorter, especially when transfers are involved -- a computer-based system to shave precious travel minutes off a passenger's journey.

Dr. Raviv's solution, the "Service Oriented Timetable," relies on computers and complicated algorithms to do the scheduling. "Our solution is useful for any metropolitan region where passengers are transferring from one train to another, and where train service providers need to ensure that the highest number of travellers can make it from Point A to Point B as quickly as possible," says Dr. Raviv.

Saves time and resources

In the recent economic downturn, more people are seeking to scale back their monthly transportation costs. Public transportation is a win-win -- good for both the bank account and the environment. But when travel routes are complicated by transfers, it becomes a hard job to manage who can wait -- and who can't -- between trains.

Another factor is consumer preference. Ideally, each passenger would like a direct train to his destination, with no stops en route. But passengers with different itineraries must compete for the system's resources. Adding a stop at a certain station will improve service for passengers for whom the station is the final destination, but will cause a delay for passengers who are only passing through it. The question is how to devise a schedule which is fair for everyone. What are the decisions that will improve the overall condition of passengers in the train system?

It's not about adding more resources to the system, but more intelligently managing what's already there, Dr. Raviv explains.

More time on the train, less time on the platform

In their train timetabling system, Dr. Raviv and Kaspi study the timetables to find places in the train scheduling system that can be optimized so passengers make it to their final destination faster.

Traditionally, train planners looked for solutions based on the frequency of trains passing through certain stops. Dr. Raviv and Kaspi, however, are developing a high-tech solution for scheduling trains that considers the total travel time of passengers, including their waiting time at transfer stations.
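The objective described above can be illustrated with a toy example. The numbers below are invented, and the "hold the connecting train" decision is a deliberately simplified stand-in for the full timetabling optimization:

```python
# Toy illustration (invented numbers) of the scheduling objective:
# minimize total passenger travel time, including waits at transfers.

def total_wait(connection_departure: int, arrivals: dict) -> int:
    """Sum waiting minutes over passenger groups at a transfer station,
    given the connecting train's departure minute. Passengers who miss
    the connection wait for the next train, assumed (hypothetically)
    to run 20 minutes later."""
    wait = 0
    for arrival_minute, passengers in arrivals.items():
        if arrival_minute <= connection_departure:
            wait += passengers * (connection_departure - arrival_minute)
        else:
            wait += passengers * (connection_departure + 20 - arrival_minute)
    return wait

# 30 passengers arrive at minute 10, 100 more at minute 12.
arrivals = {10: 30, 12: 100}
for departure in (10, 12):
    print(f"depart at minute {departure}: total wait {total_wait(departure, arrivals)} min")
```

Holding the connection two extra minutes costs the small early group 2 minutes each but spares the large group an 18-minute wait for the next train, which is exactly the kind of trade-off a total-travel-time objective captures.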

"Let's say you commute to Manhattan from New Jersey every day. We can find a way to synchronize trains to minimize the average travel time of passengers," says Dr. Raviv."That will make people working in New York a lot happier."

The system has already been simulated on the Israel Railway network, reducing the average travel time per commuter from 60 to 48 minutes. The tool can be most useful, he notes, in countries and cities where train schedules are robust and very complicated.

The researchers won a competition of the Railway Application Section of the International Institute for Operation Research and Management Science (INFORMS) last November for their computer program that optimizes a refuelling schedule for freight trains. Dr. Raviv also works on optimizing other forms of public transport, including the bike-sharing programs found in over 400 cities around the world today.


Source

Tuesday, February 15, 2011

Scientists Develop Control System to Allow Spacecraft to Think for Themselves

Professor Sandor Veres and his team of engineers have developed an artificially intelligent control system called 'sysbrain'.

Using natural language programming (NLP), the software agents can read special English language technical documents on control methods. This gives the vehicles advanced guidance, navigation and feedback capabilities to stop them crashing into other objects and the ability to adapt during missions, identify problems, carry out repairs and make their own decisions about how best to carry out a task.

Professor Veres, who is leading the EPSRC-funded project, says: "This is the world's first publishing system of technical knowledge for machines and opens the way for engineers to publish control instructions to machines directly. As well as spacecraft and satellites, this innovative technology is transferable to other types of autonomous vehicles, such as autonomous underwater, ground and aerial vehicles."

To test the control systems that could be applied in a space environment, Professor Veres and his team constructed a unique test facility and a fleet of satellite models, which are controlled by the sysbrain cognitive agent control system.

The 'Autonomous Systems Testbed' consists of a glass covered precision level table, surrounded by a metal framework, which is used to mount overhead visual markers, observation cameras and isolation curtains to prevent any external light sources interfering with experimentation. Visual navigation is performed using onboard cameras to observe the overhead marker system located above the test area. This replicates how spacecraft would use points in the solar system to determine their orientation.

The perfectly-balanced model satellites, which rotate around a pivot point with mechanical properties similar to real satellites, are placed on the table and glide across it on roller bearings almost without friction to mimic the zero-gravity properties of space. Each model has eight propellers to control movement, a set of inertia sensors and additional cameras to be 'spatially aware' and to 'see' each other. The model's skeletal robot frame also allows various forms of hardware to be fitted and experimented with.

Professor Veres adds: "We have invented sysbrains to control intelligent machines. Sysbrain is a special breed of software agents with unique features such as natural language programming to create them, human-like reasoning, and most importantly they can read special English language documents in 'system English' or 'sEnglish'. Human authors of sEnglish documents can put them on the web as publications and sysbrain can read them to enhance their physical and problem solving skills. This allows engineers to write technical papers directly for the sysbrains that control the machines."

Further information is available at http://www.sesnet.soton.ac.uk/people/smv/avs_lab/index.htm.


Source

Monday, February 14, 2011

New Wireless Technology Developed for Faster, More Efficient Networks

"Wireless communication is a one-way street. Over."

Radio traffic can flow in only one direction at a time on a specific frequency, hence the frequent use of "over" by pilots and air traffic controllers, walkie-talkie users and emergency personnel as they take turns speaking.

But now, Stanford researchers have developed the first wireless radios that can send and receive signals at the same time.

This immediately makes them twice as fast as existing technology, and with further tweaking will likely lead to even faster and more efficient networks in the future.

"Textbooks say you can't do it," said Philip Levis, assistant professor of computer science and of electrical engineering."The new system completely reworks our assumptions about how wireless networks can be designed," he said.

Cell phone networks allow users to talk and listen simultaneously, but they use a work-around that is expensive and requires careful planning, making the technique less feasible for other wireless networks, including Wi-Fi.

Sparked from a simple idea

A trio of electrical engineering graduate students, Jung Il Choi, Mayank Jain and Kannan Srinivasan, began working on a new approach when they came up with a seemingly simple idea. What if radios could do the same thing our brains do when we listen and talk simultaneously: screen out the sound of our own voice?

In most wireless networks, each device has to take turns speaking or listening. "It's like two people shouting messages to each other at the same time," said Levis. "If both people are shouting at the same time, neither of them will hear the other."

It took the students several months to figure out how to build the new radio, with help from Levis and Sachin Katti, assistant professor of computer science and of electrical engineering.

Their main roadblock to two-way simultaneous conversation was this: Incoming signals are overwhelmed by the radio's own transmissions, making it impossible to talk and listen at the same time.

"When a radio is transmitting, its own transmission is millions, billions of times stronger than anything else it might hear {from another radio}," Levis said."It's trying to hear a whisper while you yourself are shouting."

But, the researchers realized, if a radio receiver could filter out the signal from its own transmitter, weak incoming signals could be heard. "You can make it so you don't hear your own shout and you can hear someone else's whisper," Levis said.

Their setup takes advantage of the fact that each radio knows exactly what it's transmitting, and hence what its receiver should filter out. The process is analogous to noise-canceling headphones.
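The cancellation idea can be shown numerically in a few lines. This is a deliberately simplified sketch with made-up sample values: real self-interference cancellation happens in analog and RF stages with distortion and delay to contend with, but the core subtraction is the same:

```python
# Minimal numeric sketch of self-interference cancellation: the radio
# knows its own transmitted samples exactly, so it can subtract them
# from what its receiver hears, leaving the weak incoming signal.
# All sample values are arbitrary illustrative numbers.

own_transmission = [1_000_000.0, -1_000_000.0, 1_000_000.0]  # the "shout"
incoming_whisper = [0.5, -0.25, 0.75]                        # the weak remote signal

# The receiver observes the sum of both signals.
observed = [tx + rx for tx, rx in zip(own_transmission, incoming_whisper)]

# Because the transmitted samples are known, cancel them out.
recovered = [obs - tx for obs, tx in zip(observed, own_transmission)]

print(recovered)  # the whisper, recovered
```

In practice the transmitted signal arrives at the receiver distorted and delayed, so the hard engineering problem is estimating exactly what to subtract, not the subtraction itself.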

When the researchers demonstrated their device last fall at MobiCom 2010, an international gathering of more than 500 of the world's top experts in mobile networking, they won the prize for best demonstration. Until then, people didn't believe sending and receiving signals simultaneously could be done, Jain said. Levis said a researcher even told the students their idea was "so simple and effective, it won't work," because something that obvious must have already been tried unsuccessfully.

Breakthrough for communications technology

But work it did, with major implications for future communications networks. The most obvious effect of sending and receiving signals simultaneously is that it instantly doubles the amount of information you can send, Levis said. That means much-improved home and office networks that are faster and less congested.

But Levis also sees the technology having larger impacts, such as overcoming a major problem with air traffic control communications. With current systems, if two aircraft try to call the control tower at the same time on the same frequency, neither will get through. Levis says these blocked transmissions have caused aircraft collisions, which the new system would help prevent.

The group has a provisional patent on the technology and is working to commercialize it. They are currently trying to increase both the strength of the transmissions and the distances over which they work. These improvements are necessary before the technology is practical for use in Wi-Fi networks.

But even more promising are the system's implications for future networks. Once hardware and software are built to take advantage of simultaneous two-way transmission, "there's no predicting the scope of the results," Levis said.


Source

Thursday, February 3, 2011

Engineers Work to Increase the Speed and Accessibility of Future Wireless Systems

"The U.S. government has noted that broadband wireless access technologies are a key foundation for economic growth, job creation, global competitiveness, and a better way of life," explained Claudio da Silva, an assistant professor in Virginia Tech's Bradley Department of Electrical and Computer Engineering. He was referring to a recent report by the Federal Communications Commission on the need to ensure all Americans have access to broadband capability.

These spectrum-sensing technologies are envisioned to support high speed internet in rural areas, enable the creation of super Wi-Fi networks, and support the implementation of smart grid technologies. However, implementation of these technologies is seen as "the greatest infrastructure challenge of the 21st century," according to the commission's report.

A major key to solving this challenge is in the design of wireless systems that more efficiently use the limited radio spectrum resources, said da Silva. "As a means to achieve this goal, the U.S. government, through the Federal Communications Commission, has recently finalized rules to make the unused spectrum in the television band available to unlicensed broadband wireless systems. In these systems, devices first identify underutilized spectrum with the use of spectrum databases and/or spectrum sensing and then, following pre-defined rules, dynamically access the 'best' frequency bands on an opportunistic and non-interfering basis."

"The U.S. government has plans to release even more spectrum for unlicensed broadband wireless access," added da Silva."While sensing is not a requirement for television band access, the Federal Communications Commission is encouraging the continued development of spectrum sensing techniques for potential use in these new bands."

"InterDigital's advanced wireless technology development efforts compliment this work at Virginia Tech," added James J. Nolan, InterDigital's executive vice-president of research and development."We see the evolution of wireless systems to dynamic spectrum management technologies as being key to solving the looming bandwidth supply-demand gap by more efficiently leveraging lightly used spectrum. These cognitive radio technologies are an integral part of our holistic bandwidth management strategy, and we have invested significantly in this area of research."

During the first phase of the study, "by exploiting location-dependent signal propagation characteristics, we have developed efficient sensing algorithms that enable a set of devices to work together to determine spectrum opportunities," said William Headley, of Ringgold, Va., one of the Ph.D. students working on this project.

For the second year of the study, the focus is changing to the design of spectrum sensing algorithms that are robust to both man-made noise and severe multipath fading. "The vast majority of sensing algorithms were developed for channels in which the noise is a Gaussian process," said Gautham Chavali, of Blacksburg, Va., the second Ph.D. student working on this project. "However, experimental studies have shown that the noise that appears in most radio channels is highly non-Gaussian," Chavali added.

"Man-made noise, which arises from incidental radiation of a wide range of electrical devices, for example, is partially responsible for this occurrence," Chavali said. In addition, the algorithms to be designed will not rely on the common, but impractical, assumption of perfect synchronization and equalization by the radio front-end, which is an important concern when dealing with realistic multipath fading channels, such as indoor environments.

InterDigital develops advanced wireless technologies that are at the core of mobile devices, networks, and services worldwide. Using a holistic approach to addressing the bandwidth crunch, the company is developing innovations in spectrum optimization, cross-network connectivity and mobility, and intelligent data. InterDigital has provided funding for this 30-month research project, including the donation of state-of-the-art laboratory equipment that will support different wireless projects at Virginia Tech.


Source

Wednesday, February 2, 2011

Mathematical Model Could Help Predict and Prevent Future Extinctions

"Our study provides a theoretical basis for management efforts that would aim to mitigate extinction cascades in food web networks. There is evidence that a significant fraction of all extinctions are caused not by a primary perturbation but instead by the propagation of a cascade," said Motter.

Extinction cascades are often observed following the loss of a key species within an ecosystem. As the system changes to compensate for the loss, availability of food, territory and other resources to each of the remaining members can fluctuate wildly, creating a boom-or-bust environment that can lead to even more extinctions. According to the study, more than 70 percent of these extinctions are preventable, assuming that the system can be brought into balance using only available resources--no new factors may be introduced.
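The propagation mechanism behind a cascade can be sketched with a toy food web. The species and feeding links below are invented for illustration, and the extinction rule (a consumer dies out when all of its prey are gone) is a drastic simplification of the population dynamics in the actual study:

```python
# Toy extinction-cascade sketch on an invented food web. A consumer
# goes extinct once all of its prey are gone, so one primary loss
# can propagate upward through the network.

FOOD_WEB = {            # consumer -> set of prey species
    "hawk": {"snake", "rabbit"},
    "snake": {"mouse"},
    "mouse": {"grass"},
    "rabbit": {"grass"},
}

ALL_SPECIES = {"grass", "mouse", "rabbit", "snake", "hawk"}

def surviving(removed: set) -> set:
    """Iteratively remove consumers left with no surviving prey."""
    alive = ALL_SPECIES - removed
    changed = True
    while changed:
        changed = False
        for species in sorted(alive):
            prey = FOOD_WEB.get(species)
            if prey is not None and not (prey & alive):
                alive.discard(species)
                changed = True
    return alive

print(surviving({"grass"}))   # losing the base of the web dooms everything
print(surviving({"mouse"}))   # the snake starves, but the hawk survives via rabbits
```

Even in this crude model one can see the paper's central point: which species ultimately disappear depends on the network structure, not just on the primary loss, so intervening on the right nodes can change the outcome of the cascade.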

Motter explained further, "We find that extinction cascades can often be mitigated by suppressing--rather than enhancing--the populations of specific species. In numerous cases, it is predicted that even the proactive removal of a species that would otherwise be driven extinct by a cascade can prevent the extinction of other species."

The finding may seem counterintuitive to conservationists because the compensatory actions seem to inflict further damage to the system. However, when the entire ecosystem is considered, the effect is beneficial. This news holds promise for those charged with maintaining Earth's biodiversity and natural resources--the health of which can counteract many of the causes of climate change, and some man-made disasters such as the Gulf of Mexico oil spill.

The dodo bird, Raphus cucullatus, is one example of extinction due to human activity. The dodo was a large, flightless bird that became extinct in the 1600s. It is likely that a combination of factors including hunting, loss of habitat, and perhaps even a flash flood, stressed the ecosystem on the island of Mauritius, home of the dodo. Some researchers think that human introduction of non-native species, such as dogs, pigs, cats and rats to the island, is what ultimately led to the demise of the dodo.

In any case, in the future, it may be possible to avoid extinction of some species in stressed ecosystems by applying the new method of analysis developed by Motter.

The goal of this project, funded by the National Science Foundation's Division of Mathematical Sciences, is to develop mathematical methods to study dynamical processes in complex networks. Although the specific application mentioned here may be useful in management of ecosystems, the mathematical foundation underlying the analysis is much more universal. The broad concept is innovative in the area of complex networks because it concludes that large-scale failures can be avoided by focusing on preventing the waves of failure that follow the initial event.

This approach could be used to stabilize a wide array of complex networks. It can apply to biochemical networks in order to slow or stop progression of diseases caused by variations inside individual cells. It can also be used to manage technological networks such as the smart grid to prevent blackouts. It can even apply to regulation of complicated financial networks by identifying key factors in the early stages of a financial downturn, which, when met with human intervention, could potentially save billions of dollars.

The world is a complicated place that gets even trickier when trying to mathematically explain a complex network, especially when the network evolves within an environment that is itself changing. But, Motter says his mathematical model is promising for the study of changing environments.

"Uncertainty itself is not a problem," he quipped."The problem comes when you cannot estimate uncertainty."


Source

Tuesday, February 1, 2011

Computer-Assisted Diagnosis Tools to Aid Pathologists

"The advent of digital whole-slide scanners in recent years has spurred a revolution in imaging technology for histopathology," according to Metin N. Gurcan, Ph.D., an associate professor of Biomedical Informatics at The Ohio State University Medical Center."The large multi-gigapixel images produced by these scanners contain a wealth of information potentially useful for computer-assisted disease diagnosis, grading and prognosis."

Follicular Lymphoma (FL) is one of the most common forms of non-Hodgkin Lymphoma occurring in the United States. FL is a cancer of the human lymph system that usually spreads into the blood, bone marrow and, eventually, internal organs.

A World Health Organization pathological grading system is applied to biopsy samples; doctors usually avoid prescribing severe therapies for lower grades, while they usually recommend radiation and chemotherapy regimens for more aggressive grades.

Accurate grading of the pathological samples generally leads to a promising prognosis, but diagnosis depends solely upon a labor-intensive process that can be affected by human factors such as fatigue, reader variation and bias. Pathologists must visually examine and grade the specimens through high-powered microscopes.

Processing and analysis of such high-resolution images, Gurcan points out, remain non-trivial tasks, not just because of the sheer size of the images, but also due to complexities of underlying factors involving differences in staining, illumination, instrumentation and goals. To overcome many of these obstacles to automation, Gurcan and medical center colleagues, Dr. Gerard Lozanski and Dr. Arwa Shana'ah, turned to the Ohio Supercomputer Center.

Ashok Krishnamurthy, Ph.D., interim co-executive director of the center, and Siddharth Samsi, a computational science researcher there and an OSU graduate student in Electrical and Computer Engineering, put the power of a supercomputer behind the process.

"Our group has been developing tools for grading of follicular lymphoma with promising results," said Samsi."We developed a new automated method for detecting lymph follicles using stained tissue by analyzing the morphological and textural features of the images, mimicking the process that a human expert might use to identify follicle regions. Using these results, we developed models to describe tissue histology for classification of FL grades."

Histological grading of FL is based on the number of large malignant cells counted within tissue samples measuring just 0.159 square millimeters and taken from ten different locations. Based on these findings, FL is assigned to one of three increasing grades of malignancy: Grade I (0-5 cells), Grade II (6-15 cells) and Grade III (more than 15 cells).
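The grading thresholds just described amount to a simple mapping from cell count to grade, which is easy to state as a function. This sketch encodes only the three thresholds given in the article; the surrounding pipeline (follicle detection, segmentation, counting) is of course the hard part:

```python
# The FL grading thresholds described above, as a simple function.
# Counts are large malignant cells per standard 0.159 mm^2 field.

def fl_grade(avg_cell_count: float) -> str:
    """Map an average malignant-cell count to a follicular lymphoma grade."""
    if avg_cell_count <= 5:
        return "Grade I"
    if avg_cell_count <= 15:
        return "Grade II"
    return "Grade III"

for count in (3, 10, 22):
    print(count, fl_grade(count))
```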

"The first step involves identifying potentially malignant regions by combining color and texture features," Samsi explained."The second step applies an iterative watershed algorithm to separate merged regions and the final step involves eliminating false positives."

The large data sizes and complexity of the algorithms led Gurcan and Samsi to leverage the parallel computing resources of OSC's Glenn Cluster in order to reduce the time required to process the images. They used MATLAB® and the Parallel Computing Toolbox™ to achieve significant speed-ups. Speed is the goal of the National Cancer Institute-funded research project, but accuracy is essential. Gurcan and Samsi compared their computer segmentation results with manual segmentation and found an average similarity score of 87.11 percent.

"This algorithm is the first crucial step in a computer-aided grading system for Follicular Lymphoma," Gurcan said."By identifying all the follicles in a digitized image, we can use the entire tissue section for grading of the disease, thus providing experts with another tool that can help improve the accuracy and speed of the diagnosis."


Source