
Technique improves accuracy of computer vision technologies

21st June 2016
Enaie Azambuja

Researchers from North Carolina State University have developed a new technique that improves the ability of computer vision technologies to identify and separate objects in an image, a process called segmentation. Image processing and computer vision are important for a wide range of applications, from autonomous vehicles to detecting anomalies in medical imaging.

Computer vision technologies use algorithms to segment, or outline, the objects in an image, such as separating the outline of a pedestrian from the backdrop of a busy street.

These algorithms rely on defined parameters (programmed values) to segment images. For example, if a shift in color crosses a specific threshold, a computer vision program will interpret it as a dividing line between two objects; that threshold is one of the algorithm's parameters.
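As a minimal sketch of this idea (illustrative only, not the researchers' code), a single threshold parameter can decide which pixels belong to an object, and a small change in that parameter moves the segment boundary:

```python
import numpy as np

def threshold_segment(gray, threshold=0.5):
    """Label each pixel as object (1) or background (0) depending on
    whether its intensity crosses the threshold parameter."""
    return (gray > threshold).astype(np.uint8)

# A toy 1-D "scanline": dark background with a brighter object in the middle.
scanline = np.array([0.1, 0.2, 0.8, 0.9, 0.85, 0.2, 0.1])

print(threshold_segment(scanline, threshold=0.5))   # object spans three pixels
print(threshold_segment(scanline, threshold=0.82))  # a small parameter change shrinks the object
```

With the threshold at 0.5, the middle three pixels are labelled as the object; raising it slightly to 0.82 drops the 0.8 pixel, which is exactly the kind of parameter sensitivity described above.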

But there's a challenge here. Even small changes in a parameter can lead to very different computer vision results. For example, if a person crossing the street walks in and out of shady areas, that would affect the color a computer sees - and the computer may then "see" the person disappearing and reappearing, or interpret the person and the shadow as a single, large object such as a car.

"Some algorithm parameters may work better than others in any given set of circumstances, and we wanted to know how to combine multiple parameters and algorithms to create better image segmentation by computer vision programs," says Edgar Lobaton, an assistant professor of electrical and computer engineering at NC State and senior author of a paper on the work.

Lobaton and Ph.D. student Qian Ge developed a technique that compiles segmentation data from multiple algorithms and aggregates it, creating a new version of the image. This new image is then segmented again, based on how persistent any given segment is across all of the original inputs.
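The paper's actual criterion is based on topological persistence; as a simplified stand-in, the aggregation step can be sketched as a per-pixel vote across the outputs of several algorithms, keeping only the labels that persist across enough of the inputs (all names and the vote threshold here are assumptions for illustration):

```python
import numpy as np

def consensus_segment(masks, persistence=0.6):
    """Aggregate binary segmentations from several algorithms and keep
    only pixels labelled 'object' in at least a `persistence` fraction
    of the inputs -- a simplified majority-vote stand-in for the
    paper's topological-persistence criterion."""
    stack = np.stack(masks)        # shape: (n_algorithms, ...)
    votes = stack.mean(axis=0)     # per-pixel agreement in [0, 1]
    return (votes >= persistence).astype(np.uint8)

# Three hypothetical algorithms segmenting the same 1-D scanline.
a = np.array([0, 1, 1, 1, 0])
b = np.array([0, 1, 1, 0, 0])
c = np.array([0, 0, 1, 1, 0])

print(consensus_segment([a, b, c]))  # keeps pixels at least 2 of 3 inputs agree on
```

Pixels where only one algorithm fires are discarded, while segments that persist across most inputs survive into the final result.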

Lobaton notes that the image segmentation technique can be used in real time, processing 30 frames per second. This is due, in part, to the fact that most of the computational steps can be run in parallel, rather than sequentially.
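Because each frame (and each pixel vote within a frame) can be computed independently, the workload is naturally parallel. A rough sketch of this structure, using a hypothetical per-frame segmentation step rather than the authors' pipeline:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def segment_frame(frame, threshold=0.5):
    # Per-pixel work is independent of other frames, so many frames
    # can be in flight at once.
    return (frame > threshold).astype(np.uint8)

# Stand-in for one second of 30 fps video.
frames = [np.random.rand(4, 4) for _ in range(30)]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(segment_frame, frames))

print(len(results))  # one segmented output per input frame
```

In practice real-time systems would use GPU kernels or process pools rather than threads, but the structure is the same: independent steps mapped over the data concurrently.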

The paper, "Consensus-Based Image Segmentation via Topological Persistence," will be presented July 1 at the IEEE Conference on Computer Vision and Pattern Recognition in Las Vegas, Nev.
