Making Active Contours Fast

Active contours are a method of image segmentation. They are well-loved for their accuracy, ease of implementation, and nice mathematical underpinnings. However, a full level-set implementation can be quite slow, especially when dealing with large data! Here are some tips to speed things up. By combining these ideas with solid programming techniques, I’ve been able to get active contour trackers running at hundreds of frames per second!
Continue reading “Making Active Contours Fast”

Sparse Field Active Contours

Active contour methods for image segmentation allow a contour to deform iteratively to partition an image into regions. Active contours are often implemented with level sets, but the primary drawback of a level-set implementation is that it is slow to compute. This post presents a technical report on the sparse field method (SFM) proposed by Ross Whitaker [pdf], which allows one to implement level-set active contours very efficiently. The report describes the algorithm in detail, gives specific notes about implementation, and includes source code.
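
The full layer-by-layer bookkeeping is what the report and code below describe; as a rough illustration of the central idea (only pixels in a thin band around the zero level set are stored and updated each iteration), here is a toy Python/NumPy sketch. The function names and the example curvature speed are my own illustrations, not the report’s implementation.

    import numpy as np

    def evolve_narrow_band(phi, speed, iterations=200, band=1.2, dt=0.2):
        """Toy narrow-band update: evolve phi only near its zero level set.

        The real sparse field method also maintains layered lists of pixels
        and pushes approximate signed-distance values outward from the active
        set after every step; that bookkeeping is omitted here."""
        for _ in range(iterations):
            active = np.abs(phi) <= band          # thin band around the contour
            if not active.any():
                break                             # contour has vanished
            F = speed(phi)                        # application-specific speed, same shape as phi
            phi[active] += dt * F[active]         # update only the band pixels
        return phi

    def curvature_speed(phi):
        """Example speed: mean-curvature flow, which smooths the contour."""
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        # Curvature = divergence of the normalized gradient field.
        return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

Even this toy version only updates the pixels near the contour; the real method goes further and keeps the active pixels in linked lists so it never has to scan the whole image, which is where most of the speedup comes from.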

Fast Level Sets Demo

The links below point to the technical report and a demo written in C++/MEX that can be run directly in MATLAB. The demo implements the Chan-Vese segmentation energy, but many energies can be minimized using the provided framework.
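
For reference, the Chan-Vese energy (in its usual form; the demo’s exact weights and optional terms may differ) is

    E(c_1, c_2, C) = \mu \,\mathrm{Length}(C)
                   + \lambda_1 \int_{\mathrm{inside}(C)} \bigl(I(x) - c_1\bigr)^2 \, dx
                   + \lambda_2 \int_{\mathrm{outside}(C)} \bigl(I(x) - c_2\bigr)^2 \, dx

where c_1 and c_2 are the mean image intensities inside and outside the contour C, and \mu, \lambda_1, \lambda_2 weight the length penalty against the two region terms.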

Sparse Field Method – Technical Report [pdf]
Sparse Field Method – Matlab Demo [zip]

To run the MATLAB demo, simply unzip the file and run:
>>sfm_chanvese_demo
at the command line. On the first run, this will compile the MEX code on your machine and then run the demo. If the MEX compile fails, please check your MEX setup. The demo is for a 2D image, but the codes work for 3D images as well.

My hope is that other researchers wishing to quickly implement Whitaker’s method can use this information to easily understand the intricacies of the algorithm, which, in my opinion, were not presented clearly in Whitaker’s original paper. These codes have SUBSTANTIALLY sped up my own segmentations, and they are allowing me to make much faster progress toward completing my PhD!

Thanks to Ernst Schwartz and Andy for helping to find small bugs in the codes and documentation. (They’re fixed now!)

This code can be used according to the MIT license. As long as this work is appropriately cited and attributed, and not being used for proprietary or commercial purposes, I’m fully supportive of you using it. Please drop me a line if it helps you!

For more information regarding active contours, segmentation, and computer vision, check here: Computer Vision Posts

PhD Thesis Proposal Presentation

This week I made a presentation to my thesis committee at Georgia Tech to propose the content that will make up my Ph.D. dissertation. I’m happy to say that it went well and I’m on track to graduate in September 2009. The video below is an abridged version of the presentation I gave. It’s about 15 minutes long and gives a general idea of the work I’ve been doing over the past three years as well as what I hope to accomplish before I finish. In a sentence, I propose a way to analyze image statistics locally that improves performance in several medical image processing applications.


On a side note, people interested in creating screencasts of presentations on a Mac should consider the program ScreenFlow, which worked great for me! This was also my first presentation created with Apple’s Keynote software, but I’m sure it won’t be the last.

CVPR 2008 Wrap-Up and Selected Papers

I return today from a week-long trip to Anchorage, Alaska. I spent the week enjoying the beautiful mountains and the exciting science presented at the Conference on Computer Vision and Pattern Recognition (CVPR 2008) [here are some links to lots of papers from the conference]. This was my first trip to this conference, and I must say that I was impressed with the quality of the work presented. Below, I list some of my favorite papers and give a (very) brief overview:

Continue reading “CVPR 2008 Wrap-Up and Selected Papers”

Tracking Through Changes in Scale

I will be presenting “Tracking Through Changes in Scale” at the International Conference on Image Processing (ICIP) in San Diego in October 2008. This tracker uses a two-phase template matching algorithm in conjunction with a novel template update scheme to keep track of objects as their appearance and size change drastically over the course of a video sequence.
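
The paper’s two-phase matcher and template update scheme aren’t reproduced here, but the basic building block, matching a single template by normalized cross-correlation, can be sketched in a few lines of Python with OpenCV. The file names are placeholders.

    import cv2

    # Placeholder file names; any grayscale frame/template pair will do.
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation; the peak of the score map is the best match.
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)

    h, w = template.shape
    print("best match %.3f at x=%d, y=%d (box %dx%d)"
          % (best_score, best_xy[0], best_xy[1], w, h))

Drastic scale and appearance change is exactly where a single fixed template like this breaks down, which is what the paper’s second matching phase and template update scheme are meant to handle.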

The pdf, presentation material, and citation information will be available on the publications page after the conference. Below are videos of the experiments shown in the paper:

 
LEAVES Sequence (High Resolution Download – 11.2 MB)

 
VEHICLE Sequence (High Resolution Download – 34.8 MB)

 
BOAT Sequence (High Resolution Download – 2.34 MB)

Tracking and Surveillance Projects

I took a special topics course in Spring 2008 at Georgia Tech, ECE 8893: Embedded Video Surveillance Systems. The course included three projects, each shown below. Detailed information about the algorithms is in the source code comments. (All of the source is in Python.)

Project 1: Activity Density Estimation

Use background subtraction to find moving foreground objects in a video sequence, then color-code the regions with the most activity. Here is the result:

Source: p1.py
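
The course code is in p1.py; as a rough sketch of the idea (not the course code), the snippet below keeps a running-average background, thresholds each frame against it, and accumulates the foreground mask into a per-pixel activity map. The file names and thresholds are placeholders.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("input.avi")                  # placeholder video file
    ok, frame = cap.read()
    background = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    activity = np.zeros_like(background)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        foreground = np.abs(gray - background) > 25      # pixels far from the background model
        activity += foreground                           # accumulate per-pixel activity
        cv2.accumulateWeighted(gray, background, 0.01)   # slowly adapt the background

    cap.release()
    # Color-code the activity map: warm colors = busy, cool colors = quiet.
    scaled = cv2.convertScaleAbs(activity, alpha=255.0 / max(activity.max(), 1))
    cv2.imwrite("activity.png", cv2.applyColorMap(scaled, cv2.COLORMAP_JET))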

Project 2: Styrofoam Airplane Tracking

Find all of the white styrofoam planes in the scene and track them throughout the sequence. We used color thresholding and simple dynamics to do the tracking.

Source: p2.py
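
p2.py has the actual course code; as a sketch of just the color-thresholding step (the simple dynamics are omitted), something like the function below finds bright, low-saturation blobs in a frame. The HSV bounds and minimum area are rough guesses that would need tuning.

    import cv2

    def find_white_blobs(frame_bgr, min_area=50):
        """Return centroids of white-ish regions (e.g., styrofoam planes)."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Low saturation + high value is roughly "white"; bounds are guesses.
        mask = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))
        found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours = found[0] if len(found) == 2 else found[1]   # OpenCV 4 vs. 3
        centroids = []
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                m = cv2.moments(c)
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids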

Project 3: Pedestrian Tracking

Count and track the pedestrians passing by on a busy sidewalk. We use a combination of motion estimation via background subtraction and feature matching based on the Bhattacharyya measure.

Source: p3.py
Final Report: p3.pdf
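
For reference, the Bhattacharyya measure used for the appearance matching compares two normalized histograms; a minimal version (not the code from p3.py) is below.

    import numpy as np

    def bhattacharyya_coefficient(hist_p, hist_q):
        """Similarity in [0, 1] between two histograms (1 = identical shape)."""
        p = np.asarray(hist_p, dtype=float)
        q = np.asarray(hist_q, dtype=float)
        p = p / p.sum()                       # normalize to probability distributions
        q = q / q.sum()
        return float(np.sum(np.sqrt(p * q)))

    # Example: compare grayscale histograms of two candidate patches.
    # hist_a = np.histogram(patch_a, bins=32, range=(0, 255))[0]
    # hist_b = np.histogram(patch_b, bins=32, range=(0, 255))[0]
    # score = bhattacharyya_coefficient(hist_a, hist_b)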

Most of this code is very hacky because it was done quickly. However, it was fun to learn Python, and the class was enjoyable overall.

Fast 3D Stereo Vision

Recently, I started looking at faster ways to perform dense stereo matching for some work with 3D video. After some experimentation, I found that pairing a selective mode filter with naive correspondence matching gave satisfactory results very quickly. Check out the slide show below for some results!



[red indicates close, blue indicates far away]

 

Here is a downloadable MATLAB demo, which should work on any pre-aligned stereo image pair:

stereo_modefilt.zip

The entire code is written in MATLAB/C++/MEX. The stereo matching is all in MATLAB, and the selective mode filter is coded in C++ and callable from MATLAB (meaning it must be compiled before it can run). Currently, the correspondence matching is the major bottleneck, so if anyone can improve it, please let me know.
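
The selective mode filter isn’t reproduced here, but naive correspondence matching is easy to sketch: for each pixel in the left image, slide a small window along the same row of the right image and keep the disparity with the smallest sum of absolute differences. The Python/NumPy function below is illustrative only (window size, disparity range, and names are my own choices), and its brute-force loops make clear why the correspondence step is the bottleneck.

    import numpy as np

    def naive_disparity(left, right, max_disp=64, win=5):
        """Brute-force SAD block matching on rectified grayscale images."""
        left = left.astype(np.float32)
        right = right.astype(np.float32)
        h, w = left.shape
        half = win // 2
        disp = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1]
                best_cost, best_d = np.inf, 0
                for d in range(max_disp):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1]
                    cost = np.abs(patch - cand).sum()    # sum of absolute differences
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d                      # larger disparity = closer
        return disp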

This code can be used according to the MIT license. As long as this work is appropriately cited and attributed, and not being used for proprietary or commercial purposes, I’m fully supportive of you using it. Please drop me a line if it helps you!

Brain Science and Computer Vision

Everyone talks about the brain as if it’s a computer. Well, in some ways it is similar. There are nerve cells that act like “wires” and run from one part of the brain to another. With the advent of a new kind of medical imaging technology called “Diffusion Weighted MRI” (DW-MRI), it is possible to find these wires using computer vision. Many people claim that it is important to find whole bundles of these wires in addition to the individual wires.

Recently, I’ve been working on a way to do this using partial differential equations (PDEs). Below you can see some of the results. First, thin white tubes are shown; these represent the individual “wires,” called fibers. From these single fibers, we determine the boundary of the whole bundle, which is then shown as thick yellow tubes.

By studying the shape and size of these bundles, doctors may be able to detect mental illness early and improve our understanding of the brain! Hopefully I can help by making these pretty pictures : )

Active Contour Matlab Code Demo

UPDATE: My new post, Sparse Field Active Contours, implements quicker, more accurate active contours.

Today, I added demo code for the Hybrid Segmentation project. This segmentation algorithm (described in the publications section) can be used to find the boundary of objects in images. The approach uses localized statistics and sometimes gets better results than classic methods. For an example, see the video below: the contour begins as a rectangle but deforms over time until it forms the outline of the monkey.

This can be used to segment many different classes of images. To try it out, download the demo below and run >>localized_seg_demo

localized_seg.zip

This code is based on a standard level set segmentation; it just optimizes a different energy. I’ve also made a demo that implements the well-known Chan-Vese segmentation algorithm. This technique is similar to the one above, but it relies on global statistics. That makes it more robust to initialization, but it also places more constraints on the image. Download it and see what you think! Again, unzip the file and run >>region_seg_demo

sfm_chanvese_demo.zip (New! Described Here)
regionbased_seg.zip (old and slow)
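
As a rough illustration of the kind of global, region-based update the Chan-Vese demo performs, here is a simplified gradient-descent step in Python/NumPy. It omits the curvature (length) term and any reinitialization, it takes the interior to be where phi <= 0, and it is not the code in the demos.

    import numpy as np

    def chan_vese_step(phi, image, dt=0.5, eps=1.0):
        """One simplified Chan-Vese update on the level set function phi
        (interior = phi <= 0). Curvature regularization and reinitialization
        are omitted for clarity."""
        image = np.asarray(image, dtype=float)
        inside = phi <= 0
        if not inside.any() or inside.all():
            return phi                                   # contour has vanished
        c1 = image[inside].mean()                        # mean intensity inside
        c2 = image[~inside].mean()                       # mean intensity outside
        delta = eps / (np.pi * (eps ** 2 + phi ** 2))    # smoothed Dirac: act near the contour
        force = (image - c1) ** 2 - (image - c2) ** 2    # negative where a pixel matches the interior
        return phi + dt * delta * force

Where a pixel’s intensity is closer to the inside mean, the force is negative, phi is pushed down, and the pixel is absorbed into the region. Roughly speaking, the localized demo above replaces these global means with statistics computed in a neighborhood of each contour point, which is what makes it behave differently on images with varying intensities.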

This code can be used according to the MIT license. As long as this work is appropriately cited and attributed, and not being used for proprietary or commercial purposes, I’m fully supportive of you using it. Please drop me a line if it helps you!