Active contour methods for image segmentation allow a contour to deform iteratively to partition an image into regions. Active contours are often implemented with level sets, whose primary drawback is that they are slow to compute. This post presents a technical report describing, in detail, the sparse field method (SFM) proposed by Ross Whitaker [pdf], which allows one to implement level set active contours very efficiently. The report walks through the algorithm step by step, gives specific implementation notes, and provides source code.

Fast Level Sets Demo

The links below point to the technical report and a demo written in C++/MEX that can be run directly in MATLAB. The demo implements the Chan-Vese segmentation energy, but many energies can be minimized using the provided framework.
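To give a feel for the energy being minimized: the Chan-Vese data term compares each pixel's intensity to the mean intensities inside and outside the contour, and pushes the contour to separate the two. Here is a minimal sketch of that force computation (illustrative names, not the ones used in the released MEX source):

```cpp
#include <cassert>

// Chan-Vese data force at one pixel, following the convention
// F = (I - u)^2 - (I - v)^2:
//   I = pixel intensity
//   u = mean intensity inside the contour
//   v = mean intensity outside the contour
// A pixel closer to the inside mean yields a negative force,
// which drives phi negative (i.e., into the "inside" region).
double chanveseForce(double I, double u, double v) {
    double din  = I - u;  // deviation from the inside mean
    double dout = I - v;  // deviation from the outside mean
    return din * din - dout * dout;
}
```

In the full algorithm this force is normalized and combined with a curvature penalty before phi is updated along the zero level set.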

Sparse Field Method – Technical Report [pdf]
Sparse Field Method – Matlab Demo [zip]

To run the MATLAB demo, simply unzip the file and run

    sfm_local_demo

at the command line. On the first run, this will compile the MEX code on your machine and then run the demo. If the MEX compilation fails, please check your MEX setup (run "mex -setup"). The demo is for a 2D image, but the codes work for 3D images as well.

My hope is that other researchers wishing to implement Whitaker’s method quickly can use this information to easily understand the intricacies of the algorithm, which, in my opinion, were not presented clearly in Whitaker’s original paper. Personally, these codes have SUBSTANTIALLY sped up my segmentations and are allowing me to make much faster progress toward completing my PhD!

Thanks to Ernst Schwartz and Andy for helping to find small bugs in the codes and documentation. (They’re fixed now!)

This code can be used according to the MIT license. As long as this work is appropriately cited and attributed, and not being used for proprietary or commercial purposes, I’m fully supportive of you using it. Please drop me a line if it helps you!

For more information regarding active contours, segmentation, and computer vision, check here: Computer Vision Posts

95 thoughts on “Sparse Field Active Contours”

  1. Hi Shawn,

    Could you please tell me the reason why you did not normalize the curvature force like you did for the external region force?
    I am referring to this line, by the way:
    F = F/(max(abs(F))+eps) – 1.0*kappa(phi, C);


  2. Dear Shawn,

    The code you provided is easy to use and very helpful. It is great work. I will study it carefully and build some applications with it. I will cite your paper. You seem to be a very kind man.

    Thanks and Regards.


  3. I can’t get it to run. I’m using R2009b, and I have the path set correctly. I get:

    >> sfm_chanvese_demo
    ??? Undefined function or variable ‘sfm_chanvese_demo’.

    So I tried:

    >> sfm_local_demo
    ??? Undefined function or method ‘fat_contour’ for input arguments of type

    Error in ==> visualize_phi at 4
    [h1 h2] = fat_contour(phi);

    Error in ==> sfm_local_chanvese at 79
    [phi c] = sfm_local_chanvese_mex(img,mask,iterations,display,lam,rad);

    Error in ==> sfm_local_demo at 22
    [seg] = sfm_local_chanvese(img,mask,iterations,lambda,rad);

    Any idea what I’m doing wrong?

  4. Okay, got it. I had to

    1. Download fat_contour.m
    2. Comment out the delete(h1) and delete(h2)
    3. make the change described above with the ~
    4. clear all

    And then it works!

  5. Hey Shawn:

    So apparently the technical write-up is broken or something; I can’t really open it. Can you upload another copy? Thanks.

  6. Dear Shawn,

    Thank you for your thorough work on the sparse field level set algorithm. It is helpful. I would like to ask if you have any idea how to deal with anisotropic 2D or 3D image spacing. Apart from the correct calculation of the gradients and the forces, the sparse field algorithm assumes that the neighbors of, say, a zero-list member are all either layer-1 or layer-minus-1 members, whereas when the z-factor (i.e., intra-slice spacing over inter-slice spacing) is larger than 1.5 this is impossible. Am I missing anything?
    Thanks a lot

  7. @Hadar
    As you said, correct computation of the gradients (i.e., based on the true inter-pixel spacing) is important.

    However, as long as the spacing is on a regular grid (even if x, y, and z spacings are different), the SFM should continue to work because of the floating point “level set” structure that’s maintained. That structure allows you to interpolate the “zero level set” at sub-pixel resolution regardless of pixel spacing.

    You can also consider having more adjacent layers if your spacing gets really huge! Whitaker describes how to do this, and it’s a trivial extension of my published algorithm.

    Finally, a colleague of mine wrote codes to perform SFM on very non-uniform data (e.g., point clouds). I could try to track those codes down if people would find them helpful.
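To make the sub-pixel interpolation concrete: since phi stores signed values in floating point, the zero crossing along an edge between two adjacent grid points with opposite-signed phi can be located by linear interpolation, and scaling by the physical spacing of that axis recovers the true position. A minimal sketch (my own illustration, not taken from the released codes):

```cpp
// Fractional position of the zero crossing along the edge from
// grid point A to grid point B, assuming phiA and phiB have
// opposite signs; the result lies in (0, 1). Multiplying by the
// physical spacing along that axis gives the true offset, which
// is why anisotropic spacing does not break the representation.
double zeroCrossingOffset(double phiA, double phiB) {
    return phiA / (phiA - phiB);  // solve phiA + t*(phiB - phiA) = 0
}
```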

  8. hello,

    First thanks for your program. really thank you.

    i have 2 questions.

    First, about the CFL condition: I don’t understand why this condition is needed, and also why you multiply it by dphidt in order to evolve the contour.

    Second: why is the expression for curvature different from Whitaker’s (page 10 of “A Level-Set Approach to 3D Reconstruction From Range Data”)?

    Thanks in advance.

  9. Hi everybody,

    I tested the region_seg_demo. It works very well, but I have some questions about the equations used in this code.

    F = (I(idx)-u).^2-(I(idx)-v).^2; % force from image information
    curvature = get_curvature(phi,idx); % force from curvature penalty
    dphidt = F./max((F)) + alpha*curvature; % gradient descent to minimize energy
    %– maintain the CFL condition
    dt = 3/(max(dphidt)+eps);

    %– evolve the curve
    phi(idx) = phi(idx) + dt.*dphidt;

    %– Keep SDF smooth
    phi = sussman(phi, .5);

    Could someone clarify these equations for me?

    Thanks in advance.

  10. I have an error in the sfm_local_chanvese_mex.cpp file.
    The error is as follows:
    error C2660: ‘en_chanvese_compute’ : function does not take 6 arguments
    Please suggest a solution.

  11. Is the algorithm in your technical report up to date?
    I ask because I don’t think it preserves the distance invariant, which leads to inconsistencies in the distance function, duplicates in the lists, and ends in a mess.

  12. Thanks for replying.

    So it’s probably my mistake, since I’m not sure what invariant the algorithm keeps at each level. For example, I assumed the algorithm tries to preserve the data range of each level: Lp2 should always have values in (1.5, 2.5]. But according to line 7 of procedure 2, p would still have an Lp1 value when it is transferred to Lp2. I find it hard to follow all the cases and see whether this is crucial or not, but currently I admit that the algorithm behaves well, and my previous impression was due to a bug in my implementation. Maybe the worst that could happen is orphan cells (as implied by [Whitaker98]) or a wrong approximation of the distance function at the higher levels. I would appreciate your opinion on the matter.


  13. Also I was wondering if you noticed any precision degradation due to reinitialization of the level set in each iteration. To quote [Sethian99] ‘Motion by Intrinsic Laplacian of Curvature’: “The use of reinitialization provides the extra stability required to complete the numerical method, however it comes at a price. Reinitialization can also be seen as a form of artificial diffusion. The amount of artificial diffusion added to the problem and the amount of the error introduced in slight movement of the interface varies between reinitialization methods. In any case we use reinitialization as little as possible without compromising the stability of the method”.

    Which means, according to Sethian, that your method should be very stable (you also preserve the CFL condition by normalizing the force and limiting it to 0.5), but at a price in precision.

    Did you try a fourth-order flow, such as Laplacian-of-curvature or Willmore flow?

  14. Concerning my remark from September 19th, 2011: there is a basic thing that I don’t understand here, and I’d appreciate an explanation.
    Why don’t we simply, in each iteration, calculate F for the zero level set and then reconstruct all the other levels from scratch? More specifically:

    1. Calculate F for Lz.
    2. Go over all the other lists Ln[], Lp[] and delete them, while setting the cells they were pointing to to the highest label (±3 in your case).
    3. Iterate Lz:
    3.1 Update the cell value according to F.
    3.2 If the value is still in the Lz interval, add the neighbors with label > 0 to Lp1 with a value of the Lz cell + 1, and neighbors with label i to Lp(i+1). Update the value of all neighbors with label i+1 to Lpi’s value + 1.

    It would replace your procedures 2 and 3, wouldn’t need the intermediate Sx lists, would be faster, and it would be cleaner, in the sense that it would keep the distance invariant.

  15. @Zohar
    Thanks for the comments!

    The approach you lay out may be faster in some situations (e.g., where there are few voxels in Lz). In the case where the number of voxels gets large (and only a sub-set of those voxels cross the zero-level-set on each iteration) the method presented here will be MUCH faster.

    In terms of invariant distances, it’s important to ensure floating-point accuracy on the phi function to ensure sub-voxel accuracy of the surface.

  16. I urge you to please consider my point again. I think that my method is faster. In your method you go over all the items in all the lists, update the values (we both update the same way, with floating-point accuracy), and then move the necessary elements to the Sx lists, including newly discovered cells. Then you go over the Sx lists and move the items to their corresponding Lx. I iterate the *same* number of items in the lists when I rediscover them from scratch, and instead of using the Sx lists, I delete the items beforehand. You could argue that allocating new cells for the lists would take time, but you can hold a pointer list of unoccupied cells and just move pointers from list to list without any allocation. Again, the time you spend updating an element according to its neighbors is the same amount of time I spend rediscovering the item from scratch.

    Let’s check my claim on an extreme case where your method should have an advantage over mine: the new F is zero for all of Lz. I would iterate once over the Ln and Lp lists, clearing all cell values. Then I would iterate Lz again and rediscover all the neighbors. That means if we have N cells in the active set, I would have 2N accesses; you, on the other hand, would have only N.
    Considering the other extreme case, where the whole of Lz moved, you would need to relocate all your items through the Sx lists, so you would spend 3N while I still spend 2N. Then we should consider the amortized average case, although if you normalize the force as you do, the level set should change quite often.
    Well, I admit my examples were a bit sketchy, but I hope you get the idea: we access memory and move the list pointers in roughly the same amounts, and I don’t think it would matter much in terms of performance.

    Why my solution is cleaner: the code is shorter and simpler, and it’s really easy to track the changes and prove that the distance invariant is kept (a cell in level i has its value in the allowed distance range), as opposed to the one example I gave where your invariant can fail (I also noticed it in my experiments).

  17. Shawn Lankton

    Hi, thank you for sharing this code; it works for what I want to do. But it is hard for me to understand all of the mathematical equations in your paper “Localized Region-Based Active Contours” (the Chan & Vese code). I was wondering if you have a link where I can find an explanation or more details? If you have something, this is my e-mail:

    thanks and regards

  18. Hi Shawn, thanks for sharing the code.
    Please, can you explain why you use
    seg = phi<=0
    to create the final label map instead of
    seg = phi<=0.5
    since the range of the zero level set in phi is [-0.5,0.5]?
    Thanks in advance

  20. Hi Shawn, thanks for sharing the code.
    Please, can you explain why I can’t get it to run? I’m using R2007b. While running the demo code I get:

    Compiling sfm_chanvese_mex…
    Error: Could not detect a compiler on local system
    which can compile the specified input file(s)

    C:\PROGRA~1\MATLAB\R2007B\BIN\MEX.PL: Error: No compiler options file could be found to compile source code. Please run “mex -setup” to rectify.

    ??? Error using ==> mex at 208
    Unable to complete successfully.

    Error in ==> compile_sfm_local_chanvese at 6
    mex sfm_local_chanvese_mex.cpp llist.cpp energy3c.cpp lsops3c.cpp

    Error in ==> sfm_local_chanvese at 30

    Error in ==> sfm_local_demo at 22
    [seg] = sfm_local_chanvese(img,mask,iterations,lambda,rad);

    Thanks in advance.

  21. Can anyone tell me how to decide which part of the mask equals 0, i.e.,
    “mask(86:218,109:238) = 0”? How is the range 86:218 chosen?

    Thank you very much.

  22. Hello Shawn, can you give me a method for writing the active contour code in VHDL?

  24. Hi sir,
    First and foremost, thanks for the code. When I try to run it, I get the following errors. Please, can anybody help me out? I am using it for some medical applications, so I need help as soon as possible.
    ??? Error: File: sfm_local_chanvese_mex.m Line: 6 Column: 21
    Unexpected MATLAB operator.

    Error in ==> sfm_local_chanvese at 79
    [phi c] = sfm_local_chanvese_mex(img,mask,iterations,display,lam,rad);

    Error in ==> sfm_local_demo at 23
    [seg] = sfm_local_chanvese(img,mask,iterations,lambda,rad);

  25. Hi, first, thanks for sharing your code.
    I have a question: you use a doubly linked list in your sparse field method to evolve the curve. Can I change it to a singly linked list? If I do, will there be any problem?

  26. Nothing comes to mind as to why it wouldn’t work… I’d say try it and see how it goes! If it works and it’s faster let me know and I’ll link to your code.

  27. @Shawn Lankton
    Yes, I have changed the code to use a singly linked list (referring to your code and J. Malcolm’s code), and it works. I tested it on my data set: the running time did not decrease, but the accuracy increased. It is strange; I don’t know why this happened.

  28. @Shawn Lankton Hey, can you please tell me which papers or articles you used for this script? I just want to know which parameters I can tune in this algorithm.

  29. Sorry, I made an error in the program. The singly linked version is faster than the doubly linked one, and the accuracy is about the same…. Thanks! @Dily

  30. Hi All,

    I am looking for code for level-set-based shape prior segmentation. Does anybody know whether such code is available?


  31. Hello sir,

    I need code for lip detection, lip motion tracking, and lip segmentation. Please help me find code for this.

