
Convolutional Decision Trees



  1. Convolutional Decision Trees Dmitry Laptev, Joachim M. Buhmann Machine Learning Lab, ETH Zurich

  2. Connectomics* * Reconstruction and study of the Connectome: a map of neuron connections

  3. Connectomics • Hard to automate • On a whole-brain scale automation is simply necessary • Existing high-quality techniques: • require huge training sets, • are very slow to train, • are infeasible on a single CPU • Convolutional Decision Trees: • trade off quality against speed • make Connectomics fast

  4. Other Computer Vision tasks Segmentation tasks usually start with computing per-pixel probabilities

  5. General pipeline • Φ → Random Forest → Graph Cuts • Φ is a set of features • The problem usually lies here, in the choice of Φ
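
As a rough illustration of this pipeline, the sketch below computes a toy feature set Φ, trains a random forest on per-pixel labels, and outputs per-pixel probabilities; the graph-cut step is only indicated in a comment. All data, names, and parameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def compute_features(image):
    """Phi: a small stack of generic per-pixel feature maps."""
    feats = [
        image,
        ndimage.gaussian_filter(image, sigma=2.0),
        ndimage.sobel(image, axis=0),
        ndimage.sobel(image, axis=1),
    ]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(0)
train_img = rng.random((64, 64))            # stand-in for a training image
train_lbl = (train_img > 0.5).astype(int)   # stand-in for ground-truth labels
rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(compute_features(train_img), train_lbl.ravel())

test_img = rng.random((64, 64))
proba = rf.predict_proba(compute_features(test_img))[:, 1].reshape(64, 64)
# 'proba' would then be fed to a graph-cut solver for the final segmentation
```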

  6. Static predefined features • Generic features: • HoG features, • Gaussian blur, • Sobel filters, • SIFT features • Domain-specifically designed: • Line Filter Transform for blood vessel segmentation, • Context Cue features for synapse detection
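
The sketch below shows how a bank of such generic features could be computed per pixel from multi-scale Gaussian blurs and Sobel filters; the exact filters and scales used in the experiments are not stated on the slide, so this combination is an assumption.

```python
import numpy as np
from scipy import ndimage

def static_feature_bank(image, sigmas=(1.0, 2.0, 4.0)):
    """Static feature bank: Gaussian blurs at several scales plus
    Sobel edge responses on each blurred image."""
    maps = []
    for s in sigmas:
        blurred = ndimage.gaussian_filter(image, sigma=s)
        maps.append(blurred)                          # Gaussian blur at scale s
        maps.append(ndimage.sobel(blurred, axis=0))   # vertical edge response
        maps.append(ndimage.sobel(blurred, axis=1))   # horizontal edge response
    return np.stack(maps, axis=-1)  # shape: H x W x (3 * len(sigmas))
```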

  7. Feature learning • Unsupervised feature learning: • Bag of Visual Words, • Sparse Coding, • Autoencoders • + no feature design, + data-specific features, – not task-specific • Supervised feature learning: • Sparse coding (quadratic), • KernelBoost, • Convolutional Neural Networks (CNN), • Convolutional Decision Trees • + task-specific features, – either a restricted feature class or very slow to train

  8. Convolutional Decision Trees • Learn informative features one by one • Each tree split is a convolution with some kernel • Maximize the relaxed information gain • Introduce smoothness regularization • Combine these features to form a decision tree • Grow the tree while the performance increases • Adjust the regularization while growing

  9. Relaxed information gain • Oblique decision tree notation: • $x$: a vectorized image patch • $w$: a vectorized convolution kernel • Split predicate: $\psi(x) = [w^\top x > 0]$ • Relaxed split predicate: $\tilde{\psi}(x) = \sigma(\alpha\, w^\top x) = \frac{1}{1 + e^{-\alpha w^\top x}}$
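
A tiny numeric sketch of these two predicates, assuming the relaxation is the logistic sigmoid with a sharpness parameter alpha (consistent with the α := 2·α step on slide 14); all names and values are illustrative:

```python
import numpy as np

def hard_split(w, x):
    """Oblique split: route a vectorized patch x by the sign of w.T @ x."""
    return float(w @ x > 0.0)

def relaxed_split(w, x, alpha=1.0):
    """Smooth surrogate: logistic sigmoid of the convolution response.
    As alpha grows, this approaches the hard 0/1 split."""
    return 1.0 / (1.0 + np.exp(-alpha * (w @ x)))

w = np.array([0.5, -0.25, 0.1])   # toy "kernel"
x = np.array([1.0, 2.0, 3.0])     # toy "patch"
print(hard_split(w, x), relaxed_split(w, x, alpha=4.0))
```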

  10. Relaxed information gain • The notation of Information Gain is almost the same... • ... but now smooth!
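
One way to make this concrete: in the sketch below, the hard patch counts of the usual information gain are replaced by the soft split probabilities from the previous sketch, so the gain becomes differentiable in $w$. The exact normalisation is an assumption, reconstructed rather than copied from the paper.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a (possibly soft) class distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def relaxed_information_gain(w, X, y, alpha=1.0, n_classes=2):
    """Soft information gain over patches X (rows) with labels y."""
    s = 1.0 / (1.0 + np.exp(-alpha * (X @ w)))      # soft child memberships
    gain = entropy(np.bincount(y, minlength=n_classes) / len(y))
    for side in (s, 1.0 - s):                       # soft "left"/"right" children
        mass = side.sum()
        # soft class histogram inside this child
        hist = np.array([side[y == c].sum() for c in range(n_classes)])
        gain -= (mass / len(y)) * entropy(hist / max(mass, 1e-12))
    return gain
```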

  11. Relaxed information gain (figure)

  12. Regularized information gain • Smoothness term Γ... • ...makes the functional "more convex"
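
One plausible form of such a smoothness term, assuming Γ penalises squared differences between neighbouring weights of the k×k convolution kernel (a quadratic, hence convex, graph-Laplacian-style penalty); the paper's exact definition may differ:

```python
import numpy as np

def smoothness_penalty(w, k):
    """Gamma(w): squared differences between 4-neighbours of the k x k kernel."""
    kernel = w.reshape(k, k)
    dx = np.diff(kernel, axis=0)   # differences between vertically adjacent weights
    dy = np.diff(kernel, axis=1)   # differences between horizontally adjacent weights
    return np.sum(dx ** 2) + np.sum(dy ** 2)
```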

  13. Regularized information gain (figure)

  14. Learning one informative feature (CDT) • Repeat: optimize the kernel w with L-BFGS for the current α • Then sharpen the relaxation: α := 2·α • * we refer to the paper for all the details
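
A sketch of this loop, reusing relaxed_information_gain and smoothness_penalty from the sketches above: optimise w with L-BFGS at a fixed sharpness α, then double α so the smooth split gradually hardens. The initialisation, number of rounds, and the weight lam are assumptions; see the paper for the exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def learn_one_feature(X, y, k, lam=0.1, alpha0=1.0, n_rounds=6):
    """Learn one convolutional split kernel w of size k x k (flattened)."""
    w = np.random.randn(k * k) * 0.01
    alpha = alpha0
    for _ in range(n_rounds):
        # L-BFGS minimises, so negate the objective we want to maximise:
        # relaxed gain minus the smoothness penalty on the kernel
        res = minimize(
            lambda w_: -(relaxed_information_gain(w_, X, y, alpha)
                         - lam * smoothness_penalty(w_, k)),
            w, method="L-BFGS-B")
        w = res.x
        alpha *= 2.0   # alpha := 2 * alpha, as on the slide
    return w
```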

  15. Examples of learned features • Left: static generic features; right: features learned with CDT • Features are interpretable: there are edge detectors, curvature detectors, etc.

  16. Results: Weizmann Horse dataset • (a): ground truth; (b)–(e): results for different tree depths • (top): per-pixel probabilities; (bottom): graph-cut segmentation

  17. Results: Weizmann Horse dataset CDT is better than any general local technique, but worse than task-specific methods

  18. Results: Drosophila VNC segmentation From left to right: results of anisotropic RF, CDT and CNN

  19. Results: Drosophila VNC segmentation • CDT is 4.5% better than the second-best technique and only 2.2% worse than CNN • CDT requires 10 hours of training on one CPU, while CNN requires a week on a GPU cluster (roughly a year on a single CPU)

  20. Summary • Convolutional Decision Trees: • Much faster (an overnight experiment) • Require no special hardware • Trade off speed against quality • Consist of three main components: • Relaxed information gain • Strictly convex regularization term • Iterative optimization algorithm

  21. Thanks for your attention! Questions / ideas are welcome. Dmitry Laptev
