
Deep Segmentation



Presentation Transcript


  1. Deep Segmentation Jianping Fan CS Dept UNC-Charlotte

  2. Deep Segmentation Networks • DeepLab v1, v2, v3 • U-Nets • Fast R-CNN • Mask R-CNN

  3. Introduction of DeepLab What is semantic image segmentation? • Partitioning an image into regions of meaningful objects. • Assign an object category label to each region.

  4. Introduction of DeepLab DCNN and image segmentation • What happens in each standard DCNN layer? • Striding • Pooling [Figure: the DCNN outputs class prediction scores for each pixel; the maximal-score class is selected per pixel.]

  5. Introduction DCNN and image segmentation • Pooling advantages: • Invariance to small translations of the input. • Helps avoid overfitting. • Computational efficiency. • Striding advantages: • Fewer applications of the filter. • Smaller output size.

  6. Introduction DCNN and image segmentation • What are the disadvantages for semantic segmentation? • Down-sampling causes loss of information. • The invariance to input translations harms pixel-perfect accuracy. • DeepLab addresses those issues by: • Atrous convolution (the 'holes' algorithm). • CRFs (Conditional Random Fields).
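To make the resolution loss concrete, here is a toy PyTorch stack (entirely my own; the layer choices are illustrative, not DeepLab's): three stride-2 operations already reduce a 224 × 224 input to 28 × 28 before any per-pixel prediction is made.

```python
import torch
import torch.nn as nn

# Toy backbone fragment: every stride-2 conv or pooling layer halves the spatial
# resolution, which is exactly the information loss the slide refers to.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
print(net(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 32, 28, 28])
```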

  7. Up-Sampling Addressing the reduced resolution problem • Possible solution: 'deconvolutional' layers (backwards convolution). • Drawbacks: additional memory and computation time, and additional parameters to learn. • Suggested solution: atrous ('holes') convolution.

  8. Atrous ('Holes') Algorithm • Remove the down-sampling from the last pooling layers. • Up-sample the original filter by a factor equal to the removed stride: introduce zeros between filter values. • Atrous convolution for a 1-D signal x with filter w of length K and rate r: y[i] = Σ_k x[i + r·k] · w[k]. • Note: standard convolution is the special case rate r = 1. Chen, Liang-Chieh, et al. "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs." arXiv preprint arXiv:1606.00915 (2016).
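A minimal NumPy sketch of that 1-D formula (the function name and the 'valid'-style output length are my choices, not from the slides); rate = 1 reproduces standard convolution with the unmodified filter.

```python
import numpy as np

def atrous_conv1d(x, w, rate=1):
    """y[i] = sum_k x[i + rate*k] * w[k]; output positions that would read past
    the end of x are dropped ('valid' output)."""
    K = len(w)
    out_len = len(x) - rate * (K - 1)
    y = np.zeros(out_len)
    for i in range(out_len):
        for k in range(K):
            y[i] += x[i + rate * k] * w[k]
    return y

x = np.arange(10, dtype=float)
w = np.array([1.0, 0.0, -1.0])
print(atrous_conv1d(x, w, rate=1))   # standard convolution (rate 1)
print(atrous_conv1d(x, w, rate=2))   # atrous convolution, field-of-view doubled
```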

  9. Atrous ('Holes') Algorithm [Figure: standard convolution vs. atrous convolution] Chen, Liang-Chieh, et al. "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs." arXiv preprint arXiv:1606.00915 (2016).

  10. Atrous ('Holes') Algorithm Filter field-of-view • Small field-of-view → accurate localization. • Large field-of-view → context assimilation. • 'Holes': introduce zeros between filter values. • The effective filter size increases (enlarging the field-of-view of the filter): a filter of size k at rate r acts as a filter of size k + (k − 1)(r − 1). • However, only the non-zero filter values are taken into account: • The number of filter parameters is the same. • The number of operations per position is the same. Chen, Liang-Chieh, et al. "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs." arXiv preprint arXiv:1606.00915 (2016).

  11. Atrous ('Holes') Algorithm [Figure: original filter applied with standard convolution vs. zero-padded (up-sampled) filter applied with atrous convolution] Chen, Liang-Chieh, et al. "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs." arXiv preprint arXiv:1606.00915 (2016).
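A small NumPy sketch of the zero-insertion view shown on slide 11 (my own illustration, with arbitrary filter values): the padded filter has the same three parameters but a larger field-of-view, and correlating with it is equivalent to sampling the input at stride r under the original filter.

```python
import numpy as np

def dilate_filter(w, rate):
    """Insert (rate - 1) zeros between the values of a 1-D filter."""
    w_up = np.zeros(rate * (len(w) - 1) + 1)
    w_up[::rate] = w
    return w_up

w = np.array([1.0, 2.0, 3.0])
rate = 2
w_up = dilate_filter(w, rate)
print(w_up)                                  # [1. 0. 2. 0. 3.]

# Equivalence check: atrous sampling of x == correlation with the padded filter.
x = np.random.rand(12)
n_out = len(x) - len(w_up) + 1
direct = [sum(x[i + rate * k] * w[k] for k in range(len(w))) for i in range(n_out)]
padded = [float(np.dot(x[i:i + len(w_up)], w_up)) for i in range(n_out)]
assert np.allclose(direct, padded)
```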

  12. Boundary recovery • DCNN trade-off: classification accuracy ↔ localization accuracy. • DCNN score maps successfully predict classification and rough position. • They are less effective for recovering the exact outline. Chen, Liang-Chieh, et al. "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs." arXiv preprint arXiv:1606.00915 (2016).

  13. Boundary recovery • Possible solution: super-pixel representation. • Suggested solution: fully connected CRFs. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "Semantic image segmentation with deep convolutional nets and fully connected CRFs," in ICLR, 2015. https://www.researchgate.net/figure/225069465_fig1_Fig-1-Images-segmented-using-SLIC-into-superpixels-of-size-64-256-and-1024-pixels

  14. Conditional Random Fields Problem statement • X - random field of input observations (images) of size N. • L = {l_1, ..., l_k} - the set of labels. • Y - random field of pixel labels, with each Y_j taking a value in L. • x_j - color vector of pixel j. • y_j - label assigned to pixel j. • CRFs are usually used to model connections between different images. • Here we use them to model connections between image pixels! P. Krahenbuhl and V. Koltun, "Efficient inference in fully connected CRFs with Gaussian edge potentials," in NIPS, 2011.

  15. Probabilistic Graphical Models • Graphical model • Factorization - a distribution over many variables represented as a product of local functions, each depending on a smaller subset of the variables. C. Sutton and A. McCallum, "An introduction to Conditional Random Fields", Foundations and Trends in Machine Learning, vol. 4, no. 4 (2011) 267-373

  16. Probabilistic Graphical Models • Undirected vs. directed models • Factor graph G = (V, F, E): variable nodes V, factor nodes F, edges E. [Figure: a directed model vs. an undirected model] C. Sutton and A. McCallum, "An introduction to Conditional Random Fields", Foundations and Trends in Machine Learning, vol. 4, no. 4 (2011) 267-373

  17. Conditional Random Fields Fully connected CRFs • Definition: P(Y = y | X) = (1 / Z(X)) · exp(−E(y | X)). • Z(X) - an input-dependent normalization factor. • Factorization (energy function): E(y) = Σ_i θ_i(y_i) + Σ_{i<j} θ_ij(y_i, y_j), with the pairwise sum ranging over all pixel pairs (fully connected). • y - the label assignment for the pixels. P. Krahenbuhl and V. Koltun, "Efficient inference in fully connected CRFs with Gaussian edge potentials," in NIPS, 2011. C. Sutton and A. McCallum, "An introduction to Conditional Random Fields", Foundations and Trends in Machine Learning, vol. 4, no. 4 (2011) 267-373
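A literal, deliberately brute-force Python sketch of that energy (the `unary`/`pairwise` interface is my own; real implementations never enumerate all pixel pairs but rely on the efficient mean-field inference of Krahenbuhl & Koltun):

```python
import numpy as np

def crf_energy(y, unary, pairwise):
    """E(y) = sum_i theta_i(y_i) + sum_{i<j} theta_ij(y_i, y_j).

    y        -- label index per pixel, shape (N,)
    unary    -- unary[i, l] = cost of assigning label l to pixel i
                (e.g. -log of the DCNN probability)
    pairwise -- callable (i, j, y_i, y_j) -> pair cost (hypothetical interface)
    """
    n = len(y)
    energy = sum(unary[i, y[i]] for i in range(n))
    energy += sum(pairwise(i, j, y[i], y[j])
                  for i in range(n) for j in range(i + 1, n))
    return energy

# Tiny usage: 4 "pixels", 2 labels, Potts-style pairwise cost of 1 for differing labels.
unary = -np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.2, 0.8]]))
potts = lambda i, j, yi, yj: 1.0 if yi != yj else 0.0
print(crf_energy(np.array([0, 0, 1, 1]), unary, potts))
```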

  18. Conditional Random Fields Potential functions in our case • Unary term: θ_i(y_i) = −log P(y_i), where P(y_i) is the label assignment probability for pixel i computed by the DCNN. • Pairwise term: a label-compatibility function times a weighted sum of Gaussian kernels over pixel features. • p_i - position of pixel i. • I_i - intensity (color) vector of pixel i. • w_1, w_2 - learned parameters (weights). • σ_α, σ_β, σ_γ - hyper-parameters (what is considered "near" / "similar"). Chen, Liang-Chieh, et al. "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs." arXiv preprint arXiv:1606.00915 (2016).

  19. Conditional Random Fields Potential functions in our case • Bilateral kernel: w_1 · exp( −‖p_i − p_j‖² / (2σ_α²) − ‖I_i − I_j‖² / (2σ_β²) ) - nearby pixels with similar color are likely to be in the same class. • The first (spatial) term measures pixel "nearness", the second term measures color similarity. • σ_α, σ_β - what is considered "near" / "similar". Chen, Liang-Chieh, et al. "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs." arXiv preprint arXiv:1606.00915 (2016).

  20. Conditional Random Fields Potential functions in our case • Spatial (smoothness) kernel: w_2 · exp( −‖p_i − p_j‖² / (2σ_γ²) ). • Label compatibility μ(y_i, y_j) = [y_i ≠ y_j] (Potts model) - a uniform penalty for nearby pixels with different labels. • Insensitive to the compatibility between particular pairs of labels! P. Krahenbuhl and V. Koltun, "Efficient inference in fully connected CRFs with Gaussian edge potentials," in NIPS, 2011.
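A NumPy sketch of the Gaussian-kernel part of the pairwise potential from slides 19-20 for a single pixel pair (it still has to be multiplied by the Potts compatibility μ); the default weights and sigmas below are placeholders of my own, not the values tuned in the paper.

```python
import numpy as np

def pairwise_kernel(p_i, p_j, I_i, I_j,
                    w1=1.0, w2=1.0,
                    sigma_alpha=80.0, sigma_beta=13.0, sigma_gamma=3.0):
    """Gaussian edge potentials: bilateral (position + color) kernel plus
    a purely spatial smoothness kernel."""
    d_pos = np.sum((p_i - p_j) ** 2)
    d_col = np.sum((I_i - I_j) ** 2)
    bilateral = w1 * np.exp(-d_pos / (2 * sigma_alpha ** 2) - d_col / (2 * sigma_beta ** 2))
    smoothness = w2 * np.exp(-d_pos / (2 * sigma_gamma ** 2))
    return bilateral + smoothness

# Nearby pixels with similar colors are strongly coupled (large kernel value)...
print(pairwise_kernel(np.array([10., 10.]), np.array([12., 11.]),
                      np.array([200., 30., 30.]), np.array([198., 33., 29.])))
# ...while distant pixels with very different colors are nearly uncoupled.
print(pairwise_kernel(np.array([10., 10.]), np.array([200., 180.]),
                      np.array([200., 30., 30.]), np.array([20., 220., 40.])))
```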

  21. Boundary recovery [Figure: DCNN score map and the corresponding CRF belief map] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "Semantic image segmentation with deep convolutional nets and fully connected CRFs," in ICLR, 2015.

  22. DeepLab • Group: CCVL (Center for Cognition, Vision, and Learning). • Basis networks (pre-trained on ImageNet): • VGG-16 (Oxford Visual Geometry Group, ILSVRC 2014 1st). • ResNet-101 (Microsoft Research Asia, ILSVRC 2015 1st). • Code: https://bitbucket.org/deeplab/deeplab-public/

  23. U-Net

  24. Simple Recipe for Classification + Localization [Diagram: Image → convolution and pooling → final conv feature map → fully-connected layers → class scores → softmax loss] (Slides 24-50 are adapted from Fei-Fei Li, Andrej Karpathy & Justin Johnson, CS231n Lecture 8, 1 Feb 2016.)

  25. Simple Recipe for Classification + Localization [Diagram: the shared conv feature map feeds two heads: a "classification head" (fully-connected layers → class scores) and a "regression head" (fully-connected layers → box coordinates)]

  26. Simple Recipe for Classification + Localization [Diagram: same two-head network; the box coordinates from the regression head are trained with an L2 loss]

  27. Simple Recipe for Classification + Localization [Diagram: the complete network, producing both class scores and box coordinates from the shared conv feature map]
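A minimal PyTorch sketch of this two-headed recipe (slides 24-27). All layer sizes and the number of classes are illustrative choices of mine; the point is the shared backbone with a softmax (cross-entropy) loss on the classification head and an L2 (MSE) loss on the regression head.

```python
import torch
import torch.nn as nn

class ClassifyAndLocalize(nn.Module):
    """Shared conv backbone with a classification head and a box-regression head."""
    def __init__(self, num_classes=20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((5, 5)),
        )
        self.cls_head = nn.Sequential(   # "classification head"
            nn.Flatten(), nn.Linear(32 * 5 * 5, 256), nn.ReLU(), nn.Linear(256, num_classes))
        self.box_head = nn.Sequential(   # "regression head" (class agnostic)
            nn.Flatten(), nn.Linear(32 * 5 * 5, 256), nn.ReLU(), nn.Linear(256, 4))

    def forward(self, x):
        feat = self.backbone(x)                           # final conv feature map
        return self.cls_head(feat), self.box_head(feat)   # class scores, box (x, y, w, h)

model = ClassifyAndLocalize()
images = torch.randn(2, 3, 128, 128)
labels = torch.tensor([3, 7])
gt_boxes = torch.rand(2, 4)
scores, boxes = model(images)
loss = nn.CrossEntropyLoss()(scores, labels) + nn.MSELoss()(boxes, gt_boxes)  # softmax + L2
loss.backward()
print(loss.item())
```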

  28. Per-class vs. class-agnostic regression [Diagram: the classification head outputs C numbers (one score per class); the regression head outputs either 4 numbers (class agnostic: one box) or C × 4 numbers (class specific: one box per class)]
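For the class-specific case, a common way to consume the C × 4 output is to keep the box belonging to the highest-scoring class; this selection step is my own illustration rather than something stated on the slide.

```python
import torch

C, batch = 20, 2
scores = torch.randn(batch, C)        # class scores from the classification head
boxes = torch.randn(batch, C, 4)      # class-specific boxes: one (x, y, w, h) per class

pred_class = scores.argmax(dim=1)                    # predicted class per image
pred_box = boxes[torch.arange(batch), pred_class]    # (batch, 4): box of that class
print(pred_class, pred_box.shape)
```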

  29. Aside: Localizing multiple objects [Diagram: to localize exactly K objects in each image (e.g. whole cat, cat head, cat left ear, cat right ear for K = 4), the regression head outputs K × 4 numbers (one box per object)]

  30. Sliding Window: Overfeat • Winner of the ILSVRC 2013 localization challenge. [Diagram: image 3 × 221 × 221 → convolution + pooling → feature map 1024 × 5 × 5 → classification head (FC 4096 → FC 4096 → class scores 1000, softmax loss) and regression head (FC 4096 → FC 1024 → boxes 1000 × 4, Euclidean loss)] Sermanet et al., "Integrated Recognition, Localization and Detection using Convolutional Networks", ICLR 2014

  31.-36. Sliding Window: Overfeat [Animation frames: the 3 × 221 × 221 network is slid over a larger 3 × 257 × 257 image, producing a classification score P(cat) at each window position]

  37. Sliding Window: Overfeat • Greedily merge the per-window boxes and scores (details in the paper) into a single detection, e.g. P(cat) = 0.8 for the 3 × 257 × 257 image.

  38. Sliding Window: Overfeat • In practice, use many sliding-window locations and multiple scales. [Figure: window positions + score maps, box regression outputs, and the final predictions] Sermanet et al., "Integrated Recognition, Localization and Detection using Convolutional Networks", ICLR 2014

  39. Efficient Sliding Window: Overfeat [Diagram: the same architecture as slide 30, with fully-connected layers: image 3 × 221 × 221 → convolution + pooling → feature map 1024 × 5 × 5 → FC heads → class scores (1000) and boxes (1000 × 4)]

  40. Efficient Sliding Window: Overfeat • Efficient sliding window by converting the fully-connected layers into convolutions. [Diagram: feature map 1024 × 5 × 5 → 5 × 5 conv → 4096 × 1 × 1 → 1 × 1 conv → 4096 × 1 × 1 → 1 × 1 conv → class scores 1000 × 1 × 1; in parallel, 5 × 5 conv → 4096 × 1 × 1 → 1 × 1 conv → 1024 × 1 × 1 → 1 × 1 conv → box coordinates (4 × 1000) × 1 × 1]
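A PyTorch sketch of the FC-to-conv conversion for the first layer of the head (sizes mirror the diagram: 1024 × 5 × 5 feature map, 4096-unit FC layer; the weights here are random rather than Overfeat's):

```python
import torch
import torch.nn as nn

# Fully-connected layer on a flattened 1024 x 5 x 5 feature map...
fc = nn.Linear(1024 * 5 * 5, 4096)

# ...and an equivalent 5 x 5 convolution: same weights, reshaped to (out, in, kH, kW).
conv = nn.Conv2d(1024, 4096, kernel_size=5)
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(4096, 1024, 5, 5))
    conv.bias.copy_(fc.bias)

feat = torch.randn(1, 1024, 5, 5)
out_fc = fc(feat.flatten(1))            # shape (1, 4096)
out_conv = conv(feat)                   # shape (1, 4096, 1, 1)
assert torch.allclose(out_fc, out_conv.flatten(1), atol=1e-4)

# The payoff: on a larger feature map the converted layer slides automatically,
# producing a spatial grid of outputs (cf. the 2 x 2 classifier output on slide 41).
print(conv(torch.randn(1, 1024, 6, 6)).shape)   # torch.Size([1, 4096, 2, 2])
```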

  41. Efficient Sliding Window: Overfeat • Training time: small image, 1 × 1 classifier output. • Test time: larger image, 2 × 2 classifier output; only the regions not already covered at the smaller size require extra computation. Sermanet et al., "Integrated Recognition, Localization and Detection using Convolutional Networks", ICLR 2014

  42. ImageNet Classification + Localization • AlexNet: localization method not published. • Overfeat: multiscale convolutional regression with box merging. • VGG: same as Overfeat, but fewer scales and locations; simpler method, gains all due to deeper features. • ResNet: different localization method (RPN) and much deeper features.

  43.-44. Computer Vision Tasks [Figure: classification, classification + localization, object detection, instance segmentation]

  45. Detection as Regression? • Four objects: DOG (x, y, w, h), CAT (x, y, w, h), CAT (x, y, w, h), DUCK (x, y, w, h) = 16 numbers.

  46. Detection as Regression? • Two objects: DOG (x, y, w, h), CAT (x, y, w, h) = 8 numbers.

  47. Detection as Regression? • Many objects: CAT (x, y, w, h), CAT (x, y, w, h), …, CAT (x, y, w, h) = many numbers. • Need variable-sized outputs.

  48.-50. Detection as Classification [Animation frames: slide a window over the image and run a classifier at each position. Background windows: CAT? NO, DOG? NO. Window over the cat: CAT? YES!, DOG? NO.]
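A brute-force Python sketch of "detection as classification" (the window size, stride, threshold, and the classifier interface are all placeholder choices of mine): classify every window and keep the confident ones. This explicit loop is exactly the cost that the FC-to-conv trick of slide 40 amortizes.

```python
import torch

def detect_by_classification(image, classifier, window=221, stride=32, threshold=0.5):
    """Slide a fixed-size window over `image` (shape (3, H, W)), classify each crop,
    and keep windows whose top class probability exceeds `threshold`.
    `classifier` maps a (1, 3, window, window) crop to class logits (assumed interface)."""
    _, H, W = image.shape
    detections = []
    for y in range(0, H - window + 1, stride):
        for x in range(0, W - window + 1, stride):
            crop = image[:, y:y + window, x:x + window]
            probs = classifier(crop.unsqueeze(0)).softmax(dim=1)[0]
            score, cls = probs.max(dim=0)
            if score.item() > threshold:
                detections.append((cls.item(), score.item(), (x, y, window, window)))
    return detections

# Toy usage with a random "classifier" (a real one would be a trained CNN).
dummy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 221 * 221, 10))
img = torch.randn(3, 300, 300)
print(len(detect_by_classification(img, dummy)))
```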
