
Advanced Image Processing Techniques for Physics Studies

T. Craciunescu and A. Murari, with contributions from G. Kocsis, P. Lang, I. Tiseanu, J. Vega and JET EFDA Contributors*

Presentation Transcript


  1. Advanced Image Processing Techniques for Physics Studies. T. Craciunescu and A. Murari, with contributions from G. Kocsis, P. Lang, I. Tiseanu, J. Vega and JET EFDA Contributors* (*See the Appendix of F. Romanelli et al., Fusion Energy Conference 2008, Proc. 22nd Int. FEC Geneva, 2008, IAEA, 2008). Workshop on Fusion Data Processing, Validation and Analysis, ENEA-Frascati (26-28 March 2012)

  2. Optical flow - extraction of advanced information for control and physics studies:
  • Implementation (CLG, MPEG)
  • Application to pellet and instability tracking
  Automatic instability detection:
  • Phase congruency image classification
  • Sparse image representation for disruption prediction
  • Interest points and local features for image identification

  3. Optical flow The aim is to find the vector field that describes how the image changes with time between frame t and frame (t+1). Basic assumptions:
  • the grey values of image objects do not change over time in subsequent frames
  • small displacements between consecutive frames
  Ill-posed problem:
  • small perturbations in the signal can create large fluctuations in its derivatives
  • underdetermined set of equations
  [Illustration: optical flow between frame t and frame (t+1), Etlinger Tor sequence]
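Under these two assumptions the grey-value constancy can be written and linearised in the standard way (notation added here for clarity, not taken from the slide):

I(x + u, y + v, t + 1) = I(x, y, t)

and a first-order Taylor expansion for small displacements (u, v) gives the optical flow constraint equation

I_x u + I_y v + I_t = 0,

i.e. one equation in two unknowns per pixel, which is exactly why the system is underdetermined and additional assumptions, such as the smoothness term of the CLG method on the next slide, are required.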

  4. Combined local-global (CLG) method Assumes that the unknown optical flow vector is constant within some neighbourhood of size ρ and incorporates a global smoothness assumption for the estimated flow field.
  • Larger values of α penalise large flow gradients more strongly and lead to smoother flow fields.
  • A sufficiently large value of ρ is very successful in rendering the method robust under noise.
  • In flat regions where the image gradient vanishes, the problem becomes underdetermined again: at locations with |∇f| ≈ 0 no reliable local flow estimate is possible, but the regulariser |∇u|² + |∇v|² fills in information from the neighbourhood (the filling-in effect).
  • Coarse-to-fine multi-resolution approach.
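For reference, a standard statement of the CLG energy functional (following Bruhn, Weickert and Schnörr; not reproduced from the slide) combines the Lucas-Kanade local term with the Horn-Schunck regulariser:

E(u, v) = \int_\Omega \left[ w^\top J_\rho(\nabla_3 f)\, w + \alpha \left( |\nabla u|^2 + |\nabla v|^2 \right) \right] dx\, dy

with w = (u, v, 1)^\top, \nabla_3 f = (f_x, f_y, f_t)^\top and J_\rho = K_\rho * (\nabla_3 f\, \nabla_3 f^\top), i.e. the motion tensor smoothed with a Gaussian of standard deviation ρ; α is the weight of the smoothness term discussed above.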

  5. Predictive understanding of the underlying processes of the pellet-plasma interaction • Recent investigations revealed that pellet ablation is a complex 3D process taking place on the μs timescale • pellet cloud dynamics (expansion, instabilities and drifts) • Analysis of pellet cloud dynamics and drifts by observing the visible radiation with fast framing cameras and by applying image processing algorithms • Experiments with sophisticated diagnostic settings performed during the 2011 AUG campaign* (*detailed results will be presented at the EPS conference, G. Kocsis et al.)

  6. Determination of ice extrusion velocity by the optical flow method Illustration of optical flow calculations: image sequences provided by a CCD camera viewing the ice at the exit of the nozzles of the extrusion cryostat, showing the extruded deuterium ice for JET pulse #76379. Line profiles through the images and their reconstruction (bottom).

  7. Statistical redundancies in both temporal and spatial directions: MPEG-2 compressed space Encoding is implemented on 16×16 macroblocks (MB16); motion is represented by a field of motion vectors (MV), one MV per macroblock, under a simple translational motion model between consecutive frames that exploits inter-pixel correlation.
  • (I) Intra frames - coded using only information present in the picture itself, by the discrete cosine transform (DCT) applied at the level of 8×8 blocks (MB8). The DCT concentrates the energy into the low-frequency coefficients (spatial redundancy); the low-value coefficients are neglected and the high-frequency coefficients are more coarsely quantised than the low-frequency ones.
  • (P) Predicted frames - coded with forward motion compensation, using the nearest previous reference (I or P) image.
  • (B) Bi-directional frames - also motion compensated, this time with respect to both past and future reference frames.
  • The parts of the image that do not change significantly are simply copied from other areas or other frames; for the remaining parts, the best matching block is searched for each MB16 in the reference frame(s) (a minimal block-matching sketch is given below).
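To make the "best matching block" step concrete, here is an illustrative full-search block-matching sketch in Python/NumPy (function names and parameters are my own assumptions, not JET or MPEG reference code; real encoders use much faster search strategies and sub-pixel refinement):

```python
import numpy as np

def best_match(ref, cur, top, left, block=16, search=7):
    """Full-search block matching: find the motion vector (dx, dy) that
    minimises the sum of absolute differences (SAD) between one macroblock
    of the current frame and candidate blocks in the reference frame."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```

Looping such a search over all 16×16 macroblocks of a frame yields the motion-vector field that the next slide re-uses as a crude initial optical flow estimate.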

  8. The MV field is used:
  • as a crude initial estimate for optical flow recovery
  • for image segmentation
  A confidence measure ensures that the MV field is meaningful. Assumption: areas with strong edges exhibit better correlation with real motion than textureless ones.
  • weighted averages of the image gradients can be expressed using DCT coefficients
  • eigenvalue decomposition: the size of each eigenvalue is a measure of the uncertainty in the direction of the corresponding eigenvector (the stronger the eigenvalue, the lower the error variance)
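A pixel-domain illustration of that eigenvalue-based confidence measure (the slide obtains the gradient averages directly from DCT coefficients; this NumPy sketch, with hypothetical names, uses the image gradients instead):

```python
import numpy as np

def mv_confidence(gray, top, left, block=16):
    """Structure-tensor confidence of a macroblock's motion vector: sum the
    outer products of the image gradients over the block and inspect the
    eigenvalues.  A textured block with strong edges gives large eigenvalues
    (reliable MV); a textureless block gives near-zero ones (unreliable MV)."""
    patch = gray[top:top + block, left:left + block].astype(np.float64)
    gy, gx = np.gradient(patch)
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    evals = np.linalg.eigvalsh(J)   # ascending order: [lambda_min, lambda_max]
    return evals[0]                 # threshold this to keep only meaningful MVs
```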

  9. Computing time and error estimation
  • Peak signal-to-noise ratio (PSNR) of the residual image: > 14 dB
  • Difference between the speeds of the different pellets in the same ribbon structure: below 12.5%
  • Total optical flow computation time: ~16.4 ms
  • Image acquisition framing rate: 50 Hz
  • Optical flow evaluation can be engrafted into MPEG compression routines for real-time estimation of the speed of moving objects.

  10. Tracking of plasma instabilities MARFEs can reduce confinement and lead to harmful disruptions → a risk for the integrity of the devices. MARFEs produce a significant increase in impurity radiation → a clear signature in the video data.

  11. Phase congruency Visually discernible features coincide with those points where the Fourier waves, at different frequencies, have congruent phases; highly informative features are extracted at points of high PC. Mach bands example: black - measured luminance, red - brightness as perceived; lateral inhibition vs. phase congruency; construction of PC from the Fourier components. M.C. Morrone et al., Mach bands are phase dependent, Nature 324 (1986) 250.
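A commonly used local-energy form of phase congruency (standard notation, added here for clarity rather than taken from the slide) is

PC(x) = \frac{|E(x)|}{\sum_n A_n(x) + \varepsilon}, \qquad |E(x)| = \sqrt{F^2(x) + G^2(x)}

where the A_n are the amplitudes of the Fourier (or wavelet) components, F and G are the summed responses of the even- and odd-symmetric filters introduced on the next slide, and ε avoids division by zero; PC is close to 1 where all components are in phase and drops towards 0 where their phases are scattered.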

  12. F and G are approximated by convolving the signal with a quadrature pair of filters (linear-phase filters, to preserve phase information): symmetric/antisymmetric quadrature pairs of non-orthogonal wavelets, i.e. Gabor filters with different frequencies and orientations. Gabor filters are an appropriate choice for constructing the symmetric/antisymmetric quadrature pairs. S.N. Prasad, J. Domke, Gabor Filter Visualization, Technical Report, University of Maryland (2005). [Figure: response of a vertically oriented Gabor filter]
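A minimal NumPy construction of such a symmetric/antisymmetric (even/odd) Gabor quadrature pair, with parameter names that are my own choice rather than taken from the slide:

```python
import numpy as np

def gabor_quadrature_pair(size=21, wavelength=8.0, sigma=4.0, theta=0.0):
    """Even (cosine) and odd (sine) Gabor filters forming a quadrature pair.
    theta is the orientation in radians, wavelength the carrier period in
    pixels, sigma the width of the Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # coordinate along the direction of oscillation
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    even = envelope * np.cos(2.0 * np.pi * x_t / wavelength)  # symmetric part
    odd = envelope * np.sin(2.0 * np.pi * x_t / wavelength)   # antisymmetric part
    return even, odd

# F and G are then obtained by convolving the image with `even` and `odd`
# (e.g. with scipy.ndimage.convolve) for each orientation and frequency,
# and the responses are combined into the phase congruency map.
```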

  13. All the orientations are combined into a SIM map, pooled into a single similarity score. • 96.2% of the events were correctly interpreted; among the misclassified events, 0.03% were false positives and 3.5% false negatives.

  14. Sparse learned representations of video images
  • Sparse image representation: each patch is written as a column vector; since images are usually large, the decomposition is implemented on overlapping patches instead of whole images. Sparsity is measured by counting the number of non-zero elements in the coefficient vector.
  • The dictionary D can be fixed and general (DCT, wavelet) or adapted to suit the application domain; learning both D and a in an efficient way has been the focus of much recent published work.
  • Here D is initialised from random patches of natural images and then learned adaptively from the data such that the decomposition is sparse.
  • Matching pursuit algorithm (see the sketch below): first find the one atom of D that has the biggest inner product with the signal, then subtract the contribution due to that atom, and repeat the process until the signal is satisfactorily decomposed.
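The greedy loop described in the last bullet can be written in a few lines of Python/NumPy (an illustrative sketch, assuming unit-norm dictionary atoms stored as columns; names are hypothetical):

```python
import numpy as np

def matching_pursuit(x, D, n_atoms=10, tol=1e-6):
    """Matching pursuit: decompose patch x (vector of length m) over a
    dictionary D (m x K, unit-norm atoms as columns) so that x ~= D @ a
    with only a few non-zero entries in a."""
    residual = x.astype(np.float64).copy()
    a = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual                    # inner products with every atom
        k = np.argmax(np.abs(corr))              # atom with the biggest inner product
        a[k] += corr[k]
        residual = residual - corr[k] * D[:, k]  # subtract its contribution
        if np.linalg.norm(residual) < tol:       # stop once the patch is well explained
            break
    return a
```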

  15. Decomposition error as a discriminative operator for classification:
  • N different classes Si of signals
  • learn separate dictionaries, one per class
  • a signal belonging to one class is reconstructed poorly by a dictionary corresponding to another class
  • classification is performed using the residual reconstruction errors of a signal with respect to the dictionary of each class.
  Limited results: ~85% success classification rate.
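Written out, the decision rule sketched above (standard sparse-coding notation, not taken verbatim from the slide) is

\hat{a}_i = \arg\min_a \|x - D_i a\|_2^2 \quad \text{s.t.} \quad \|a\|_0 \le T, \qquad \text{class}(x) = \arg\min_i \|x - D_i \hat{a}_i\|_2^2

i.e. the test signal x is sparsely coded with each class dictionary D_i and assigned to the class whose dictionary reconstructs it with the smallest residual.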

  16. Learning discriminative dictionaries: 'good' for one class, 'bad' for the other, by incorporating discriminative components.
  • Dictionary incoherence term: encourages dictionaries associated with different classes to be as independent as possible, while still allowing different classes to share features.
  • Atoms representing features common to all classes tend to appear repeated almost exactly in the dictionaries of different classes, producing falsely similar reconstruction (decomposition) errors; this is detected by inspecting the inner products of dictionary atoms, with a threshold controlling the shared atoms.
  • Further tuning of the adjustable parameters - mainly the size of the patches; a multiscale framework to capture first the global appearance of objects.
  Improved (preliminary) results: ~92% success classification rate.
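One common form of such an incoherence penalty, added to the sum of per-class reconstruction errors (an assumed formulation; the slide does not give the exact expression), is

Q(D_1, \ldots, D_N) = \sum_{i \ne j} \| D_i^\top D_j \|_F^2

which is small when the atoms of different class dictionaries are close to mutually orthogonal, i.e. when the dictionaries are incoherent, while individual shared atoms can still survive if they reduce the reconstruction error enough.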

  17. Image retrieval • Bag-of-words model - representation of a 'sentence' as an unordered collection of words, disregarding grammar and even word order. • Transforms an image into a large collection of feature vectors invariant to: image translation, scaling, rotation, illumination changes and local geometric distortion. • Algorithms to detect and describe local features in images: MSER (Maximally Stable Extremal Regions), based on extremal regions.

  18. Component tree • Rooted, connected tree constructed by successive thresholdings, taking into account hierarchical image inclusion • Maximally stable regions are those regions which have approximately the same size across 2Δ neighbouring threshold images • Features calculated: mean grey value, region size, centre of mass, width, dimensions of the bounding box (weights for the features can be used to adapt to different kinds of input data) • Matching criterion: smallest Euclidean distance between feature vectors (see the sketch below)
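As an illustration of the MSER detection and feature-vector matching steps, a short sketch using OpenCV's MSER detector (default parameters; the feature list and the unweighted Euclidean matching are simplified assumptions, not the exact JET implementation):

```python
import cv2
import numpy as np

def mser_features(gray):
    """Detect maximally stable extremal regions in a uint8 grey-level image
    and describe each region by: mean grey value, region size, centre of
    mass and bounding-box width/height."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    feats = []
    for pts in regions:                      # pts: N x 2 array of (x, y) pixels
        xs, ys = pts[:, 0], pts[:, 1]
        vals = gray[ys, xs]
        feats.append([vals.mean(), len(pts), xs.mean(), ys.mean(),
                      xs.max() - xs.min() + 1, ys.max() - ys.min() + 1])
    return np.asarray(feats, dtype=np.float64)

def match_regions(f_query, f_ref):
    """For each query region, return the index of the reference region with
    the smallest Euclidean distance between feature vectors (per-feature
    weights could be added here to adapt to different kinds of input data)."""
    d = np.linalg.norm(f_query[:, None, :] - f_ref[None, :, :], axis=2)
    return d.argmin(axis=1)
```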

  19. MSER example: image identification using maximally stable extremal regions. [Figures: reference image and various MARFE images]

  20. Conclusions • Optical flow method for the study of several fusion-plasma-relevant issues, able to provide the real velocity of objects moving close to structures. • MPEG motion segmentation - a key contrivance that allows very fast optical flow estimation. • Application to pellet injection and pellet dynamics. • Phase congruency as a highly localised operator for automatic MARFE identification with a good prediction rate. • Sparse image representation for disruption prediction: encouraging preliminary results; improvements expected mainly from a more efficient definition and implementation of the discriminative operator. • Image retrieval by detection of local image features.
