
Automated Image Registration for the Future

Warren J. Hack

Space Telescope Science Institute


Image registration

Image Registration

  • Image registration represents one of the primary problems facing new archives of astronomical data, especially in the era of VO operations.

    • Being able to accurately combine data or even just co-align data from different sources or times can greatly expand the information available for analysis.

    • Many techniques available to date either require significant manual intervention to achieve good registration, work only on limited types of observations (point source vs extended), or both.

  • Image registration, in this context, refers simply to image-to-image alignment (i.e., relative astrometry, not absolute).

    • Absolute astrometry can be ensured afterwards

ADASS 2007



Absolute Astrometry: an aside

  • As seen, the astrometry.net web site provides an easy-to-use interface for determining the absolute astrometry for a given image.

    • Only limited by the available astrometric standards in the sky

    • Astrometric accuracies are still not always good enough for high-resolution imaging, such as Hubble Space Telescope (HST) imaging

    • High-resolution and/or narrow-field imaging, such as HST Advanced Camera for Surveys High Resolution Camera (ACS/HRC) images, fall in between most currently available standards

    • Work is on-going to provide a finer grid of standards to support the increasing amount of high-resolution imaging


Scope of the Problem

Courtesy: HLA (http://hla.stsci.edu)

Courtesy: Daniel Durand (CADC)

“Walking” and “Deep” Super-association footprints

  • The advent of large-scale archives of astronomical images fuels the development of virtual observatory capabilities.

    • How do you combine all the data taken at one observatory into the deepest, most complete set of products possible?

    • This cannot be done manually given the volume of available data.

  • Footprint services illustrate how multiple images could be combined to provide an expanded view of a region.

“Mosaic” Super-association footprint


Scope of the Problem

  • Tools for comparing results across telescopes provide new opportunities for scientific discovery.

    • How can you take data from one source and align it automatically with data from another source given possibly dramatically different resolutions, fields of view and astrometric uncertainties?

  • These issues represent the basic problems that require automated image registration as part of the solution.


Algorithm Research

  • A literature search in ADS for all papers published in 2004 and later turned up:

    • 2 papers describing any type of image registration algorithm development for use with deep-sky astronomical imaging

    • 5 papers describing image registration algorithm development for use with planetary or geo-science observations

  • In addition, dozens of papers on image registration techniques were found in the medical imaging, Earth-observing geoscience, and computer vision literature.

  • The burning question: can any of those algorithms be used for astronomy?


Image Registration Methods

  • A recent survey of image registration methods was published by B. Zitová and J. Flusser (2003, Image and Vision Computing, 21, 977).

    • This paper provides a good reference for anyone involved in image registration, regardless of field: medical imaging, remote sensing, computer vision… or astronomy.

  • They recognized two basic categories of algorithms:

    • Area based methods: cross-correlation, maximum likelihood, …

    • Feature based methods: SExtractor, daofind, …


Current techniques in astronomy

  • Relatively few techniques have been developed and widely accepted for image registration in astronomy, with each tuned to work best for a different set of observations; primarily:

    • Cross-correlation (area-based method)

    • Catalog Matching (feature-based method)

    • Multi-resolution matching (generally used with feature-based methods)

  • Each technique implements a single type of method, area-based or feature-based, with each type having its own strengths as well as weaknesses.


Current techniques in astronomy

Cross-correlation: This technique works best for aligning images containing primarily indistinct objects such as nebulae, galaxies, unresolved clusters and so on.

  • Examples: IRAF’s crosscorr

  • Disadvantages:

    • Does not work efficiently on images which are rotated relative to each other

    • Can be very memory intensive

    • Can be fooled by variability of objects from one exposure to the next
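Despite these drawbacks, the core of the area-based approach is compact. The following is a minimal NumPy sketch of FFT-based cross-correlation standing in for tools like IRAF's crosscorr; the function name and the synthetic test data are invented for illustration:

```python
import numpy as np

def cross_correlate_shift(ref, img):
    """Estimate the integer-pixel shift of `img` relative to `ref`
    by locating the peak of their FFT-based cross-correlation."""
    # Correlation theorem: corr = IFFT(conj(FFT(ref)) * FFT(img))
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the half-size back to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic test: a Gaussian blob shifted by (+3, -5) pixels
y, x = np.mgrid[0:64, 0:64]
blob = lambda cy, cx: np.exp(-((y - cy)**2 + (x - cx)**2) / 8.0)
ref = blob(32, 32)
img = blob(32 + 3, 32 - 5)
print(cross_correlate_shift(ref, img))  # (3, -5)
```

Note that this sketch recovers only a pure translation; as listed above, a rotation between the exposures defeats it.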


Current techniques in astronomy

Catalog matching: This technique was developed to provide highly accurate alignment by matching cross-identified source positions between exposures. Source positions are determined for each object using any of a number of techniques, such as PSF fitting for stars and/or isophote fitting for extended sources.

  • Examples: SExtractor or daofind with IRAF’s xyxymatch

  • Disadvantages:

    • Will not work for extended sources that fill the field of view

    • Can be fooled by large offsets, rotations, cosmic-rays, and variability of the sources

    • Accuracy relies on source position determination accuracy, both in terms of image position and cross-identification of same source from one image to the next

    • Manual tuning of many parameters is often necessary to obtain the best identification of sources, with differences depending on the types of sources in the image, as experienced by CADC in trying to align data for their ‘super-associations’.
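The cross-identification step of catalog matching can be sketched with a k-d tree nearest-neighbour search. This stands in for tools like xyxymatch and assumes the large offsets and rotations listed above have already been removed; the function name and matching tolerance are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_catalogs(xy_ref, xy_img, tol=2.0):
    """Cross-identify sources between two (N, 2) pixel-position lists
    by nearest-neighbour search within a matching radius `tol` (pixels).
    Returns (i_ref, i_img) index pairs for the matched sources."""
    tree = cKDTree(xy_ref)
    # Unmatched entries come back with dist = inf
    dist, idx = tree.query(xy_img, distance_upper_bound=tol)
    matched = np.flatnonzero(np.isfinite(dist))
    return [(int(idx[j]), int(j)) for j in matched]

# Synthetic catalogs: the second is the first shifted by (0.4, -0.3) px,
# plus one spurious detection (e.g. a cosmic ray) far outside the field
rng = np.random.default_rng(42)
cat1 = rng.uniform(0, 100, size=(20, 2))
cat2 = np.vstack([cat1 + [0.4, -0.3], [[500.0, 500.0]]])
pairs = match_catalogs(cat1, cat2)
print(len(pairs))  # 20 -- the spurious source finds no counterpart
```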


Current techniques in astronomy

Multi-resolution matching (wavelets): Wavelets provide a means of sampling each exposure at different resolutions to identify and later match sources, as demonstrated and described by B. Vandame (2002, Astronomical Data Analysis II, Proc. SPIE, 4847, 123).

  • Examples: ESO Imaging Survey (EIS) pipeline (Vandame, 2002)

  • Disadvantages:

    • Cosmic-rays are difficult (if not impossible) to detect and ignore in each exposure without considerable pre-processing, and wavelets actually enhance their signatures, confusing the matching process

    • Accuracy when working on large extended sources remains uncertain (how do you find the center of a large, indistinct blob?)


Looking for something new

  • The commonly accepted techniques all fail to provide the generality needed for wide-spread automated use.

  • Zitová and Flusser describe many techniques used by other fields. Determining what would be useful or sufficiently different represents the initial step to identifying a possible solution. They summarize that:

    • “Feature based matching methods are typically applied when the local structural information is more significant than the information carried by the image intensities.” This applies most directly to point sources, whose positions don’t change while their intensities may differ.

    • “Area-based methods are preferably applied when the images have not many prominent details and the distinctive information is provided by the graylevels/colors rather than by local shapes and structure.” Large extended sources like the Orion Nebula would require area-based techniques.

  • The solution must lie in finding a way to combine both types of methods, something which Zitová and Flusser recognized with their summary: “In the future, the idea of an ultimate registration method…will be based on the combination of various approaches, looking for consensus of particular results.”


An integrated approach

  • Handling all types of astronomical images requires feature-based and area-based methods working in concert, which no single current technique provides.

  • Research of possible techniques from other fields led to the development of the Multi-resolution Image Registration Algorithm (MIRA) for initial use with HST images.

  • MIRA relies on a combination of techniques derived from

    • astronomical image analysis

    • Earth observation satellite image analysis as performed in the geosciences.

  • The algorithm relies on:

    • multi-resolution analysis for preparing the images to take advantage of the area-based information in each image (Vandame, B., 2002, in Astronomical Data Analysis II, Proc. SPIE, 4847, 123)

    • area-based algorithms for extracting the sources (Huertas, A., & Medioni, G., 1986, IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-8(5), 651)

    • a feature-based registration algorithm for identifying the sources (Dai, X., & Khorram, S., 1999, IEEE Trans. Geoscience and Remote Sensing, 37, 2351)


MIRA Algorithm Overview

Computation of the shifts for a set of images involves the following steps:

  • Determine the initial relative astrometry by reading the WCS values for the input exposures and removing distortion with PyDrizzle (Hack, W.J., 2002, ADASS XI, ASP Conf. Ser., 281, 197).

  • Generate multi-resolution views of each input image.

  • Detect and quantify sources in each input image using the multi-resolution views: edge detection for source identification, chain-codes and invariant moments for quantification of the sources.

  • Compute distance matrices based on chain-codes, invariant moments and center-of-gravity (mean position) at the lowest resolution for all sources between the reference image and each subsequent image.

  • Select matching pairs which meet the matching criteria in all three matrices.

  • Perform a fit for shift, and optionally rotation and scale, using the selected pairs.

  • Iterate on the solution at successively higher resolutions to refine the fit.
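The distance-matrix and selection steps can be sketched as follows. This is an illustrative reconstruction, not the MIRA source: the function name, the tolerances, and the mutual-nearest-neighbour tie-break are assumptions.

```python
import numpy as np

def select_matches(d_chain, d_moments, d_position, tol=(0.2, 0.1, 5.0)):
    """Given three (n_ref, n_img) distance matrices -- chain-code,
    invariant-moment and center-of-gravity distances -- keep only the
    source pairs that fall below a tolerance in ALL three matrices and
    are mutual nearest neighbours in the combined distance."""
    ok = (d_chain < tol[0]) & (d_moments < tol[1]) & (d_position < tol[2])
    combined = d_chain / tol[0] + d_moments / tol[1] + d_position / tol[2]
    combined = np.where(ok, combined, np.inf)
    pairs = []
    for i in range(combined.shape[0]):
        j = np.argmin(combined[i])
        # Mutual nearest-neighbour check guards against double matches
        if np.isfinite(combined[i, j]) and np.argmin(combined[:, j]) == i:
            pairs.append((i, int(j)))
    return pairs

# Three toy matrices with obvious matches along the diagonal
n = 3
d_chain = np.full((n, n), 1.0);     np.fill_diagonal(d_chain, 0.01)
d_moments = np.full((n, n), 0.5);   np.fill_diagonal(d_moments, 0.001)
d_position = np.full((n, n), 20.0); np.fill_diagonal(d_position, 0.5)
print(select_matches(d_chain, d_moments, d_position))  # [(0, 0), (1, 1), (2, 2)]
```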


Multi-resolution Usage

  • Each exposure contains information on many different scales.

    • This information is used to constrain the cross-identification of sources from one exposure to the next.

  • Wavelet transforms, in particular the à trous algorithm (Vandame, 2002), provide the most common transformation for building views of an exposure at increasingly lower resolutions.

  • Here, multi-resolution views of the image are created using median filtering, doubling the kernel size for each subsequent view.

    • A median filter replaces the wavelet transformation in our algorithm to avoid enhancing cosmic-rays.
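The median-filter construction described above can be sketched with scipy.ndimage; the function name, starting kernel size and level count here are illustrative choices, not MIRA's actual parameters:

```python
import numpy as np
from scipy.ndimage import median_filter

def multires_views(image, levels=4, base_kernel=3):
    """Build successively lower-resolution views of `image` by median
    filtering, doubling the kernel size at each level. A median filter
    is used instead of a wavelet transform so that single-pixel
    artifacts such as cosmic rays are suppressed, not enhanced."""
    views = [image]
    k = base_kernel
    for _ in range(levels):
        views.append(median_filter(image, size=k))
        k *= 2
    return views

# A cosmic-ray-like single hot pixel disappears in the smoothed views
img = np.zeros((32, 32))
img[16, 16] = 100.0
views = multires_views(img, levels=2)
print(views[1].max())  # 0.0 -- the 3x3 median removes the hot pixel
```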


Multi-resolution usage

[Figure: multi-resolution views of a single exposure, from the full-resolution image down to ‘30 Pixel smoothing’, with sources color-coded by signal strength]

Multi-resolution usage

  • Those sources which exhibit the greatest signal (shown in red and dark green in the previous figure) at the lowest resolution (‘30 Pixel smoothing’ in the previous figure) would be identified as objects for matching.

    • This eliminates confusion from weaker targets, while reducing the confusion generated by crowded fields.

  • The pixel area covered by these objects would then be examined at successively higher resolutions for the positions of all sources within the region to refine the positions of the target(s) in that region.

    • These positions would then be used for matching images taken at different positions, times and/or orientations.


Algorithm – Contour extraction

  • The images in each column represent two different observations of a section of the Carina Nebula (NGC3372).

  • The top row shows the low resolution, distortion-corrected image.

  • The objects’ edges are highlighted in the second row using a Laplacian-of-Gaussian filter.

  • The contours for all edges in the image are extracted as shown in the third row.

  • The weak-edged contours are eliminated iteratively using the Thin-and-Robust Zero-Crossing algorithm until only the significant contours are left, shown in the bottom row.

  • The contour for a single object has been highlighted in each column to illustrate that the same contour gets generated for a target from one exposure to the next.

Algorithms from: Dai, X., & Khorram, S., 1999, IEEE Trans. Geoscience and Remote Sensing, 37, 2351.
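The edge-detection step (second and third rows above) can be illustrated with a Laplacian-of-Gaussian filter and a simple sign-change zero-crossing test. This is only a sketch: the Thin-and-Robust Zero-Crossing pruning of weak contours is not reproduced, and the function name and sigma are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(image, sigma=2.0):
    """Mark object edges as the zero crossings of the
    Laplacian-of-Gaussian (LoG) filtered image: a pixel is an edge
    where the LoG response changes sign against a neighbour."""
    log = gaussian_laplace(image, sigma=sigma)
    edges = np.zeros(image.shape, dtype=bool)
    edges[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0   # vertical sign changes
    edges[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0   # horizontal sign changes
    return edges

# A bright disc produces a ring of zero crossings near its rim
y, x = np.mgrid[0:64, 0:64]
disc = ((y - 32)**2 + (x - 32)**2 < 12**2).astype(float)
edges = log_zero_crossings(disc)
```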


Algorithm – Chain-code encoding

  • This figure illustrates the processing of a single contour from each image.

  • The contour for this example is highlighted in the top row.

  • The subsequent rows illustrate the results of computing the modified Freeman chain-code for each contour (Li, H., et al., 1995, IEEE Trans. Image Processing, 4(3), 320).

  • These chain codes are invariant to scale, rotation, and translation, as well as being robust against noise in the contours, as seen in the bottom row.

Figure from: Dai, X., & Khorram, S., 1999, IEEE Trans. Geoscience and Remote Sensing, 37, 2351.
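The plain Freeman encoding underlying these chain codes can be sketched as follows; the normalisation that makes the modified codes scale- and rotation-invariant is omitted, and the direction convention is one common choice:

```python
# Freeman direction codes for 8-connected neighbours, as (d_row, d_col):
# 0 = east, then counter-clockwise (1 = north-east, 2 = north, ...)
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def freeman_chain_code(contour):
    """Encode a closed contour, given as an ordered list of (row, col)
    pixels, as the sequence of Freeman direction codes between
    successive points (wrapping from the last point back to the first)."""
    codes = []
    for (r0, c0), (r1, c1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(DIRECTIONS[(r1 - r0, c1 - c0)])
    return codes

# A 2x2-pixel square traversed clockwise from the top-left corner
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(freeman_chain_code(square))  # [0, 6, 4, 2]
```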


Algorithm – Invariant moments

  • The moment invariants describe planar shapes in a way which is translation, rotation, scale and reflection independent.

  • The center of gravity, for example, is defined as the first moment divided by the zeroth moment (the object area).

  • Second order moments describe the distribution of mass (intensity) around the center of gravity.

  • This method uses seven such invariant moments.

From: Dai, X., & Khorram, S., 1999, IEEE Trans. Geoscience and Remote Sensing, 37, 2351.
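The seven moment invariants can be computed directly from normalised central moments. This is a textbook implementation (Hu's invariants), not the MIRA code itself:

```python
import numpy as np

def hu_moments(image):
    """The seven moment invariants of a 2-D intensity image: functions
    of normalised central moments that are unchanged under translation,
    scaling and rotation of the shape."""
    y, x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    m00 = image.sum()
    cy, cx = (y * image).sum() / m00, (x * image).sum() / m00  # center of gravity

    def eta(p, q):
        # Normalised central moment of order (p, q)
        mu = ((y - cy)**p * (x - cx)**q * image).sum()
        return mu / m00**(1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02)**2 + 4 * n11**2,
        (n30 - 3 * n12)**2 + (3 * n21 - n03)**2,
        (n30 + n12)**2 + (n21 + n03)**2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12)**2 - 3 * (n21 + n03)**2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12)**2 - (n21 + n03)**2),
        (n20 - n02) * ((n30 + n12)**2 - (n21 + n03)**2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12)**2 - 3 * (n21 + n03)**2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12)**2 - (n21 + n03)**2),
    ])

# An identical (asymmetric) shape at two positions has identical moments
a = np.zeros((40, 40)); a[5:8, 5:9] = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 1, 2, 3]]
b = np.zeros((40, 40)); b[20:23, 12:16] = a[5:8, 5:9]
```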


Implementation of MIRA

  • The chain-codes and moments for each source can then be used to cross-match sources between images.

  • Once sources are successfully identified, a fit can be performed to determine the offset and, if desired, rotation and scale changes as well.

    • Technically, a full distortion model could also be determined if it were not removed initially, but our initial implementation assumes adequate distortion calibration and removal.

  • This initial fit is then refined using the object positions from higher-resolution views.
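The fit for shift plus optional rotation and scale can be posed as a linear least-squares problem over the matched pairs. A sketch assuming a similarity transform; the function name and conventions are invented for illustration:

```python
import numpy as np

def fit_similarity(xy_ref, xy_img):
    """Least-squares fit of x' = s*R(theta)*x + t mapping `xy_img` onto
    `xy_ref`, using matched (N, 2) position pairs. The system is linear
    in (a, b, tx, ty) with a = s*cos(theta), b = s*sin(theta)."""
    x, y = xy_img[:, 0], xy_img[:, 1]
    A = np.zeros((2 * len(x), 4))
    A[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
    A[1::2] = np.column_stack([y,  x, np.zeros_like(x), np.ones_like(x)])
    rhs = xy_ref.ravel()  # interleaved (x1, y1, x2, y2, ...)
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    scale = np.hypot(a, b)
    theta = np.degrees(np.arctan2(b, a))
    return (tx, ty), theta, scale

# Recover a known transform: 10 deg rotation, 2% scale, (5, -3) shift
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1000, size=(30, 2))
th = np.radians(10.0)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
ref = 1.02 * pts @ R.T + [5.0, -3.0]
(tx, ty), theta, scale = fit_similarity(ref, pts)
```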


Verification of MIRA Algorithm

  • Initial verification of MIRA included running it on pairs of images containing vastly different types of sources; specifically,

    • an extended source which filled the field of view

      • HST ACS/WFC observations of the Orion Nebula

    • a crowded field of point sources

      • HST ACS/WFC observations of 47 Tucanae (globular cluster) taken at different orientations

    • an extended source with variability

      • HST STIS observations of M87 (elliptical galaxy with jet)

  • PyDrizzle was used to generate distortion-free images for determining the offsets, and to interpret the WCS information to determine the rough overlap between the images


Verification – Orion

  • Initial distortion-free mosaic of each multi-chip image showing the overlap between successive exposures

    • Initial alignment in full output frame based solely on WCS for each image


Verification – Orion

  • MIRA was then used to compute the offsets between the images in each pair with no user-provided parameters except the input filenames.

  • The offset, or error in header WCS values, computed for the Orion images using MIRA was (Dx,Dy)=(15.18, -0.97) pixels

  • The computed offsets were then used to combine the images again using PyDrizzle.

    • A set of RGB images was generated for each pair using the alignment derived from the header WCS information only, and then after applying the shift found by MIRA.

    • One image is displayed in blue, the other in red.
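Such a comparison image can be produced by placing the two aligned exposures in separate channels of an RGB array; the following is a sketch, with the percentile stretch an illustrative choice rather than the actual recipe used:

```python
import numpy as np

def rgb_compare(img_a, img_b):
    """Stack two aligned exposures into an RGB array for blink-style
    comparison: the first image in the blue channel, the second in red.
    Well-registered sources overlap in both channels, misalignments show
    as separated red and blue sources, and residual single-colour dots
    are typically cosmic rays present in only one exposure."""
    scale = lambda im: np.clip(im / np.percentile(im, 99.5), 0, 1)
    rgb = np.zeros(img_a.shape + (3,))
    rgb[..., 0] = scale(img_b)  # red   <- second image
    rgb[..., 2] = scale(img_a)  # blue  <- first image
    return rgb

# Two small synthetic exposures -> an (8, 8, 3) comparison image
rng = np.random.default_rng(0)
a = rng.uniform(0.1, 1.0, (8, 8))
b = rng.uniform(0.1, 1.0, (8, 8))
rgb = rgb_compare(a, b)
```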


Verification – Orion (WCS)

Residual blue and red ‘sources’ are simply cosmic-rays from each input image.


Verification – Orion (MIRA aligned)

Residual blue and red ‘sources’ are simply cosmic-rays from each input image.


Verification – 47 Tuc

  • MIRA was then used to compute the offsets between the images in each pair with no user-provided parameters except the input filenames.

  • The offset, or error in header WCS values, computed by MIRA was (Dx,Dy)=(81.37, 31.34) pixels

    • average separation between stars: ~50 pixels.

  • The computed offsets were then used to combine the images again using PyDrizzle.

    • A new set of RGB images was generated for each pair in exactly the same way as used for the Orion test images.


Verification – 47 Tuc (WCS)

  • Alignment based on original WCS header information: Blue – pointing 1, Red – pointing 2


Verification – 47 Tuc (MIRA aligned)

  • Alignment based on shifts from MIRA: Blue – pointing 1, Red – pointing 2


Verification – M87

  • This algorithm successfully computed the shifts for HST/STIS images

    • M87 jet with photometric variations along the jet

    • Cross-correlation was confused by the variations

    • Not enough point (or point-like) sources to use catalog matching

    • Default parameter settings (the same as used for the Orion and 47 Tuc test cases) were sufficient for MIRA to get the correct alignment.


Verification – M87

[Figure: two STIS exposures of the M87 jet, with the core and the first ‘knot’ labeled in each image]

  • The features on the left in each image (circled in red) drive any registration effort due to the point-like nature of the core. However, the first ‘knot’ changes intensity, causing confusion for cross-correlation.

  • MIRA, on the other hand, was not confused at all and returned the correct shifts using only default settings.

Images courtesy of: Juan Madrid (STScI) (Madrid, J., et al., 2007, Astrophys. Space Sci., (preprint))


‘Super-association’ testing

  • A couple of ‘super-associations’ have been provided by the Canadian Astronomy Data Centre (CADC) for use in testing this algorithm.

    • ‘Super-associations’ are sets of images taken of the same region of the sky at different epochs (visits) with different guide stars for pointing.

  • A good fit was obtained for at least one super-association using only default parameter settings (same as Orion, 47Tuc, and STIS test cases).


MIRA Status

  • The current version of the task exhibits some problems which will need to be resolved prior to wide-scale use. For example:

    • Memory usage still needs work to allow it to run on arbitrary numbers of input images.

    • Work is also needed on the code to handle ‘walking mosaics’, where some images don’t overlap at all. The problems come from how to build up the reference frame from the detected sources, not from the finding or matching algorithms themselves.

  • Work to develop MIRA for wide-spread use has been undertaken at STScI in an attempt to address the alignment problems faced by the CADC and by the Hubble Legacy Archive (HLA), as well as for stand-alone use by researchers using MultiDrizzle to combine and clean their own observations.


Summary

  • MIRA, by virtue of working without user-supplied parameters, seems suitable for completely automated use regardless of the types of objects in the images.

  • MIRA exemplifies what can happen when new algorithms developed in other fields are applied to astronomical imaging to provide new, more robust solutions to problems being faced today.

  • Other algorithms show potential as well, such as Multiresolutional Critical-Point Filters (Shinagawa, Y., & Kunii, T.L., 1998, IEEE Trans. Pattern Analysis and Machine Intelligence, 20, 994).

  • New solutions, like MIRA (and MONTAGE and IPAC?), will be needed soon to address the problems faced by the ever expanding archives of astronomical data in an automated way.

  • It is hoped that MIRA will serve as an example of a new way to meet the image registration needs of the astronomical community.


Acknowledgements

  • This project continues thanks to the efforts of Nadia Dencheva (STScI).

  • This project also thanks Daniel Durand (CADC) for providing test cases, for evaluating initial versions of this algorithm in real-world situations, and for providing useful feedback on the successes and (more importantly) the failures of the tests.

  • Test data used in this development has also been graciously provided by J. Madrid (STScI) and V. Platais (STScI).
