
Montage: An Astronomical Image Mosaic Service for the NVO






Presentation Transcript


  1. Montage: An Astronomical Image Mosaic Service for the NVO Anastasia C. Laity, IPAC Nate Anagnostou, IPAC Bruce Berriman, IPAC John Good, IPAC Joseph C. Jacob, JPL Daniel S. Katz, JPL Thomas Prince, CIT

  2. Focus Session • Introduction • Montage Availability • New Features • Walk-through: 3-color 2MASS/MSX mosaic • Montage science applications: SWIRE case study • Questions and Discussion

  3. Intro to Montage • Software toolkit to generate astronomical image mosaics • User specifies size, rotation, WCS-compliant projection and coordinate system • Background modeling and rectification capabilities • Portable and highly parallelizable to run in multi-processor or grid environments

  4. Montage Availability • Download from http://montage.ipac.caltech.edu • Full user guide on website, including detailed API • Easy installation: includes copies of required libraries, so a single “make” command builds all of Montage.

  5. Features of v2.2 • Improvements to computational algorithms: • Fast reprojection between tangent-plane projections • Coaddition of arbitrarily large files • Creation of mosaics on computational grids or clusters, supporting two parallel computing technologies: • Message Passing Interface (MPI) • Planning for Execution in Grids (Pegasus)

  6. 2MASS/MSX Mosaic • MSX A-band: 1 image (retrieved from IRSA's MSX server), CAR projection • 2MASS: 170 images, SIN projection • l=345.2, b=1.24; 2.4 square degrees

  7. l=345.2, b=1.24; 2.4 square degrees • Red: MSX A (8.28 µm) • Green: 2MASS K (2.17 µm) • Blue: 2MASS J (1.25 µm)

  8. Fast Reprojection • mProjectPP: Fast reprojection between tangent-plane projections (e.g., SIN to SIN, SIN to TAN) • Based on the Mopex algorithm (in collaboration with the Spitzer Science Center) • Uses plane-to-plane solutions instead of projecting input/output to the celestial sphere and calculating overlap on the sky • Roughly 20x speed-up for 2MASS Atlas images • Only applicable to tangent-plane projections, however…
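The geometric idea can be sketched in Python. This is an illustration only, not the actual mProjectPP implementation (which uses flux-conserving, polynomial plane-to-plane solutions): because the gnomonic (TAN) projection has closed-form forward and inverse mappings, a point in one tangent plane can be carried straight to another tangent plane without the general spherical-overlap machinery.

```python
import math

def tan_forward(ra, dec, ra0, dec0):
    """Gnomonic (TAN) projection: sky position (radians) -> standard
    coordinates (xi, eta) in the tangent plane at (ra0, dec0)."""
    cos_c = (math.sin(dec0) * math.sin(dec)
             + math.cos(dec0) * math.cos(dec) * math.cos(ra - ra0))
    xi = math.cos(dec) * math.sin(ra - ra0) / cos_c
    eta = (math.cos(dec0) * math.sin(dec)
           - math.sin(dec0) * math.cos(dec) * math.cos(ra - ra0)) / cos_c
    return xi, eta

def tan_inverse(xi, eta, ra0, dec0):
    """Inverse gnomonic: standard coordinates -> sky position (radians)."""
    rho = math.hypot(xi, eta)
    if rho == 0.0:
        return ra0, dec0
    c = math.atan(rho)
    dec = math.asin(math.cos(c) * math.sin(dec0)
                    + eta * math.sin(c) * math.cos(dec0) / rho)
    ra = ra0 + math.atan2(xi * math.sin(c),
                          rho * math.cos(dec0) * math.cos(c)
                          - eta * math.sin(dec0) * math.sin(c))
    return ra, dec

def plane_to_plane(xi, eta, frame_a, frame_b):
    """Map standard coordinates in tangent frame A directly into
    tangent frame B by composing the two closed-form mappings."""
    ra, dec = tan_inverse(xi, eta, *frame_a)
    return tan_forward(ra, dec, *frame_b)
```

Because both mappings are analytic, the composition is cheap per pixel, which is the source of the speed-up relative to computing pixel overlap on the celestial sphere.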

  9. TAN Header Simulation • Many other projections can be approximated by a TAN header with distortion parameters • mTANHdr analyzes a FITS header (in any projection) and determines if there is an equivalent, distorted-TAN projection within a specified tolerance • Outputs a distorted-TAN header template for use with mProjectPP to speed up non-TAN transformations
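The distorted-TAN (TAN-SIP) headers that mTANHdr emits encode the distortion as polynomials in the pixel offsets (u, v) from the reference pixel: u' = u + Σ A_p_q·uᵖvᵠ, and similarly with B coefficients for v. A minimal evaluation sketch, using made-up toy coefficients rather than the MSX header values:

```python
def sip_correct(u, v, a_coeffs, b_coeffs):
    """Apply SIP-style polynomial distortion to pixel offsets (u, v)
    measured from the reference pixel. a_coeffs/b_coeffs map an
    exponent pair (p, q) to the A_p_q / B_p_q header value."""
    du = sum(c * u**p * v**q for (p, q), c in a_coeffs.items())
    dv = sum(c * u**p * v**q for (p, q), c in b_coeffs.items())
    return u + du, v + dv

# Toy coefficients for illustration only (not the MSX values):
a = {(2, 0): 1.0e-6}
b = {(0, 2): -2.0e-6}
```

mProjectPP can then work entirely in the distorted tangent plane, so a non-TAN projection like CAR is handled at tangent-plane speed within the tolerance reported by mTANHdr.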

  10. Mosaic Workflow • Create tables of image metadata (WCS, FITS geometry) for the Montage modules to read:

  >mImgtbl raw_K raw_K.tbl
  [struct stat="OK", count=170, failed=0, nooverlap=0]
  >mImgtbl raw_J raw_J.tbl
  [struct stat="OK", count=170, failed=0, nooverlap=0]
  >mImgtbl raw_MSX raw_MSX.tbl
  [struct stat="OK", count=1, failed=0, nooverlap=0]

  • Make a header template that completely encloses the 2MASS images:

  >mMakeHdr raw_K.tbl template.hdr
  [struct stat="OK", count=170, clon=254.587292, clat=-40.251753, lonsize=2.353611, latsize=2.450000, posang=359.891421, lon1=256.154189, lat1=-41.468162, lon2=253.014309, lat2=-41.463621, lon3=253.076184, lat3=-39.014964, lon4=256.104469, lat4=-39.019343]
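The idea behind mMakeHdr's output above can be approximated as: take the sky bounding box of all image corners and report its center and extent. A much-simplified sketch (it ignores longitude wraparound, pole crossings, and the position-angle fit that the real module performs):

```python
def bounding_header(corners):
    """Center and extent (degrees) of a set of image corner
    coordinates -- a simplified stand-in for the bounding header
    that mMakeHdr derives from an image metadata table."""
    lons = [lon for lon, lat in corners]
    lats = [lat for lon, lat in corners]
    return {"clon": (min(lons) + max(lons)) / 2.0,
            "clat": (min(lats) + max(lats)) / 2.0,
            "lonsize": max(lons) - min(lons),
            "latsize": max(lats) - min(lats)}
```

For example, four corners of a 2x2 degree region centered at (11, -4) yield clon=11, clat=-4, lonsize=latsize=2.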

  11. Mosaic Workflow • Create a distorted-TAN header for the MSX data:

  >mGetHdr raw_MSX/msx_4deg.fits msx.hdr
  [struct stat="OK", ncard=23]
  >mTANHdr -c eq msx.hdr msxtan.hdr
  [struct stat="OK", fwdxerr=0.00351429, fwdyerr=0.00546297, fwditer=51, revxerr=0.00335636, revyerr=0.0382581, reviter=9]

  Original header (msx.hdr):
  CTYPE1 = 'GLON-CAR'
  CTYPE2 = 'GLAT-CAR'
  CRVAL1 = 345.199402
  CRVAL2 = 1.24101007
  CROTA2 = 0.000000000

  Alternate header (msxtan.hdr):
  CTYPE1 = 'RA---TAN-SIP'
  CTYPE2 = 'DEC--TAN-SIP'
  CRVAL1 = 254.9200850763
  CRVAL2 = -40.4340776489
  A_ORDER = 3
  A_0_0 = -6.700e-05
  A_0_1 = 7.696e-11
  A_0_2 = -1.725e-15
  A_0_3 = -7.897e-20
  A_1_0 = -1.319e-07
  A_1_1 = -2.746e-14
  A_1_2 = -8.749e-19
  A_1_3 = -1.804e-17
  A_2_0 = -4.473e-11
  A_2_1 = -1.076e-19

  12. Mosaic Workflow • Reproject the 2MASS images:

  >mProjExec -f -p raw_K raw_K.tbl template.hdr proj_K stats_K.tbl
  [struct stat="OK", count=170, failed=0, nooverlap=0]
  >mProjExec -f -p raw_J raw_J.tbl template.hdr proj_J stats_J.tbl
  [struct stat="OK", count=170, failed=0, nooverlap=0]

  • Reproject the MSX image:

  >mProjectPP -i msxtan.hdr raw_MSX/msx_4deg.fits final_MSX.fits template.hdr
  [struct stat="OK", time=6082]

  • Build new metadata tables (new geometry post-reprojection):

  >mImgtbl proj_K proj_K.tbl
  [struct stat="OK", count=170, badfits=0]
  >mImgtbl proj_J proj_J.tbl
  [struct stat="OK", count=170, badfits=0]
  >mImgtbl proj_MSX proj_MSX.tbl
  [struct stat="OK", count=0, badfits=0]

  13. Mosaic Workflow • Background rectification:

  1. Which files overlap each other?
  >mOverlaps proj_K.tbl diff_K.tbl
  [struct stat="OK", count=454]
  2. Create "difference" images of the overlap regions:
  >mDiffExec -p proj_K diff_K.tbl template.hdr diff_K
  [struct stat="OK", count=454, failed=0]
  3. Fit planes to the difference images:
  >mFitExec diff_K.tbl fits_K.tbl diff_K
  [struct stat="OK", count=454, failed=0, warning=0, missing=0]
  4. Calculate the plane to be removed from each image:
  >mBgModel proj_K.tbl fits_K.tbl corrections_K.tbl
  [struct stat="OK"]
  5. Subtract the planar backgrounds from the images:
  >mBgExec -p proj_K proj_K.tbl corrections_K.tbl corr_K
  [struct stat="OK", count=170, nocorrection=0, failed=0]

  (Slide shows the K-band mosaic before and after background rectification.)
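Step 3 above (mFitExec) fits a plane z = a·x + b·y + c to each difference image in the least-squares sense; mBgModel then solves globally for a consistent set of per-image corrections. The per-image fit can be sketched as follows (pure Python normal equations; the real module also handles blank pixels and outliers):

```python
def fit_plane(samples):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) samples,
    a simplified version of the per-difference-image fit."""
    # Accumulate the 3x3 normal equations.
    sxx = sxy = syy = sx = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in samples:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y; n += 1.0
        sxz += x * z; syz += y * z; sz += z
    A = [[sxx, sxy, sx],
         [sxy, syy, sy],
         [sx,  sy,  n]]
    b = [sxz, syz, sz]
    # Solve by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return tuple(coeffs)  # (a, b, c)
```

Because only planes are removed, point-source fluxes are preserved while the image-to-image background offsets and gradients are rectified.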

  14. Mosaic Workflow • Coaddition of the final 2MASS mosaics:

  >mAdd -e -p corr_K proj_K.tbl template.hdr final_K.fits
  [struct stat="OK", time=144]
  >mAdd -e -p corr_J proj_J.tbl template.hdr final_J.fits
  [struct stat="OK", time=144]

  (Slide shows the J-band and K-band mosaics.)
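Conceptually, mAdd averages the input pixels that land on each output pixel, weighted by their fractional area of overlap. A per-pixel sketch of that idea (illustrative only; the real module also tracks coverage and streams arbitrarily large files rather than holding them in memory):

```python
def coadd(pixel_samples):
    """Area-weighted mean of the input-pixel samples covering one
    output pixel. Each sample is (flux, area), where area is the
    fractional overlap of the input pixel with the output pixel."""
    total_area = sum(area for _flux, area in pixel_samples)
    if total_area == 0.0:
        return float("nan")  # no coverage: blank output pixel
    return sum(flux * area for flux, area in pixel_samples) / total_area
```

For example, two half-overlapping inputs with fluxes 10 and 20 coadd to 15.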

  15. Mosaic Workflow • Crop out the edges • Subsample to a manageable size for presentation:

  >mSubimage -p final_K.fits final_K_crop.fits 420 882 7633 7497
  [struct stat="OK"]
  >mSubimage -p final_J.fits final_J_crop.fits 420 882 7633 7497
  [struct stat="OK"]
  >mSubimage -p final_MSX.fits final_MSX_crop.fits 420 882 7633 7497
  [struct stat="OK"]
  >mShrink final_K_crop.fits final_K_crop_8.fits 8
  [struct stat="OK"]
  >mShrink final_J_crop.fits final_J_crop_8.fits 8
  [struct stat="OK"]
  >mShrink final_MSX_crop.fits final_MSX_crop_8.fits 8
  [struct stat="OK"]

  16. Final mosaics (J, K, and MSX images shown on the slide)

  17. 3-Color JPEG • mJPEG: command-line JPEG generator • Can find starting-point ranges using Oasis • Tweak the color stretch until the desired balance is reached:

  >mJPEG -red final_MSX_crop_4.fits 0% 99.95% 2 \
         -green final_K_crop_4.fits 0% 99.3% 2 \
         -blue final_J_crop_4.fits 0% 99.4% 2 \
         -out jpeg/r99.95_g99.3_b99.4_crop_4.jpg
  [struct stat="OK"]
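The percentage arguments set the black and white points of each channel from the pixel-value histogram. A linear version of that idea can be sketched as follows (the real mJPEG also supports gamma and other stretch modes, and the percentile interpolation here is simplified):

```python
def percentile_stretch(values, lo_pct, hi_pct):
    """Map pixel values to 0-255 using percentile limits, in the
    spirit of mJPEG's '0% 99.95%' range arguments. Linear stretch
    only; nearest-rank percentiles for simplicity."""
    ordered = sorted(values)
    def pct(p):
        idx = min(len(ordered) - 1, int(p / 100.0 * (len(ordered) - 1)))
        return ordered[idx]
    lo, hi = pct(lo_pct), pct(hi_pct)
    span = (hi - lo) or 1.0  # avoid division by zero on flat images
    return [max(0, min(255, int(255 * (v - lo) / span))) for v in values]
```

Clipping the top fraction of a percent of pixels (e.g., 99.95%) keeps a few bright stars from compressing the rest of the dynamic range into darkness.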

  18. Case Study: SWIRE • Spitzer Wide-area InfraRed Extragalactic Survey (SWIRE) • Discovery of new galaxies with redshift z~3 • Supporting observations using ground- and space-based telescopes • Different telescopes and image parameters (rotation, projection, pixel scales) • Data for slides provided by SWIRE team; mosaicking by Anastasia Alexov and John Good

  19. SWIRE Tiling Scheme • Common tiling scheme based on Spitzer pipeline-processed, mosaicked data • Ancillary data must be transformed to the same tiling scheme and image parameters as the Spitzer data • Makes data overlays trivial • Basis for multi-wavelength source extraction
  • ELAIS N1 Field: Backdrop: ISSA; large yellow box: outline of the Spitzer data; large green box: optical data coverage area; other footprints: various ancillary data
  • Raw optical stats: La Palma observatory, 2.5m Isaac Newton Telescope; 270 images [6229 x 6203 pix], ~0.00009 deg/pixel (~0.33 arcsec/pixel); images covered 15 of the 17 Spitzer tiles; 5 optical bands (g, i, r, u, z) [54 images per band]; 15 tiles x 5 bands = 75 mosaics (results); 1 to 14 optical images covering any given tile
  • Mosaicking: Input: TAN-TAN projection at 1 degree rotation; Output: TAN-TAN projection at 315 degrees rotation

  20. SWIRE Mosaic Processing • Optical FITS files are already a mosaic of 4 CCDs, reduced by observer • For Montage, treat like 4 separate images • mProjectPP can use sections as input

  21. SWIRE Workflow • mProjectPP: project each slice of each image • mFlattenExec: bring all images to the same base flux level • mAdd: create mosaic of all the slices • mShrink: create version of mosaic 10 times smaller as browse product

  22. Resulting Mosaic • Final products: mosaic of optical data corresponding to each tile • Pictured: i-band, tile 2_3, shrunk by factor of 100

  23. 3-Color Mosaic • 3-color mosaic of tile 2_3 • Common tiling scheme allows overlays • Spitzer IRAC channel 1 (3.6 µm) is green; Spitzer IRAC channel 2 (4.5 µm) is red; i-band optical data is blue

  24. Full-Resolution • At full resolution, the high-redshift, non-stellar objects are all visible

  25. Q&A / Discussion ?

  26. Parallelization

  27. Pegasus Implementation • Pegasus • Developed at Information Sciences Institute (ISI), USC • Transforms “abstract workflows” into “concrete workflows” to be executed on a computational grid (Condor-G) • http://pegasus.isi.edu • Running Pegasus version of Montage requires some additional Montage modules (available on request)

  28. MPI Implementation • MPI is the standard that defines a specification for message passing • When writing MPI code, the author is responsible for working out the details of communication using one-to-one and all-to-all operations • Examples:
  • Send 10 floats from array val to processor 1 (with tag 0):
  ReturnCode = MPI_Send(val, 10, MPI_FLOAT, 1, 0, MPI_COMM_WORLD)
  • Receive 10 floats from any processor (with any tag) and store them in array in:
  ReturnCode = MPI_Recv(in, 10, MPI_FLOAT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status)
  • Globally sum all processors' copies of the floating-point variable x and store the result in every processor's copy of gx:
  ReturnCode = MPI_Allreduce(&x, &gx, 1, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD)
  • MPI versions of the parallelizable Montage modules are available on request
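The data-parallel pattern behind the MPI-enabled modules is straightforward: with N ranks, each rank processes every N-th image independently (reprojection, difference fitting, and background correction all parallelize per image), and the results are then combined. A plain-Python illustration of that work division, without MPI itself (the exact scheduling inside the Montage MPI modules may differ):

```python
def assign_round_robin(images, nproc):
    """Round-robin division of per-image tasks among nproc ranks --
    an illustration of the data-parallel pattern used by the MPI
    Montage modules, not their actual scheduler."""
    return {rank: images[rank::nproc] for rank in range(nproc)}
```

For example, 7 images over 3 ranks gives rank 0 images [0, 3, 6], rank 1 images [1, 4], and rank 2 images [2, 5]; because the per-image tasks are independent, no inter-rank communication is needed until the final coaddition.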
