
APOGEE-2 Data Infrastructure



  1. APOGEE-2 Data Infrastructure
  Jon Holtzman (NMSU), APOGEE team

  2. Data infrastructure
  • Data infrastructure for APOGEE-2 will be similar to that of APOGEE-1, generalized to multiple observatories and with improved tracking of processing
  • APOGEE raw data and data products are stored on the Science Archive Server (SAS)
  • Reduction and analysis software is (mostly) managed through the SDSS SVN repository
  • Raw and reduced data are described (mostly) through the SDSS data model
  • Data and processing are documented via SDSS web pages and technical papers

  3. Raw data
  • APOGEE instrument reads continuously (every ~10 s) as data accumulate; 3 chips at 2048x2048 each
  • Raw data are stored on the instrument control computer (current capacity is several weeks of data)
  • Individual readouts are "annotated" with information from the telescope and stored on the "analysis" computer (current capacity is several months). These frames are archived to local disks that are "shelved" at APO (currently 20 x 3 TB disks)
  • "Quick reduction" software at the observatory assembles readouts into data cubes and compresses them (lossless) for archiving on the SAS (see the sketch below)
  • Maximum daily compressed data volume: ~60 GB
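
A minimal sketch of the cube-assembly and compression step above, assuming each chip's readouts arrive as 2048x2048 integer arrays. The file name, header keyword, and read count are illustrative, not the real quick-reduction code:

```python
import numpy as np
from astropy.io import fits

def write_raw_cube(readouts, outfile):
    """readouts: list of 2048x2048 integer arrays from successive ~10 s reads
    of one chip. Stack into a cube and write with lossless Rice compression."""
    cube = np.stack(readouts).astype(np.int32)                     # (n_reads, 2048, 2048)
    hdu = fits.CompImageHDU(data=cube, compression_type='RICE_1')  # lossless for integers
    hdu.header['NREAD'] = (len(readouts), 'number of readouts in cube')
    fits.HDUList([fits.PrimaryHDU(), hdu]).writeto(outfile, overwrite=True)

# e.g., 47 readouts of one chip from a single exposure:
reads = [np.zeros((2048, 2048), dtype=np.uint16) for _ in range(47)]
write_raw_cube(reads, 'apR-a-00000001.fits')  # apR-style name; exposure ID is a placeholder
```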

  4. Raw data
  • Does not include NMSU 1m + APOGEE data
  • LCO data will be concurrent
  • Total 2.5m raw data to date: ~11 TB

  5. Initial processing
  • "Quick reduction" software estimates S/N (at H = 12.2), which is inserted into the plate database for use in autoscheduling decisions (see the sketch below)
  • APOGEE-1: data transferred to the SAS the next day, transferred to NMSU later that day, processed with the full pipeline the following day, updated S/N loaded into platedb
  • APOGEE-2 proposal: process data at the observatory with the full pipeline the next day, and/or improve the "quick reduction" S/N estimate
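
An illustrative sketch of the S/N bookkeeping behind autoscheduling: scale each measured S/N to the fiducial magnitude H = 12.2, then test whether the accumulated S/N for the plate has reached a goal. The sqrt-flux noise scaling and the goal of S/N = 100 (an often-quoted APOGEE target) are this sketch's assumptions, not the pipeline's actual logic:

```python
import math

def snr_at_fiducial(snr_meas, h_mag, h_fid=12.2):
    """Scale a star's measured S/N to the fiducial magnitude H = 12.2,
    assuming source-photon-limited noise (S/N proportional to sqrt(flux))."""
    return snr_meas * 10.0 ** (-0.2 * (h_fid - h_mag))

def plate_done(visit_snrs, goal=100.0):
    """visit_snrs: fiducial S/N of each visit; S/N accumulates in quadrature."""
    return math.sqrt(sum(s * s for s in visit_snrs)) >= goal

snr = snr_at_fiducial(snr_meas=38.0, h_mag=11.0)  # scaled down to H = 12.2
print(plate_done([snr, 41.0, 44.0]))              # e.g. after three visits
```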

  6. Pipeline processing
  • Three main stages (+1 post-processing step); a toy driver showing their ordering follows this list
  • APRED: processing of individual visits (multiple exposures at different detector spectral dither positions) into visit-combined spectra, with initial RV estimates. Can be run daily
  • APSTAR: combine multiple visits into combined spectra, with final RV determination
    • For APOGEE-1, has been run annually (DR10: year 1; DR11: years 1+2)
  • ASPCAP: process combined (or resampled visit) spectra through the stellar parameters and chemical abundances pipeline
    • For APOGEE-1, has been run 3 times
  • ASPCAP/RESULTS: apply calibration relations to the derived parameters and set flag values for them
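
A toy driver showing only the ordering and cadence of these stages. Every function body is a placeholder standing in for a large IDL/F95 package; the dependency structure is the only part taken from the slide:

```python
def apred(exposure_set):
    """Visit-level reduction: dither-combine exposures, initial RV."""
    return {'visit': exposure_set, 'rv_init': 0.0}

def apstar(visits):
    """Combine all visits to a star; final RV determination."""
    return {'star': visits, 'rv': 0.0}

def aspcap(star):
    """Stellar parameters and abundances from the combined spectrum."""
    return {'teff': 4750.0, 'logg': 2.6, 'm_h': -0.2}

def aspcap_results(params):
    """Post-processing: apply calibration relations, set flags."""
    return dict(params, calibrated=True, flags=[])

visits = [apred(night) for night in ('mjd57000', 'mjd57001')]  # can run daily
star = apstar(visits)                                          # run per release
print(aspcap_results(aspcap(star)))
```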

  7. APOGEE data products
  • Exposures (maybe not of general interest?)
    • Data cubes (apR)
    • 2D images (ap2D)
    • Extracted spectra (ap1D)
    • Sky-subtracted and telluric-corrected spectra (apCframe)
  • Visit spectra
    • Combine multiple exposures at different dither positions
    • apVisit files: native wavelength scale, but with a wavelength array
  • Combined spectra
    • Combine multiple visits; requires relative RVs (see the resampling sketch below)
    • apStar files: spectra resampled to a log(lambda) scale
  • Derived products from spectra
    • Radial velocities and scatter from multiple measurements (done during combination)
    • Stellar parameters/chemical abundances from the best-fitting template
      • Parameters: Teff, log g, microturbulence (fixed), [M/H], [alpha/M], [C/M], [N/M]
      • Abundances for 15 individual elements
    • aspcapStar files: stellar parameters of the best fit, pseudo-continuum-normalized spectra, and best-fitting templates
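
A minimal sketch of the visit-combination step: shift each visit to the star's rest frame using its relative RV and resample onto the common uniform log10(lambda) grid of the apStar files (the grid constants here follow the published apStar data model). The straight mean at the end stands in for the pipeline's error-weighted, bad-pixel-masked combination:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def resample_visit(wave, flux, rv_kms, crval1=4.179, cdelt1=6.0e-6, npix=8575):
    """Shift one visit spectrum to the stellar rest frame and interpolate it
    onto the uniform log10(wavelength) grid used by apStar files."""
    rest_wave = wave / (1.0 + rv_kms / C_KMS)            # remove the measured RV
    grid = 10.0 ** (crval1 + cdelt1 * np.arange(npix))   # common grid, ~15100-17000 A
    return grid, np.interp(grid, rest_wave, flux)        # assumes rest_wave ascending

def combine_visits(visits):
    """visits: iterable of (wave, flux, rv_kms) tuples for one star."""
    fluxes = [resample_visit(w, f, rv)[1] for w, f, rv in visits]
    return np.mean(fluxes, axis=0)
```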

  8. APOGEE data volume
  • Raw data:
    • 2.5m+APOGEE: ~4 TB/year (APOGEE-1) → ~6 TB/year with MaNGA co-observing
    • 1m+APOGEE: ~2 TB/year
    • LCO+APOGEE: ~3 TB/year
    • TOTAL APOGEE-1 + APOGEE-2: ~75 TB (see the estimate below)
  • Processed visit files: ~3 TB/year (80% individual exposure reductions)
  • Processed combined star files: ~500 GB / 100,000 stars
  • Processed ASPCAP files: raw FERRE files ~500 GB / 100,000 stars
    • Bundled output: ~100 GB / 100,000 stars
  • TOTAL APOGEE-1 + APOGEE-2 (one reduction!): ~40 TB
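
A back-of-envelope check of the ~75 TB raw total, using the per-facility rates above. The survey durations are assumptions made here (roughly 3 yr of APOGEE-1, 6 yr of APOGEE-2 at APO, ~4 yr at LCO), not values from the slides:

```python
# (TB/year, assumed years of operation) per facility
rates = {
    'APO 2.5m, APOGEE-1':           (4, 3),
    'APO 2.5m + MaNGA, APOGEE-2':   (6, 6),
    'NMSU 1m, APOGEE-2':            (2, 6),
    'LCO 2.5m, APOGEE-2':           (3, 4),
}
total = sum(rate * years for rate, years in rates.values())
print(f'estimated raw total: ~{total} TB')  # ~72 TB, consistent with ~75 TB
```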

  9. APOGEE data access
  • "Flat files" available via the SDSS SAS: all intermediate and final data product files, plus summary "wrap-up" (catalog) files
  • "Catalog files" available via the SDSS CAS: apogeeVisit, apogeeStar, aspcapStar
  • Spectrum files available via the SDSS API and web interface (see the sketch below)
  • Planning 4 data releases in SDSS-IV:
    • DR14: July 2017 (data through July 2016)
    • DR15: July 2018 (data through July 2017; first APOGEE-S)
    • DR16: July 2019 (data through July 2018)
    • DR17: Dec 2020 (all data)
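
An illustrative flat-file retrieval from the SAS over plain HTTPS. The base URL is the public SAS; the path pattern, reduction version ("r8"), and field directory are assumptions for illustration (check the SDSS data model for the release you actually want), and the 2MASS-style ID is a placeholder, not a real star:

```python
import requests

base = 'https://data.sdss.org/sas'
star_id = '2M00000000+0000000'      # placeholder object ID
path = f'dr14/apogee/spectro/redux/r8/stars/apo25m/4000/apStar-r8-{star_id}.fits'

resp = requests.get(f'{base}/{path}', timeout=60)
resp.raise_for_status()             # fails for a placeholder ID, as expected
with open(f'apStar-r8-{star_id}.fits', 'wb') as out:
    out.write(resp.content)
```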

  10. APOGEE software products
  • apogeereduce: IDL reduction routines (apred and apstar)
  • aspcap
    • speclib: management of spectral libraries, but not all input software (no stellar atmospheres code; limited spectral synthesis code)
    • ferre: F95 code to interpolate in the libraries and find the best fit (see the conceptual sketch below)
    • idlwrap: IDL code to manage ASPCAP processing
  • apogeetarget: IDL code for targeting
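
A conceptual sketch of what FERRE does: interpolate within a precomputed grid of synthetic spectra and find the parameters that minimize chi-square against an observed spectrum. Real FERRE works in many dimensions with fast optimizers and continuum handling; this toy is 1-D (Teff only), with random stand-in spectra:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar

teff_nodes = np.array([4000.0, 4500.0, 5000.0, 5500.0])       # grid nodes (K)
rng = np.random.default_rng(0)
library = rng.normal(1.0, 0.01, size=(4, 300))                # stand-in library spectra
spectrum_at = interp1d(teff_nodes, library, axis=0)           # spectrum as f(Teff)

obs = spectrum_at(4720.0) + rng.normal(0.0, 0.005, size=300)  # fake observation
err = np.full(300, 0.005)

chi2 = lambda teff: np.sum(((obs - spectrum_at(teff)) / err) ** 2)
best = minimize_scalar(chi2, bounds=(4000.0, 5500.0), method='bounded')
print(f'best-fit Teff ~ {best.x:.0f} K')                      # recovers ~4720 K
```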

  11. APOGEE pipeline processing
  • Software all installed and running on Utah servers
  • Software already in pipeline form (a few lines per full reduction step to distribute work among multiple machines/processors and collect the results; see the sketch below)
  • Need to improve distribution of knowledge and operation among the team
  • Some external data/software required for ASPCAP operation:
    • Generation of stellar atmospheres (Kurucz and/or MARCS)
    • Generation of synthetic spectra (ASSET, but considering MOOG and TURBOSPECTRUM)
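
A sketch of the "few lines per reduction step" pattern: farm independent units of work (here, nights of data through a stand-in APRED step) out to a pool of worker processes. The function body and MJD list are placeholders; only the fan-out/collect pattern is the point:

```python
from concurrent.futures import ProcessPoolExecutor

def run_apred(mjd):
    # placeholder for one night's visit-level reduction
    return f'apred done for MJD {mjd}'

if __name__ == '__main__':
    mjds = [57000, 57001, 57002, 57003]          # independent nights
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(run_apred, mjds):  # distribute, then collect
            print(result)
```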
