Presentation Transcript


  1. IC59+40 Point Source Analysis Mike Baker, Juan A Aguilar, Jon Dumm, Chad Finley, Naoko Kurahashi, Teresa Montaruli 27 April 2011 http://wiki.icecube.wisc.edu/index.php/IC59+40_Point_Source_Analysis

  2. Goals: Merge the IC40 and IC59 PS samples for a common analysis, providing the best sensitivity possible. Reproduce all searches from IC40 by the Madison PS group: time-integrated all-sky scan, source list, stacking, (galactic plane); time-dependent triggered/untriggered flares. Provide a data sample for other IC59 analyses.

  3. IC59 data: May 20, 2009 to May 31, 2010; 348 days of livetime out of 375 days of running; 93% of data used in the analysis. Only runs on the official good-run list were kept, and all runs with light in the detector or with more than two dropped strings were removed. For comparison, IC40 had 375 days of livetime with 92% of data used.

  4. Intro to PS analysis: signal PDF; PSF and energy distribution; p-value from scrambling.
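
The scrambling step above is the backbone of the significance estimate. A minimal sketch in Python, assuming a hypothetical test_statistic() callable that evaluates the point-source likelihood ratio at a given source position; the event arrays and source coordinates are placeholders too:

```python
import numpy as np

def scramble_p_value(ra, dec, src_ra, src_dec, test_statistic,
                     n_trials=10000, seed=0):
    """Estimate the p-value of the observed TS by scrambling right ascensions."""
    rng = np.random.default_rng(seed)
    ts_obs = test_statistic(ra, dec, src_ra, src_dec)
    n_above = 0
    for _ in range(n_trials):
        # Randomizing RA destroys any clustering on the sky but keeps the
        # declination-dependent detector acceptance, so each trial is a
        # background-only sky drawn directly from the data.
        ra_scrambled = rng.uniform(0.0, 2.0 * np.pi, size=ra.size)
        if test_statistic(ra_scrambled, dec, src_ra, src_dec) >= ts_obs:
            n_above += 1
    return n_above / n_trials
```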

  5. Upgoing region: Two sets of cuts were tested: one based on simple cuts on reconstruction-quality parameters, similar to IC40; one using a Boosted Decision Tree (BDT), with data as the background sample.

  6. Boosted Decision Tree: To improve signal efficiency at lower energies, we trained BDTs with both an E^-2 and an E^-2.7 signal spectrum (bdtlow and bdthigh, respectively). The training is performed the same way in both cases, taking two trees with different variables and combining the results. We require bdtlow > 1.45 or bdthigh > 1.4. (Figure: score distributions for the E^-2.7-trained and E^-2-trained BDTs.)
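
A minimal sketch of the OR of the two selections, assuming per-event score arrays named bdtlow and bdthigh as on the slide; everything else is illustrative:

```python
import numpy as np

# Cut values quoted on the slide; an event is kept if either BDT passes.
BDTLOW_CUT = 1.45
BDTHIGH_CUT = 1.4

def passes_bdt(bdtlow, bdthigh):
    """Boolean mask of events kept by the combined BDT selection."""
    return (np.asarray(bdtlow) > BDTLOW_CUT) | (np.asarray(bdthigh) > BDTHIGH_CUT)
```

Taking the OR keeps events that either training favors, which recovers signal efficiency at low energies without sacrificing the hard-spectrum selection.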

  7. Boosted Decision Tree: We find this gives an effective area (right; all upgoing zeniths) which is uniformly above that achieved with the straight cuts. We keep

  8. IceTop veto: In the downgoing region, we find that showers whose muons pass through the in-ice detector are also detected in IceTop. Follow the in-ice track to the surface, calculate the time residual of the IceTop hits with respect to a shower front associated with the track, and count the number of IceTop hits compatible with being from the shower.

  9. IceTop veto: Top: the distribution of the number of IceTop hits vs. cos(zenith) for data. Bottom: the signal, with an E^-2 weight. IceTop information from minBias events is used to simulate uncorrelated noise. Two or more hits compatible with being from a shower veto the event; this retains 99% of signal.
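
A sketch of the veto logic described on the last two slides, assuming a planar shower front travelling along the track at the speed of light; the coordinate conventions, argument names, and residual window are illustrative assumptions, not the analysis' exact values:

```python
import numpy as np

C_LIGHT = 0.2998  # m/ns, vacuum speed of light

def count_shower_hits(track_pos, track_dir, track_time,
                      tank_pos, hit_times, window_ns=(-200.0, 800.0)):
    """Count IceTop hits compatible with an air shower along the in-ice track.

    track_pos  : (3,) point on the reconstructed track [m]
    track_dir  : (3,) unit vector along the muon direction
    track_time : time [ns] at which the track passes track_pos
    tank_pos   : (n, 3) IceTop tank positions [m]
    hit_times  : (n,) observed IceTop hit times [ns]
    """
    track_dir = np.asarray(track_dir, dtype=float)
    track_dir /= np.linalg.norm(track_dir)
    # A plane front moving at c along the track reaches each tank offset by
    # the projection of the tank position onto the track direction, over c.
    rel = np.asarray(tank_pos, dtype=float) - np.asarray(track_pos, dtype=float)
    t_expected = track_time + rel.dot(track_dir) / C_LIGHT
    residuals = np.asarray(hit_times, dtype=float) - t_expected
    return int(np.sum((residuals > window_ns[0]) & (residuals < window_ns[1])))

def icetop_veto(n_compatible_hits):
    """Veto rule from the slide: two or more compatible hits reject the event."""
    return n_compatible_hits >= 2
```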

  10. IceTop veto: As in the IC40 PS analysis, we use an energy cut to select a constant number of events per unit solid angle in the downgoing region. The IT veto allows us to correspondingly loosen the energy cuts, from those shown above left (no veto) to those below left, where the hit multiplicity < 2 cut is used.
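
The constant-density selection can be implemented band by band in cos(zenith): equal-width cos(zenith) bands subtend equal solid angle, so keeping the same number of highest-energy events in each band gives a constant rate per unit solid angle. A sketch, with the band count and target count purely illustrative:

```python
import numpy as np

def energy_thresholds(cos_zen, log_energy, n_bands=20, per_band_target=1000):
    """Per-band log10(E) thresholds keeping a fixed count per solid angle."""
    edges = np.linspace(0.0, 1.0, n_bands + 1)  # downgoing: cos(zenith) in (0, 1]
    thresholds = np.full(n_bands, -np.inf)      # default: keep everything
    for i in range(n_bands):
        in_band = (cos_zen > edges[i]) & (cos_zen <= edges[i + 1])
        e_sorted = np.sort(log_energy[in_band])
        if e_sorted.size > per_band_target:
            # Threshold such that exactly per_band_target events lie above it.
            thresholds[i] = e_sorted[e_sorted.size - per_band_target]
    return edges, thresholds
```

With the IT veto removing most of the coincident-shower background first, each band's threshold can sit lower for the same kept count, which is the loosening described above.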

  11. Comparison of BDT to Straight Cuts

  12. BDT vs Straight Cuts: We find that the BDT yields better sensitivity than the straight cuts.

  13. Systematic Errors: Dedicated signal-simulation datasets with varied ice properties and DOM sensitivities are needed. These will be used to determine the error on the signal efficiency.

  14. Combining Datasets: The challenge in combining datasets is how to split up the signal fraction. The spectrum is the same in all combined samples, but the ratio of effective areas is not constant in energy, so the relative contribution of each sample varies with the spectral index. The plot shows an example of the relative contributions of IC40 and IC59 at +16° declination versus spectral index.
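
The split follows from folding each sample's effective area with the assumed power law. A sketch, where the energy grid, effective-area arrays, and livetimes are placeholders:

```python
import numpy as np

def relative_contributions(energies, aeff_by_sample, livetimes, gamma):
    """Fraction of the expected signal in each sample for an E^-gamma flux.

    energies       : (n,) energy grid [GeV]
    aeff_by_sample : dict sample name -> (n,) effective area on that grid [m^2]
    livetimes      : dict sample name -> livetime [s]
    """
    flux = energies ** (-gamma)
    expected = {name: livetimes[name] * np.trapz(aeff * flux, energies)
                for name, aeff in aeff_by_sample.items()}
    total = sum(expected.values())
    return {name: n / total for name, n in expected.items()}
```

Evaluating this at, say, gamma = 2 and gamma = 3 gives different IC40/IC59 shares precisely because the effective-area ratio varies with energy.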

  15. Combining Datasets: Combining the datasets gives roughly a 50% improvement in sensitivity compared to using IC59 alone, for an E^-2 spectrum signal.

  16. Time-dependent analyses: As in IC40, we propose to use lightcurves from Fermi to test for flaring behavior; the lightcurve is used to test for neutrino emission above a best-fit photon-flux level, using lightcurves spanning both IC40 and IC59. The blazar 3C 454.3 lightcurve is shown below, with the IC40, IC59, and IC79 periods indicated.
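
A minimal sketch of turning a lightcurve plus a flux threshold into a normalized time weighting for the triggered search; the array names are placeholders, and in the actual fit the threshold is a free parameter:

```python
import numpy as np

def flare_time_weights(times, fluxes, threshold):
    """Zero weight below the threshold; above it, weight follows the lightcurve."""
    weights = np.where(np.asarray(fluxes) > threshold, fluxes, 0.0)
    norm = np.trapz(weights, times)
    return weights / norm if norm > 0 else weights
```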

  17. We tested the discovery potential for different true levels of emission, and find similar behavior in the combined sample as in IC40 alone. Objects with Fermi lightcurves: 23 in total (5 with flares in both sets, 11 with flares only in IC59, 7 with flares only in IC40). Lots of info and links here: http://wiki.icecube.wisc.edu/index.php/IC59_Time_Dependent_Analysis/mwl

  18. Untriggered flare search (I NEED TO BE FIXED): We propose using only the IC59 sample for the untriggered search, since the search finds discrete flares. The behavior for very short flares is similar to that seen with the IC40 sample, so the IC59 sample is still in the low-background regime.

  19. Unblinding Summary: We would like to unblind a skymap and source list using the combined IC40+59 sample for presentation at the ICRC, using the IC40 PS sample and the IC59 BDT-selected sample. Other searches, such as stacking catalogues, a galactic plane search, and an anisotropy search, will also be compiled. For time-dependent searches using multiwavelength and periodicity data, we plan to use lightcurves spanning IC40+59; a list of interesting sources is available. An untriggered flare search will provide the most significant flare of the IC59 data-taking period.

  20. backup

  21. Upgoing (straight) cuts: http://wiki.icecube.wisc.edu/index.php/IC-59_PS_Event_Selection (from L3): require the largest topologically split track to be upgoing; MPE rlogl < 8.3; MPE paraboloid sigma < 2.5 deg; MPE direct length > 150 m; MPE direct NCh > 5; Bayesian llh ratio > 30; at least one time- or geo-split reconstruction must succeed, and all split recos must have zenith > 80 deg.
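
A sketch of applying these cuts to a table of per-event quality variables; the column names are assumptions, and the split-reco requirement is taken as a precomputed boolean:

```python
import numpy as np

def passes_straight_cuts(ev):
    """Boolean mask of events passing the upgoing straight cuts."""
    return (
        (ev["mpe_rlogl"] < 8.3)
        & (ev["mpe_paraboloid_sigma_deg"] < 2.5)
        & (ev["mpe_direct_length_m"] > 150.0)
        & (ev["mpe_direct_nch"] > 5)
        & (ev["bayes_llh_ratio"] > 30.0)
        & ev["split_reco_ok"]  # time-/geo-split success and zenith > 80 deg
    )
```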

  22. Comparison of BDT to Straight Cuts The BDT keeps more events with worse angular resolution, so the PSF (top) and angular resolution (bottom) are 0.2 to 0.4 deg worse, depending on the energy of the events.

  23. Efficiency of the IT veto: Below is the fraction of events that pass the IT veto for data and for signal after quality cuts (mrlogl < 7.4, sigma < 1.5 deg) are applied. In the vertically downgoing region, 1% of data events pass the veto, compared to 99% of signal.

  24. Local Coordinate Effects: We show the local-coordinate functions for IC40 and IC59. The 'long end' vs. 'short end' effect from IC40 is mostly smoothed out, but effects from events reconstructed along lines of strings are still evident. (Figures: IC40 and IC59.)

  25. Data/MC comparisons at: http://icecube.wisc.edu/~aguilar/IC59_BDT_Final/ and http://www.icecube.wisc.edu/~aguilar/IC59_StraightCut_Final/
