
Presentation Transcript


1. First joint search of burst gravitational waves by the AURIGA-EXPLORER-NAUTILUS-VIRGO collaboration. Lucio BAGGIO for the VIRGO+bars Collaboration. GWDAW11, Potsdam, Dec 21st 2006.

2. Outline
• VIRGO+bars network: AURIGA, EXPLORER, NAUTILUS and VIRGO.
• Main methodology: coincidence search on trigger lists provided by each detector, with the expected accidental coincidences estimated by time shifts.
• Goal: set interpreted confidence intervals on the flux of gravitational wave signals coming from the galactic center (GC), drawn from the template class of damped sinusoids (DS).
• Detection efficiency is measured with software injections (MDC). The injected population has amplitudes derived from the assumption of elliptical polarization from a randomly oriented rotation axis of the source.
• Optimization of thresholds: for each template and each target amplitude, the best compromise between efficiency and false alarm rate is searched for, using a variable threshold for each detector in half-hour bins.
• Blind analysis: to avoid biasing the methods by feedback from looking at the results, a secret time offset was added to the detector times.

3. The VIRGO-bars network
24 hours of data taking during C7, starting from GPS time 810774700 (UTC 14 Sep 2005, 23:11:27).
[Figure: map of the detector sites: VIRGO, AURIGA, NAUTILUS, EXPLORER.]

4. Software injection details
• Damped sinusoids: 11 waveforms, to investigate several damping times and central frequencies.
• For each template, we generated N = 8640 signals (one every 10 s), with a uniformly random time jitter of +/- 0.5 s.
• Polarization is elliptical, distributed as for signals generated by rotating systems at the GC: random polarization angle ψ in [0, 2π], and random inclination angle ι such that cos ι is distributed uniformly in [−1, 1].
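As an illustration only (not the collaboration's actual injection code), a minimal Python sketch of how such a population could be drawn; the elliptical waveform below, with the plus and cross polarizations weighted by (1+cos²ι)/2 and cos ι, is an assumed reconstruction of the template whose formula did not survive the transcript:

```python
import numpy as np

rng = np.random.default_rng(0)

def injection_parameters(n=8640, spacing=10.0, jitter=0.5):
    """Draw injection times and source angles as described above:
    one injection every `spacing` seconds with uniform time jitter,
    psi uniform in [0, 2*pi], cos(iota) uniform in [-1, 1]."""
    t0 = spacing * np.arange(n) + rng.uniform(-jitter, jitter, n)
    psi = rng.uniform(0.0, 2.0 * np.pi, n)
    iota = np.arccos(rng.uniform(-1.0, 1.0, n))
    return t0, psi, iota

def damped_sinusoid(t, f0, tau, iota, h0=1.0):
    """Assumed elliptical DS waveform: the plus/cross amplitudes carry
    the usual rotating-system inclination factors (a hypothetical
    reconstruction, not the paper's exact convention)."""
    env = h0 * np.exp(-np.maximum(t, 0.0) / tau) * (t >= 0)
    h_plus = env * 0.5 * (1.0 + np.cos(iota) ** 2) * np.cos(2 * np.pi * f0 * t)
    h_cross = env * np.cos(iota) * np.sin(2 * np.pi * f0 * t)
    return h_plus, h_cross
```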

5. Observables provided by each detector
• AURIGA: WaveBurst (S. Klimenko et al., LIGO-T050222-00-Z) was successfully adapted to AURIGA data. The cluster S/N (close to the optimal one) was used as an indicator of the signal magnitude.
• NAUTILUS and EXPLORER: a single linear Wiener-Kolmogorov filter matched to the impulse response is applied to the output data. The impulse S/N was used as an indicator of the signal magnitude.
• VIRGO: PowerFilter is the chosen trigger generator. The normalized logarithmic power was used as an indicator of the signal magnitude.
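For intuition only, a minimal sketch of a filter matched to an impulse, under the strong simplifying assumption of white noise (the actual Wiener-Kolmogorov filter weights the data by the full noise spectrum; names here are illustrative):

```python
import numpy as np

def impulse_snr(data, template, noise_sigma):
    """Correlate the data with a unit-norm template: for white noise of
    standard deviation noise_sigma, the peaks of the output give the
    impulse S/N used to rank candidate events."""
    t = template / np.sqrt(np.sum(template ** 2))
    return np.correlate(data, t, mode="same") / noise_sigma
```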

6. Exchanged data: triggers + MDC at 10^-19 Hz^-1/2
Trigger counts: AURIGA N = 1413, EXPLORER N = 5614, NAUTILUS N = 8628, VIRGO N = 24241.
[Figure: exchanged trigger lists for each detector.]

7. Assessing the background of accidentals
• To assess the significance of the observed rates, we need an estimate of the rate of accidental coincidences.
• Ideally, the events at each detector would be distributed as independent Poisson processes, and the auto-correlogram of the events at each detector would be flat.
• Instead, because of non-Gaussianity, oscillations occur, for instance in VIRGO, which was under commissioning.
• However, the cross-correlogram is flat, so the accidental coincidences can be regarded as a Poisson process.
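A minimal sketch of the time-shift technique (function names and the circular-shift convention are illustrative, not the pipeline's actual code):

```python
import numpy as np

def count_coincidences(ta, tb, window):
    """Count events in list ta with at least one partner in tb
    closer than the coincidence window."""
    tb = np.sort(tb)
    lo = np.searchsorted(tb, ta - window)
    hi = np.searchsorted(tb, ta + window)
    return int(np.sum(hi > lo))

def accidental_background(ta, tb, window, obs_time, n_shifts=400):
    """Estimate the mean number of accidental coincidences by applying
    relative time shifts much larger than the coincidence window."""
    shifts = np.linspace(100 * window, obs_time - 100 * window, n_shifts)
    counts = [count_coincidences(ta, (tb + s) % obs_time, window)
              for s in shifts]
    return np.mean(counts), np.std(counts)
```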

8. A better view in the frequency domain

9. Optimization strategies
The real work of false alarm rate reduction is performed during the network analysis. The reduction is obtained by requiring the event magnitude to exceed a threshold higher than the exchanged minimum. The detection efficiency is also affected. The trade-off between background and efficiency depends on the goal:
• Detect a single GW event with high confidence, i.e. low false alarm probability: expected background counts << 1. In this case we measure the efficiency (which may turn out to be very low) determined by the chosen confidence.
• Define the best exclusion region (upper limit on rate vs amplitude): the ratio efficiency / background fluctuations is maximized. It is understood that we can estimate the mean background counts and subtract it from the total; that is why we are limited only by the background fluctuations, i.e. ~ sqrt(background).

10. Optimization in practice (1)
The time axis is subdivided into half-hour bins to account for variable efficiency (and possibly variable background rate). The optimization proceeds by incremental steps, each step changing the threshold on the event magnitude in one of the two detectors at one particular time bin. How much better does the total background get with the new threshold? How much worse the total efficiency? We need a benchmark to rank the optimization steps and decide the next best move. The benchmark is defined as the ratio (total efficiency) / sqrt(total background); a greedy sketch of this procedure is given after this slide.
• If the target is a low accidental coincidence probability, we stop the process only when the expected background has reached the desired level.
• If the target is better exclusion regions, we stop when the ratio efficiency / sqrt(background) starts to decrease.
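A minimal greedy sketch of such a bin-by-bin threshold optimization, under illustrative assumptions about the data layout (all names are hypothetical; for brevity the benchmark counts surviving events per detector, whereas the pipeline computes efficiency and background on the coincident sample):

```python
import numpy as np

def optimize_thresholds(inj_mags, bkg_mags, candidates):
    """Greedy hill-climb: at each step, raise the threshold in the one
    (detector, time bin) slot whose change most improves the benchmark
    efficiency / sqrt(background).

    inj_mags[d][b] / bkg_mags[d][b]: arrays of event magnitudes for
    detector d in time bin b, from injections / time-shifted accidentals.
    """
    thr = {(d, b): min(candidates) for d in inj_mags for b in inj_mags[d]}

    def benchmark(th):
        eff = sum(np.sum(inj_mags[d][b] >= t) for (d, b), t in th.items())
        bkg = sum(np.sum(bkg_mags[d][b] >= t) for (d, b), t in th.items())
        return eff / np.sqrt(bkg) if bkg > 0 else 0.0

    best = benchmark(thr)
    improved = True
    while improved:
        improved = False
        for key in list(thr):
            nxt = [t for t in sorted(candidates) if t > thr[key]]
            if not nxt:
                continue
            trial = dict(thr)
            trial[key] = nxt[0]          # try one step up in this slot
            score = benchmark(trial)
            if score > best:             # keep the move only if it helps
                thr, best, improved = trial, score, True
    return thr, best
```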

11. Optimization in practice (2)
[Diagram: detector data (1, 2) -> coincidences -> time lags and injections -> accidental background and efficiency -> optimization of thresholds -> unbiased characterization of background and efficiency.]
Only half of the time-delayed accidental coincidences and half of the injected signals are used to rank the bins and the thresholds; due to the limited statistics, the thresholds "overfit" the input data fluctuations. The other halves of the data are then used to give an unbiased estimate of background and efficiency for the confidence interval calculation.

12. Example: DS(914 Hz, 1 ms, 10^-19 Hz^-1/2)
[Figure: AURIGA and VIRGO panels for this template.]

13. Statistical Analysis (1)
The confidence intervals were set according to the confidence belt already used by IGEC-1 (see L. Baggio and G.A. Prodi, "Setting confidence intervals in coincidence search analysis", in "Statistical problems in particle physics, astrophysics and cosmology", R. Mount, L. Lyons and R. Reitmeyer eds., Stanford (2003) 238). The resulting confidence intervals are the supports which maximize the likelihood integral, chosen so as to guarantee (conservatively) a minimum frequentist coverage for all possible values of the source parameter. Here, however, we made an important modification: an additional null hypothesis test modifies the coverage at low signal rates (or no signal at all). Loosely speaking, the confidence intervals are "more confident" when including the null hypothesis than when bounding the expected number of GW events.
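In formulas, a hedged reconstruction of the construction described above (the notation is chosen here, not taken from the reference):

```latex
% For each outcome x, choose the interval I_x that maximizes the
% integral of the likelihood, subject to a minimum coverage C for every
% value of the source parameter mu (e.g. the mean number of GW events):
\[
  I_x \;=\; \arg\max_{I} \int_{I} p(x \mid \mu)\, d\mu
  \qquad \text{s.t.} \qquad
  \min_{\mu}\, P\big(\mu \in I_x \,\big|\, \mu\big) \;\ge\; C .
\]
```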

14. Modifying confidence belts
[Figure: confidence belts in the plane of GW event number vs coincidence counts, for background Nb = 7, comparing P{false alarm} < 5% with P{false alarm} < 0.1%; each belt shows its upper-limit and affirmative-claim regions, with P{wrong estimate} < 5% and the false assessment probability.]

15. Statistical Analysis (2)
• The coincidence search and optimization procedure were performed for different populations (f0, τ, hrss) and for many detector pairs, which amounts to a large number of tests (~100). This large trial factor increases the false claim probability.
• The effective overall confidence is defined as the probability of not having a single false claim in any of the performed tests. It is clearly linked to the confidence of the single trial; knowing this relation, we can compensate with a higher confidence on the single trial to achieve the desired global confidence.
• This relation may be estimated empirically by measuring the frequency of false claims in the time-delayed configurations: for each time lag we simulate the confidence intervals obtained across all optimized configurations, and check for false rejections of the null hypothesis.

16. Trial factor in practice
[Diagram: for each configuration of waveform, amplitude and detector pair (waveform x, amplitude x, detectors A1-A2; waveform y, amplitude y, detectors B1-B2; waveform z, amplitude z, detectors C1-C2), the coincidences at each time lag (lag 1 ... lag 5) feed the background and efficiency estimates and a null-hypothesis test; the zero-lag coincidences yield the confidence interval. Combining the tests over all configurations gives the global confidence.]

17. Trial factor in the 2-fold search
A lower trial factor is obtained by analyzing only the best detector pairs for each template/amplitude. In the two-fold coincidence search, it was possible to assess an empirical confidence of 98-99% on the results using 400 time-delayed configurations.
[Figure: global confidence vs single-trial confidence.]
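If the ~100 tests were independent, the relation between the two confidences would be the familiar one below (an idealization; the empirical time-lag estimate is used precisely because the tests are correlated):

```latex
\[
  C_{\mathrm{global}} \;=\; C_{\mathrm{single}}^{\,N}
  \qquad\Longrightarrow\qquad
  C_{\mathrm{single}} \;=\; C_{\mathrm{global}}^{\,1/N}.
\]
```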

18. Statistical Analysis (4)
Disclaimer: rejecting the null hypothesis amounts to claiming excess correlation in the observatory. This can be due to:
• GW signals;
• cross-correlated noise (not taken into account by the background measurement procedure with time lags);
• bad luck (statistically improbable but not impossible).
Excluding cross-correlations is not an easy task: when assessing the results, this duality in the physical interpretation should always be kept in mind. The probability of an accidental claim depends in the first place on the chosen threshold on the acceptable p-level of the statistical test. The p-levels themselves are affected by measurement errors (background coincidence counts) and systematics (edge effects, ergodic approximation), but normally we can properly account for them. The bottom line: if you find an excess of coincidences, what will you blame first, the glitchiness of the data and poor sensitivity, or that risky 90% confidence level threshold?

19. Results for the 2-fold coincidence searches (1)
• Goal: obtain the best exclusion regions.
• Since the residual accidental background is relatively high (~0.1/day), the detection of a single coincidence does not lead to a claim of excess correlation.
• It was possible to assess an empirical confidence of 98-99% on the results using 400 time-delayed configurations.
No excess of coincidences was found. The null hypothesis is confirmed at 99%.

20. Results for the 2-fold coincidence searches (2)
Upper limits at 95% coverage (preliminary).
[Figure: rate vs hrss exclusion regions.]
hrss = 10^-20 Hz^-1/2 would correspond to ~10^-3 M☉ radiated at 10 kpc.

21. 3-fold coincidence searches
• Goal: be able to issue a claim at 99.5% confidence on a single observed triple coincidence.
• The background for some configurations is low enough to reach this confidence; 400x400 time lags allow us to estimate such a low false alarm probability.
• To limit the trial factor, for each waveform only a small subset of MDC amplitudes (e.g. 10^-19, 5·10^-18 and 10^-18 Hz^-1/2) will be tried. The zero lag will then be analyzed with the optimization for the lowest signal amplitude that still allows an efficiency of at least 40%. Detector/template configurations which do not reach this minimal level for any of the chosen amplitudes will be discarded.
• The analysis and checks are in progress; results will appear soon.

22. Summary and final remarks (1)
To extract the maximum information from the collected data, we defined optimized thresholds which take into account the characteristics of the tested population (direction, amplitudes, polarization, ...) via the detection efficiency.
The injection of many waveforms and amplitudes multiplies the computation time by orders of magnitude. However, this is not intrinsic to the method, which only requires a hint of the efficiency variability (it could be provided by an empirical formula using the noise characterization and a model of the injected signals).
While only the magnitude at the output of the event search algorithm was used, in principle any test statistic provided with, or derived from, the coincidences as a function of time may be included in the optimization process.
No attempt was made to regularize the final output of the threshold optimization. The implemented optimization algorithm is very primitive, and the correspondence between microscopic states (threshold time series) and macroscopic observables (efficiency, background) has not been systematically investigated.

23. Summary and final remarks (2)
The issue which prevents the systematic use of this ad hoc optimization is the trial factor. The overall false claim probability can be controlled, but at the price of reduced sensitivity. Eventually, the affordable number of independent optimizations has to be limited to the most promising cases, based on a preliminary survey of the expected backgrounds and efficiencies.
Gabriele Vedovato (AURIGA) is implementing a different analysis scheme based on WaveBurst in association with cross-correlation tests. This semi-coherent all-sky network analysis is being preliminarily tested on AURIGA-VIRGO data and is giving promising results. A VIRGO note discussing the methodology was produced: VIR-NOT-FIR-1390-328.

24. EXTRA SLIDES

25. Background estimate
[Figure: coincidence counts vs time shift over +/- 7 min.]

26. Time coincidence
• Our ETGs are not "matched" to the DS templates.
• Time errors are dominated by systematic biases: the narrower the bandwidth, the greater the signal distortion.
• Example: AURIGA-VIRGO coincidences. The double peak is due to the multimodal time error of VIRGO.
• The coincidence "window" is Tw = 40 ms.
[Figure: time-residual histograms for f0 = 914 Hz, τ = 1 ms; f0 = 866 Hz, τ = 10 ms; f0 = 930 Hz, τ = 30 ms.]

27. Efficiency
• Single-detector efficiencies.
• For VIRGO, ~7 hrs out of 24 were excluded by epoch vetoes, hence an asymptotic efficiency of 70%.
[Figure: efficiency curves for the DS templates f0 = 930 Hz, τ = 30 ms; f0 = 866 Hz, τ = 10 ms; f0 = 914 Hz, τ = 1 ms.]

  28. optimal alignment Software injections details (1) We chose a family of DS with several damping times and central frequency combination in order to span evenly our parameter space. For all the classes, we set hrss = 10−23Hz −1/2 (this can be changed simply rescaling all the signals). • For each class, we generated a series of injection times (signal arrival at the center of the Earth), spaced by 10s + uniform random jitter of +/- 0.5s. • For each class and for each injection time we generated a random polarization angle  in [0, 2], and a random inclination angle  such that cos  is distributed uniformly in [−1,1]; Lucio BAGGIO - GWDAW11, Potsdam, Dec 21st 2006

29. Signals and astrophysical motivation
• fgw and τ are the frequency and damping time.
• hrss is the scale factor (defined precisely later).
• ψ and ι are geometrical factors (polarization angle and "source plane" inclination).
Such signals could be produced by the ringdown of a system excited in an l = m = 2 mode:
• BH-BH ring-down. Andersson N. and Kokkotas K., Mon. Not. Roy. Astron. Soc. 299 (1998); Kokkotas K.D. and Schmidt B.G., http://www.livingreviews.org/lrr-1999-2 (1999).
• f-mode of neutron stars. In this case the f-mode could produce a wave with variable frequency and damping time; to take this into account we did not use matched filtering. Ferrari V. et al., Mon. Not. Roy. Astron. Soc. 342 (2003) 629.
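The waveform formula itself did not survive the transcript; a plausible reconstruction consistent with the parameters listed above (the phase and normalization conventions are assumptions) is:

```latex
\[
  h_{+}(t) = h_{0}\,\frac{1+\cos^{2}\iota}{2}\;
             e^{-t/\tau}\cos(2\pi f_{gw} t),
  \qquad
  h_{\times}(t) = h_{0}\,\cos\iota\;
             e^{-t/\tau}\sin(2\pi f_{gw} t),
  \qquad t \ge 0,
\]
% with h_0 fixed by the root-sum-square amplitude:
\[
  h_{rss}^{2} \;=\; \int \big(|h_{+}(t)|^{2} + |h_{\times}(t)|^{2}\big)\, dt .
\]
```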

30. Astrophysical motivations
The chosen signals can be produced by:
• BH-BH ring-down. In this case, for a detection to be realistic, the energy release should be about 10^-3 - 10^-4 M☉ for a galactic event. Andersson N. and Kokkotas K., Mon. Not. Roy. Astron. Soc. 299 (1998); Kokkotas K.D. and Schmidt B.G., http://www.livingreviews.org/lrr-1999-2 (1999).
• f-mode of neutron stars. Here too the energy release must be high. Moreover, the f-mode could produce a wave with variable frequency and damping time, which should in any case sweep inside the observed frequency band. Ferrari V. et al., Mon. Not. Roy. Astron. Soc. 342 (2003) 629.
The wave polarisation should be linear for SN explosions, but elliptical for BH-BH coalescences (V. Ferrari, private communication).

31. Which physical parameters?
• Take the example of quasi-normal modes of black holes, and assume that an l = m = 2 mode dominates the signal.
• The mass M and the spin ratio j = J/M² map onto the frequency and damping time.
• Hence we are also looking at τ values which are incompatible with these modes.
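For reference, the widely used fits for the fundamental l = m = 2 quasi-normal mode (Echeverria 1989; the coefficients are quoted here from the literature and should be treated as indicative):

```latex
\[
  f_{qnm} \simeq \frac{c^{3}}{2\pi G M}\Big[1 - 0.63\,(1-j)^{0.3}\Big],
  \qquad
  Q \equiv \pi f_{qnm}\,\tau \simeq 2\,(1-j)^{-0.45},
\]
% so that a given pair (f_gw, tau) can be inverted for the implied (M, j).
```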

32. Interpretation of the limits found
We go back to the signal model. The hrss is just a spectral scale of the signal. From the definition of the energy flux distribution over angle and frequency, and with the signal model, the total radiated energy is easily computed. An hrss = 10^-20 Hz^-1/2 would correspond to ~10^-3 M☉ radiated at 10 kpc.
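The equations on this slide were lost in the transcript; a standard isotropic-emission estimate that reproduces the quoted number up to factors of order unity (the angular averaging convention is an assumption here) is:

```latex
\[
  E_{gw} \;\sim\; \frac{\pi^{2} c^{3}}{G}\; r^{2} f_{0}^{2}\, h_{rss}^{2}.
\]
```

For hrss = 10^-20 Hz^-1/2, f0 ≈ 900 Hz and r = 10 kpc ≈ 3.1×10^20 m, this gives E_gw ~ 3×10^43 J, i.e. a few 10^-4 M☉ c², consistent with the ~10^-3 M☉ quoted above to within the order-unity angular factors.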

33. Confidence Belt & Coverage
For each outcome x one should be able to determine a confidence interval I_x. For each possible μ, the measurements which lead to a confidence interval consistent with the true value have probability C(μ), i.e. 1 − C(μ) is the false dismissal probability.
[Diagram: confidence belt in the plane of the physical unknown vs the experimental data, showing the confidence interval and the coverage.]
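In symbols (with the notation chosen here for clarity), the coverage condition defining the belt reads:

```latex
\[
  P\big(\mu \in I_{x}\,\big|\,\mu\big) \;\ge\; C(\mu)
  \qquad \text{for every } \mu ,
\]
% so that 1 - C(mu) is the false dismissal probability at that mu.
```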
