
Continuous simulation of Beyond-Standard-Model processes with multiple parameters






Presentation Transcript


  1. Continuous simulation of Beyond-Standard-Model processes with multiple parameters Jiahang Zhong (University of Oxford*), Shih-Chang Lee (Academia Sinica). ACAT 2011, 5-9 September, London. * Formerly at Academia Sinica and Nanjing University

  2. Motivation • Many Beyond-Standard-Model (BSM) processes are defined by more than one free parameter • Masses of hypothetical particles • Coupling constants • … • Grid scan • Scan the parameter space with grid points • Simulate a sample of events at each point [figure: grid of points in the (Var1, Var2) plane]

  3. Motivation • The difficulties of the grid-scan approach: • Curse of dimensionality • N_points ~ N^d • Hard to go beyond 2D • Costly for finer granularity
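The N_points ~ N^d scaling can be made concrete with a few lines of arithmetic (the 20 points per axis and 10k events per point below are illustrative numbers, not figures from the talk):

```python
# Illustration of the "curse of dimensionality": scanning d parameters
# with n_per_axis grid values each needs n_per_axis**d sample points.
def grid_points(n_per_axis: int, d: int) -> int:
    """Number of grid points for a d-dimensional scan."""
    return n_per_axis ** d

for d in (1, 2, 3):
    # with ~10k simulated events per point, total events = points * 10_000
    print(d, grid_points(20, d), grid_points(20, d) * 10_000)
```

Already at d = 3 the scan needs 8,000 points, i.e. tens of millions of fully simulated events, which is why the slides say a grid scan is "hard to go beyond 2D".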

  4. Motivation • The difficulties of the grid-scan approach: • Curse of dimensionality • N_points ~ N^d • Hard to go beyond 2D • Costly for finer granularity • Large statistics required • Samples at different points are treated independently • Considerable statistics needed within each sample (~10k events per point) [figure: pass/fail regions in the (Var1, Var2) plane]

  5. Motivation • The difficulties of the grid-scan approach: • Curse of dimensionality • N_points ~ N^d • Hard to go beyond 2D • Costly for finer granularity • Large statistics required • Samples at different points are treated independently • Considerable statistics needed within each sample • Discreteness • Considerable space between points (from ~100 GeV up to ~TeV) • Smoothing/interpolation needed • Consequent systematic uncertainties

  6. Motivation • Grid scan: • Curse of dimensionality • Large statistics needed • Discreteness • The aims of Continuous MC: • Handle a multivariate parameter space • Simulate fewer events • Continuous estimation of the signal yield over the parameter space

  7. Motivation • The purpose of multivariate BSM simulation is to estimate signal yields over the parameter space • Yield: N(x) = L · σ(x) · ε(x) • L: luminosity; independent of x (the free parameters) • σ: cross-section and branching ratio; easy to calculate with event generators • ε: detector acceptance and offline efficiency; needs a large amount of expensive detector simulation • Our method therefore focuses on easing the estimation of ε
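The yield formula is simple enough to state as code; the luminosity, cross-section, and efficiency values below are purely illustrative placeholders, not numbers from the analysis:

```python
# Sketch of the yield formula N(x) = L * sigma(x) * eps(x) at one
# parameter point x. Units: luminosity in fb^-1, cross-section in fb.
def expected_yield(lumi_fb: float, xsec_fb: float, eff: float) -> float:
    """Expected number of selected signal events."""
    return lumi_fb * xsec_fb * eff

n_sig = expected_yield(1.0, 500.0, 0.2)  # about 100 expected events
```

Since L is a constant and σ(x) comes cheaply from the generator, ε(x) is the only factor that requires expensive detector simulation, which is what the continuous-MC method targets.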

  8. The procedure • Event generation • Grid scan: O(10^d) space points, O(10k) events per point • Continuous MC: O(100k) space points, O(1) events per point [figures: regular grid points vs. randomly scattered points in the (Var1, Var2) plane]

  9. The procedure • A Bayesian Neural Network (BNN) is used to fit the efficiency ε • Desirable features of NN fitting: • Non-parametric modelling • Smooth over the parameter space • Unbinned fitting • Suffers less from dimensionality • Captures correlations between the variables [figure: unbinned fitting vs. binned histogram]

  10. The procedure • Bayesian implementations of NN further provide: • Automatic complexity control of the NN topology during training • Probabilistic output • Uncertainty estimation of the output, based on the p.d.f. of the NN parameters, accounting for: • Statistical fluctuation of the training sample • Choice of NN topology • Goodness of fit at a given space point x

  11. Demo • Production of a right-handed W boson and a Majorana neutrino • Di-lepton final state • 2 leptons (e, μ) • pT > 20 GeV, |η| < 2.5 • cone20/pT < 0.1 • Two free parameters • M(WR) in [500 GeV, 1500 GeV] • M(NR) in [0, M(WR)] • Both affect the cross-section and the efficiency
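The three cuts above translate directly into a selection function. This is a hedged sketch only: the event representation (a list of lepton dicts with `flavour`, `pt`, `eta`, `cone20` keys) is an assumption for illustration, not the analysis framework's actual data model.

```python
# Hypothetical event model: each lepton is a dict with kinematic fields.
def passes_selection(leptons) -> bool:
    """At least 2 leptons (e or mu) with pT > 20 GeV, |eta| < 2.5,
    and isolation cone20/pT < 0.1, as listed on the slide."""
    good = [l for l in leptons
            if l["flavour"] in ("e", "mu")
            and l["pt"] > 20.0
            and abs(l["eta"]) < 2.5
            and l["cone20"] / l["pt"] < 0.1]
    return len(good) >= 2

event = [{"flavour": "e",  "pt": 45.0, "eta":  0.3, "cone20": 1.0},
         {"flavour": "mu", "pt": 30.0, "eta": -1.2, "cone20": 2.0}]
# passes_selection(event) -> True: both leptons pass all three cuts
```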

  12. Demo • Continuous simulation • Generate 100k events, each with a random { M(WR), M(NR) } • Put each event through the selection criteria and assign target value 1/0 if it passes/fails • Feed all events to a BNN, with { M(WR), M(NR) } as the input variables • Use the trained BNN as a function providing ε ± σε • Reference grid scan • A grid with 100 GeV steps in M(WR) and 50 GeV steps in M(NR) (171 samples in total) • Sufficient statistics in each sample to obtain precise reference values
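The workflow above can be sketched end to end in a few lines. This is a toy stand-in, not the talk's implementation: the real analysis used full detector simulation and the BNN in TMVA/ROOT, whereas here a synthetic pass probability and a plain logistic fit (trained on the same Bernoulli likelihood) play those roles, and the event count is scaled down.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events = 20_000                      # scaled down from the 100k in the talk

# 1) one random parameter point per event: M(WR) in [500,1500], M(NR) in [0, M(WR)]
m_wr = rng.uniform(500.0, 1500.0, n_events)
m_nr = rng.uniform(0.0, m_wr)

# 2) toy stand-in for detector simulation + cuts (efficiency shape is invented)
p_true = 0.5 * (m_wr - m_nr) / m_wr
t = (rng.random(n_events) < p_true).astype(float)   # target: 1 = pass, 0 = fail

# 3) stand-in for the BNN: logistic fit by gradient descent on the Bernoulli NLL
X = np.column_stack([np.ones(n_events), m_wr / 1000.0, m_nr / 1000.0])
w = np.zeros(3)
for _ in range(3000):
    y = 1.0 / (1.0 + np.exp(-X @ w))                # network output = pass probability
    w -= 0.5 * X.T @ (y - t) / n_events             # gradient step

# 4) use the fit as a continuous efficiency function over the whole plane
def eff(mwr: float, mnr: float) -> float:
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1] * mwr / 1000.0 + w[2] * mnr / 1000.0)))
```

The key point the sketch preserves: every event carries its own parameter point, and the fitted model returns a smooth ε at arbitrary { M(WR), M(NR) }, with no interpolation between grid samples.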

  13. Demo [figures: the BNN-fitted efficiency; the reference from the grid scan]

  14. Demo The difference between the fitted values and the reference values

  15. Demo The uncertainty estimated by the BNN

  16. Demo The real deviations vs. the estimated uncertainties (Nσ)

  17. Summary • A new approach to simulating multivariate BSM processes • More space points, fewer events • BNN fitting yields a smooth estimate of the signal yield • Performance tested by: • The deviation between BNN and reference values • This deviation vs. the BNN uncertainty • Limitation: the assumption of a smooth distribution • Not sensitive to local abrupt changes • Reduced performance across physics boundaries

  18. The End. Thank you!

  19. Backup: Links • More detailed documentation of this method: http://arxiv.org/abs/1107.0166 • The Bayesian Neural Network in TMVA/ROOT: http://www.sciencedirect.com/science/article/pii/S0010465511002682

  20. Backup: How does BNN fitting work • Not a black-box discriminator, but a white-box non-parametric fitting tool • A multivariate function y(x) • Generic function approximator (analogous to a polynomial in 1D) • Training is an unbinned MLE fit • y: NN output, a probability in [0,1] • t: target value, 1 = pass, 0 = fail
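The unbinned maximum-likelihood fit referred to here can be written out explicitly; the formula below is a standard reconstruction from the definitions of y and t on the slide (the original equation image did not survive extraction), not text copied from the deck:

```latex
\mathcal{L}(\mathbf{w}) \;=\; \prod_{i=1}^{N} y(\mathbf{x}_i;\mathbf{w})^{\,t_i}
  \left[1 - y(\mathbf{x}_i;\mathbf{w})\right]^{1-t_i},
\qquad
-\ln\mathcal{L}(\mathbf{w}) \;=\; -\sum_{i=1}^{N}\left[t_i \ln y_i + (1-t_i)\ln(1-y_i)\right]
```

Maximizing this Bernoulli likelihood over the network weights w is exactly the cross-entropy training of the network, with each event entering individually rather than through binned counts.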

  21. Backup: Bayesian implementation of NN (I) • Probability fitting • Unbinned fitting • Full use of every event • Extrapolation/interpolation • Fit y as a probability function • Bernoulli likelihood [figure: histogram vs. BNN]

  22. Backup: Bayesian implementation of NN (II) • Uncertainty estimation • Training: • Most probable value w_MP • P(w|D): the probability of other w • Prediction: • Probability • Uncertainty of y • Avoids excessive extrapolation (non-trivial for multivariate analysis) [figure: histogram vs. BNN]
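The prediction step sketched here averages the network output over the weight posterior P(w|D); the expressions below are the standard Bayesian forms implied by the bullets (a reconstruction, not equations taken verbatim from the slide):

```latex
\bar{y}(\mathbf{x}) \;=\; \int y(\mathbf{x};\mathbf{w})\, p(\mathbf{w}\mid D)\, \mathrm{d}\mathbf{w},
\qquad
\sigma_y^2(\mathbf{x}) \;=\; \int \left[y(\mathbf{x};\mathbf{w}) - \bar{y}(\mathbf{x})\right]^2 p(\mathbf{w}\mid D)\, \mathrm{d}\mathbf{w}
```

The spread of the posterior thus yields the per-point uncertainty σε quoted on slide 12, growing automatically where the training sample constrains the weights poorly, which is what suppresses excessive extrapolation.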

  23. Backup: Bayesian implementation of NN (III) • Overtraining is possible due to excessive complexity of the NN • Early stop • Use half the input sample as a monitor • Manual decision of when to stop the fit • Regulator • Prior knowledge that a "simpler" model is preferred • Adaptive during training • Saves the monitor sample! [figure: early stop vs. regulator]
