
Artificial Intelligence Parallel Sessions II, III & IV




Presentation Transcript


  1. Artificial Intelligence Parallel Sessions II, III & IV
  Harrison B. Prosper, Rapporteur
  Florida State University
  ACAT 2000, Fermilab

  2. Outline
  • Introduction
  • Highlights
  • Comments and Conclusions

  3. Introduction
  • Sessions II, III, IV
  • 9 NN talks:
  • Selecting WW (DELPHI)
  • Searching for single top (DØ)
  • Measuring the top mass
  • Searching for the Higgs
  • Calorimeter energy estimation (ATLAS)
  • Electron/jet discrimination (ATLAS)
  • Particle identification (DØ)
  • Tracking in vertex detector (HERA-B)
  • Vertex finding (ZEUS)
  • 4 other talks: job scheduling, confidence bounds, rule induction, kernel-based discrimination

  4. Selecting WW Events in DELPHI (Müller)
  • Goal
  • To measure the WW → 4-parton cross-section
  • Signal: e+e− → W+W− → 4 jets
  • Backgrounds: e+e− → Z0/γ or Z0Z0 → 4 jets
  • Method
  • (13,7,1) feed-forward neural network (sketched below)
  • Trained using stochastic minimization (back-propagation)
  • Results
  • Selection efficiency times purity increases from 63.5% (conventional one-dimensional cuts) to 69% (NN)
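To make the architecture concrete, here is a minimal sketch of a (13,7,1) feed-forward network trained by stochastic back-propagation, in the spirit of the DELPHI analysis. This is not the DELPHI code: the NumPy implementation, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture from the talk: 13 inputs, 7 hidden units, 1 output.
W1 = rng.normal(0, 0.5, (7, 13)); b1 = np.zeros(7)
W2 = rng.normal(0, 0.5, (1, 7));  b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)                 # hidden layer
    y = sigmoid(W2 @ h + b2)                 # output in (0,1): signal-likeness
    return h, y

def sgd_step(x, t, lr=0.1):
    """One stochastic back-propagation step on a single event
    (input vector x, target t = 1 for WW signal, 0 for background)."""
    h, y = forward(x)
    # Gradients of the cross-entropy loss with a sigmoid output.
    delta2 = y - t                           # output-layer error
    delta1 = (W2.T @ delta2) * h * (1 - h)   # error back-propagated to hidden layer
    W2 -= lr * np.outer(delta2, h); b2 -= lr * delta2
    W1 -= lr * np.outer(delta1, x); b1 -= lr * delta1

# Toy usage: train on random "events" (13 kinematic variables each).
for _ in range(1000):
    t = rng.integers(0, 2)
    x = rng.normal(t, 1.0, 13)               # crude separation between classes
    sgd_step(x, t)
```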

  5. Search for Single Top (DØ) (Lev Dudko)
  • Goal
  • To extract a signal starting from an S/B ratio of ~1:500
  • 5 backgrounds
  • 2 processes
  • 2 decay modes
  • Method
  • Construct a tree of networks trained using the program MLPfit 1.4 (see the sketch below)
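The talk does not spell out the tree construction, so the following is only a hedged sketch of the general idea: one small network per background process, each trained to separate the signal from that background, with an event kept only if every node calls it signal-like. The background names, scikit-learn classifier, and toy data are hypothetical stand-ins for MLPfit and the real samples.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Toy stand-ins for the simulated samples; in the analysis these would be
# kinematic variables for single-top signal and each background process.
def toy_sample(shift, n=2000, d=8):
    return rng.normal(shift, 1.0, (n, d))

signal = toy_sample(0.8)
backgrounds = {"Wbb": toy_sample(0.0), "ttbar": toy_sample(0.2)}  # names illustrative

# One network per signal/background pairing, as in a tree of networks:
# each node learns to separate the signal from a single background.
nodes = {}
for name, bkg in backgrounds.items():
    X = np.vstack([signal, bkg])
    y = np.r_[np.ones(len(signal)), np.zeros(len(bkg))]
    nodes[name] = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500).fit(X, y)

def passes_tree(x, cut=0.7):
    """An event survives only if every node calls it signal-like."""
    return all(net.predict_proba(x.reshape(1, -1))[0, 1] > cut
               for net in nodes.values())
```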

  6. Search for Single Top (DØ) (cont)
  • S/B ~ 1/23 after NN cuts
  • Yellow band: model uncertainty
  • Triangles: observed distribution, using an independent data set not used in training

  7. Measuring the Top Mass (Suman B. Beri)
  • Goal
  • Reduce the uncertainty in the top quark mass to << 5 GeV
  • Run II promises ~100× more data relative to Run I
  • Smaller statistical errors
  • Therefore, smaller systematic errors are needed as well
  • Studies
  • Explore the use of robust variables in NNs

  8. Measuring the Top Mass (cont)
  [Figure caption: the end-point occurs at the top mass if the b quark and the lepton are correctly paired]
  • Find variables that are less sensitive to systematic effects such as the jet energy scale.
  • For example:
  • x1 = f(e, b1)
  • x2 = f(μ, b2)
  • x3 = f(e, b2)
  • x4 = f(μ, b1)
  • where f is a function of the lepton–b-jet pair [definition shown as an equation on the slide]
  • Combine into a single mass-dependent function:
  • Y = network(x1, x2, x3, x4)

  9. Measuring the Top Mass (cont, 2)
  [Four panels of the network output y (GeV) for different generated top masses: 160 GeV → 169 ± 23, 170 GeV → 175 ± 25, 180 GeV → 181 ± 24, 190 GeV → 186 ± 25]
  • Restriction of y to the finite range [100, 250] GeV causes distortions at low and high mass
  • Need a way to correct for the distortions
  • One idea: use the NN output distributions P(y|mt) in Bayes' theorem and take the mass estimate to be arg max P(mt|y) (a sketch follows)
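A minimal sketch of the proposed correction, assuming Gaussian stand-ins for the real templates P(y|m): build the output distribution for each generated mass, evaluate the likelihood of the observed outputs against each template, and take the arg max of the posterior. The template shapes, binning, and prior here are illustrative, not taken from the talk.

```python
import numpy as np

# Hypothetical templates: P(y | m) for a few generated top masses,
# stored as normalized histograms of the network output y (in GeV).
masses = np.array([160.0, 170.0, 180.0, 190.0])
bins = np.linspace(100.0, 250.0, 31)      # output restricted to [100, 250] GeV

def make_template(mean, sigma=24.0, n=100000, rng=np.random.default_rng(2)):
    y = rng.normal(mean, sigma, n)
    hist, _ = np.histogram(y, bins=bins, density=True)
    return hist

templates = {m: make_template(m) for m in masses}  # stand-in for the real P(y|m)

def log_posterior(y_values, prior=None):
    """log P(m | y) up to a constant, for a set of observed outputs y.
    With a flat prior this is just the log-likelihood over the template masses."""
    if prior is None:
        prior = np.ones(len(masses))
    idx = np.clip(np.digitize(y_values, bins) - 1, 0, len(bins) - 2)
    log_post = np.log(prior)
    for i, m in enumerate(masses):
        log_post[i] += np.sum(np.log(templates[m][idx] + 1e-12))
    return log_post

y_obs = np.random.default_rng(3).normal(175.0, 24.0, 50)  # toy "measured" outputs
m_hat = masses[np.argmax(log_posterior(y_obs))]           # arg max P(m|y), as on the slide
```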

  10. Finding the Higgs at the Tevatron (Silvia Tentindo-Repond)
  • Studies in the Run II SUSY/Higgs Workshop have shown that NN methods can yield a significant reduction (a factor of ~2) in the integrated luminosity required to discover a Higgs boson with mass < 130 GeV.
  • NNs show great promise for channel-independent b-tagging.
  • Preliminary results suggest they may be useful in improving the di-jet invariant mass resolution.
  • Studies are underway to test ideas about channel-dependent b-tagging and to search systematically for better event variables for use with NNs and other multivariate methods.

  11. Calorimeter Energy Estimation in ATLAS (J. Seixas)
  • Goal
  • To reduce the (10.6%) non-linearity in the calorimeter response while maintaining acceptable energy resolution (σE/E ~ 50%/√E)
  • Construct a network-based energy estimation function using test-beam data
  • Method
  • Training done in two stages (sketched below):
  • In the first stage, the NN target outputs are set to the beam energy
  • In the second stage, the targets are set to the linear sum of the calorimeter cells
  • Results
  • Non-linearity < 2.8%, with σE/E ~ 54%/√E
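A hedged sketch of the two-stage training, using scikit-learn's MLPRegressor with warm_start rather than the authors' tooling; the cell count, toy "test-beam" data, and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Toy test-beam-like data: per-event calorimeter cell energies (hypothetical).
n_events, n_cells = 5000, 12
cells = rng.gamma(2.0, 1.0, (n_events, n_cells))
e_beam = np.full(n_events, 100.0)        # nominal beam energy, GeV
linear_sum = cells.sum(axis=1)           # plain sum of cell energies

net = MLPRegressor(hidden_layer_sizes=(16,), warm_start=True, max_iter=300)

# Stage 1: target is the known beam energy (fixes the overall scale).
net.fit(cells, e_beam)

# Stage 2: continue training with the linear sum of cells as target
# (warm_start keeps the stage-1 weights as the starting point).
net.fit(cells, linear_sum)

e_est = net.predict(cells)               # calibrated energy estimate
```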

  12. Electron/Jet Discrimination in ATLAS (J. Seixas)
  • Goal
  • To discriminate between electrons and jets
  • Method
  • Exploit the segmentation of the calorimeter and pre-process the data to reduce the dimensionality from 496 cells: extract the most important features from each segment by principal component analysis (PCA)
  • The n PCA components form the input to an (n, 33, 1) network (see the sketch below)
  • Results
  • With 33 components (<< 496), obtain 99% electron efficiency with < 1% of jets misidentified as electrons
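A compact sketch of the PCA-plus-network chain. One simplification: the talk applies PCA segment by segment, while this sketch compresses all 496 cells at once; the toy data and network size are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

# Toy stand-in for the 496 calorimeter cells per candidate (hypothetical data).
X_elec = rng.normal(1.0, 1.0, (3000, 496))
X_jet = rng.normal(0.0, 1.0, (3000, 496))
X = np.vstack([X_elec, X_jet])
y = np.r_[np.ones(3000), np.zeros(3000)]

# PCA compresses the 496 cells to n components, which feed an (n, 33, 1)-style net.
clf = make_pipeline(PCA(n_components=33),
                    MLPClassifier(hidden_layer_sizes=(33,), max_iter=300))
clf.fit(X, y)
p_electron = clf.predict_proba(X[:5])[:, 1]  # electron probability per candidate
```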

  13. Particle Identification at DØ (Dhiman Chakraborty)
  • Goal
  • To improve significantly on the electron and tau identification available during Run I, which was based on Fisher discriminants constructed from 41×41 matrices (see the Fisher sketch below)
  • Method
  • Train a (7,16,1) network on features that discriminate between electrons and jets
  • Results
  • Factor of ~10 improvement in the S/B ratio for electrons and ~2 for taus
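For reference, the Run I baseline technique, a Fisher linear discriminant, fits in a few lines: the weight vector is w = S_W⁻¹(μ_sig − μ_bkg), with S_W the within-class scatter. The seven toy variables below are hypothetical, not DØ's actual inputs.

```python
import numpy as np

def fisher_discriminant(X_sig, X_bkg):
    """Fisher linear discriminant: w = S_W^{-1} (mu_sig - mu_bkg)."""
    mu_s, mu_b = X_sig.mean(axis=0), X_bkg.mean(axis=0)
    # Within-class scatter: sum of the two class covariance matrices.
    S_w = np.cov(X_sig, rowvar=False) + np.cov(X_bkg, rowvar=False)
    return np.linalg.solve(S_w, mu_s - mu_b)

rng = np.random.default_rng(6)
X_e = rng.normal(0.5, 1.0, (1000, 7))   # 7 identification variables (toy)
X_j = rng.normal(0.0, 1.0, (1000, 7))
w = fisher_discriminant(X_e, X_j)
score = X_e @ w                          # larger score = more electron-like
```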

  14. Vertex Reconstruction in ZEUS (Erez Etzion)
  • Goal
  [Slide shows example event displays, captioned "Keep these events" and "Reject these!"]

  15. Vertex Reconstruction in ZEUS (cont)
  Hierarchical clustering (a Hough-style sketch of the idea follows):
  • NN layer 1 (large retina with ~100,000 neurons): the receptive fields of the neurons give the (x, y) positions of hits
  • NN layer 2 (segment finder): gives the segment orientation
  • NN layer 3 (arc finder): gives the arc (κ, θ, ring)
  • NN final layer (track finder)
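The retina idea (layers 1 and 2) resembles a discretized parameter-space accumulator: each cell acts as a "neuron" whose receptive field is the set of hits compatible with one track hypothesis. The Hough-style straight-segment finder below is only an illustration of that mechanism, not the ZEUS implementation.

```python
import numpy as np

def segment_finder(hits, n_theta=64, n_rho=64, rho_max=10.0):
    """Hough-style accumulator: each (theta, rho) cell acts like a neuron
    whose receptive field is the set of hits on one straight segment."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in hits:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # normal-form line parameters
        j = ((rho + rho_max) / (2 * rho_max) * n_rho).astype(int)
        ok = (j >= 0) & (j < n_rho)
        acc[np.arange(n_theta)[ok], j[ok]] += 1          # each hit "votes"
    i, j = np.unravel_index(acc.argmax(), acc.shape)     # most activated "neuron"
    return thetas[i], -rho_max + (j + 0.5) * (2 * rho_max) / n_rho

# Toy usage: 20 hits along the line y = x plus 10 noise hits.
rng = np.random.default_rng(7)
hits = [(t, t) for t in np.linspace(0, 5, 20)] + \
       [tuple(rng.uniform(0, 5, 2)) for _ in range(10)]
theta, rho = segment_finder(hits)
```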

  16. Rule Induction (Nikita Stepanov)
  • Goal
  • To create a system that can report how it arrived at its decision; the output of such a system is essentially a decision tree (see the sketch below)
  • Method
  • Manipulate (and evolve) symbolic structures
  • Test case (MC study)
  • Improving b-tagging in pp̄ → tt̄H → 4b + lepton events
  • Results
  • Tagging efficiency per jet ~62%
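The talk's system evolves symbolic structures; as a stand-in, here is a sketch of the end product, a decision tree whose rules can be printed, using scikit-learn's DecisionTreeClassifier and export_text. The b-tagging feature names and toy data are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(8)

# Toy b-jet vs light-jet features (hypothetical: impact-parameter significance,
# secondary-vertex mass, track multiplicity).
X_b = rng.normal(1.0, 1.0, (2000, 3))
X_l = rng.normal(0.0, 1.0, (2000, 3))
X = np.vstack([X_b, X_l])
y = np.r_[np.ones(2000), np.zeros(2000)]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The point of rule induction: the classifier can report its decision rules.
print(export_text(tree, feature_names=["ip_sig", "svx_mass", "n_tracks"]))
```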

  17. Looking for Instantons at HERA (B. Koblitz)
  • Goal
  • Separate instanton events from deep inelastic scattering events. Problem: S/B starts at about ~1/100, and a fast way to experiment with many sets of variables is needed.
  • Method
  • Classify each point (event) using kernel density estimation. The key idea is to use an O(n log n) range-searching algorithm to find the points in the neighborhood of each event, and to classify the event according to the value of the discriminant D (see the sketch below).
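A minimal sketch of the method using a KD-tree (scipy's cKDTree) for the range searching and a box kernel for the density estimate. The radius, variables, and toy samples are assumptions, and the discriminant D = n_s/(n_s + n_b) is the natural choice here, not necessarily the talk's exact definition.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(9)

# Toy training samples in the event-variable space (hypothetical variables).
sig = rng.normal(1.0, 1.0, (5000, 4))   # instanton-like Monte Carlo
bkg = rng.normal(0.0, 1.0, (5000, 4))   # standard DIS Monte Carlo

# Build the range-search trees once: O(n log n) construction, after which each
# neighborhood query is fast, so new variable sets are cheap to try out.
tree_s, tree_b = cKDTree(sig), cKDTree(bkg)

def discriminant(x, r=0.5):
    """Box-kernel density estimate: D = n_s / (n_s + n_b), counted
    inside a ball of radius r around the event x."""
    n_s = len(tree_s.query_ball_point(x, r))
    n_b = len(tree_b.query_ball_point(x, r))
    return n_s / max(n_s + n_b, 1)

event = rng.normal(0.8, 1.0, 4)
D = discriminant(event)                  # close to 1 = instanton-like
```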

  18. Looking for Instantons at HERA (cont)
  • Results
  • At least as good as a NN, but building the search tree is much faster than training a network of comparable performance: 20 minutes vs. 4 hours on a Linux system!

  19. Comments and Conclusions
  • All classification methods are approximations to the solution of the same problem: the calculation of the class posterior probabilities Pr(class|Data), written out below.
  • Experience suggests that no method is uniformly the best over the set of all possible problems. Therefore, one should embrace them all.
  • That said, it is useful to have a default method, and neural networks seem to be emerging as the method of choice for classification and functional approximation.
  • It has been amply demonstrated at this workshop, in the many excellent talks, that neural networks work very well in practice. But it is necessary to exercise a fair measure of non-artificial intelligence to render that statement true!
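For a two-class problem, the posterior being approximated is, by Bayes' theorem,

```latex
P(s \mid x) = \frac{p(x \mid s)\, P(s)}
                   {p(x \mid s)\, P(s) + p(x \mid b)\, P(b)}
```

where s denotes signal and b background. A sufficiently flexible network trained on an equal mixture of signal (target 1) and background (target 0) converges to exactly this quantity with P(s) = P(b) = 1/2, which is why the NN outputs in the talks above can be treated as estimates of class probabilities.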
