
The Magnificent EMM


Presentation Transcript


  1. The Magnificent EMM Margaret H. Dunham Michael Hahsler, Mallik Kotamarti, Charlie Isaksson CSE Department Southern Methodist University Dallas, Texas 75275 lyle.smu.edu/~mhd mhd@lyle.smu.edu This material is based upon work supported by the National Science Foundation under Grant No IIS-0948893.
  2. Objectives/Outline EMM Overview EMM + Stream Clustering EMM + Bioinformatics
  3. Objectives/Outline EMM Overview Why What How EMM + Stream Clustering EMM + Bioinformatics
  4. Lots of Questions Why don’t data miners practice what they preach? Why is training usually viewed as a one time thing? Why do we usually ignore the temporal aspect of data streams? Continuous Learning Interleave learning & application Add time to online clustering
  5. MM A first order Markov Chain is a finite or countably infinite sequence of events {E1, E2, …} over discrete time points, where Pij = P(Ej | Ei), and at any time the future behavior of the process is based solely on the current state. A Markov Model (MM) is a graph with m vertices or states, S, and directed arcs, A, such that: S = {N1, N2, …, Nm}, and A = {Lij | i, j ∈ {1, 2, …, m}}. Each arc, Lij = <Ni, Nj>, is labeled with a transition probability Pij = P(Nj | Ni).
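As a concrete illustration of the definition above, here is a minimal sketch (not from the slides) of a first-order Markov model kept as raw transition counts, from which each Pij = P(Nj | Ni) is recovered by normalizing over the transitions leaving Ni; all names and values are illustrative.

```python
from collections import defaultdict

class MarkovModel:
    """First-order Markov model kept as raw transition counts."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # counts[i][j]

    def observe(self, state_i, state_j):
        """Record one observed transition N_i -> N_j."""
        self.counts[state_i][state_j] += 1

    def probability(self, state_i, state_j):
        """P_ij = P(N_j | N_i): fraction of transitions out of N_i that go to N_j."""
        total = sum(self.counts[state_i].values())
        return self.counts[state_i][state_j] / total if total else 0.0

# Example: estimate P(N2 | N1) from a short event sequence.
mm = MarkovModel()
sequence = ["N1", "N2", "N1", "N3", "N1", "N2"]
for i, j in zip(sequence, sequence[1:]):
    mm.observe(i, j)
print(mm.probability("N1", "N2"))  # 2 of 3 transitions out of N1 -> 0.666...
```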
  6. Problem with Markov Chains The required structure of the MC may not be certain at model construction time. As the real world being modeled by the MC changes, so should the structure of the MC. Not scalable – grows linearly with the number of events. Our solution: Extensible Markov Model (EMM) Cluster real world events Allow the Markov chain to grow and shrink dynamically
  7. EMM (Extensible Markov Model) Time Varying Discrete First Order Markov Model Continuously evolves Nodes are clusters of real world states. Learning continues during prediction phase. Learning: Transition probabilities between nodes Node labels (centroid of cluster) Nodes are added and removed as data arrives
  8. EMM Definition Extensible Markov Model (EMM): at any time t, an EMM consists of an MC with a designated current node, Nn, and algorithms to modify it, where the algorithms include: EMMCluster, which defines a technique for matching between input data at time t + 1 and existing states in the MC at time t. EMMIncrement, which updates the MC at time t + 1 given the MC at time t and the clustering measure result at time t + 1. EMMDecrement, which removes nodes from the EMM when needed.
  9. EMM Cluster Nearest neighbor: if no existing node is "close", create a new node. The label of a cluster is the centroid of its members. Complexity O(n), where n is the number of states.
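A rough sketch of this matching step, assuming a Euclidean distance and a fixed closeness threshold as placeholders (the slides mention Jaccard similarity for some experiments), showing the O(n) scan over existing states:

```python
import math

def match_or_create(nodes, point, threshold):
    """EMMCluster-style step (sketch): return the index of the nearest node's
    centroid if it lies within `threshold`, otherwise create a new node.
    `nodes` is a list of centroid vectors; the scan is O(n) over the n states."""
    best_idx, best_dist = None, math.inf
    for idx, centroid in enumerate(nodes):
        dist = math.dist(centroid, point)   # Euclidean distance as a stand-in
        if dist < best_dist:
            best_idx, best_dist = idx, dist
    if best_idx is not None and best_dist <= threshold:
        return best_idx, False              # matched an existing node
    nodes.append(list(point))               # no node is "close": create one
    return len(nodes) - 1, True
```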
  10. EMM Increment (diagram): input count vectors <18,10,3,3,1,0,0>, <17,10,2,3,1,0,0>, <16,9,2,3,1,0,0>, <14,8,2,3,1,0,0>, <14,8,2,3,0,0,0>, <18,10,3,3,1,1,0> are matched to nodes N1, N2, N3 as they arrive, and the transition counts/probabilities on the affected arcs (1/1, 1/3, 2/3, …) are updated.
  11. EMMDecrement (diagram): deleting node N2 removes it together with all arcs entering or leaving it; the remaining transition probabilities (1/6, 1/3, 1/2, 2/2, …) are adjusted accordingly.
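Tying slides 8–11 together, the following sketch shows how the transition counts might be maintained by EMMIncrement and EMMDecrement; the bookkeeping details are simplified assumptions rather than the published algorithms.

```python
class EMM:
    """Sketch of an Extensible Markov Model: states are clusters of real-world
    states, arcs carry transition counts updated as the stream arrives."""

    def __init__(self):
        self.trans = {}        # (i, j) -> transition count
        self.out_total = {}    # i -> total transitions leaving i
        self.current = None    # designated current node N_n

    def increment(self, node):
        """EMMIncrement: record the transition current -> node and move to node."""
        if self.current is not None:
            key = (self.current, node)
            self.trans[key] = self.trans.get(key, 0) + 1
            self.out_total[self.current] = self.out_total.get(self.current, 0) + 1
        self.current = node

    def decrement(self, node):
        """EMMDecrement: delete a node (e.g. N2 above) and all arcs touching it."""
        for key in [k for k in self.trans if node in k]:
            self.out_total[key[0]] -= self.trans.pop(key)
        self.out_total.pop(node, None)
        if self.current == node:
            self.current = None

    def probability(self, i, j):
        """Transition probability P_ij recovered from the stored counts."""
        return self.trans.get((i, j), 0) / self.out_total.get(i, 1)
```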
  12. EMM Advantages Dynamic Adaptable Use of clustering Learns rare events Scalable: growth of EMM is not linear in the size of the data Hierarchical feature of EMM Creation/evaluation in quasi-real time Distributed / hierarchical extensions
  13. EMM Sublinear Growth (Servent Data)
  14. Growth Rate: Automobile Traffic (Minnesota Traffic Data)
  15. EMM River Prediction
  16. Determining Rare Events The occurrence frequency (OFi) of an EMM state Si is the normalized count of the state; the normalized transition probability (NTPmn) from one state, Sm, to another, Sn, is a normalized transition count.
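The equation images from the original slides did not survive the transcript. Read literally, "normalized count of state" and "normalized transition count" suggest forms along these lines, where C_{S_i} is the count of state S_i and C_{L_{mn}} is the count on arc L_{mn}; this is a hedged reconstruction, not the authors' exact formulas.

```latex
% Assumed reconstruction -- not the original slide equations.
OF_i \;=\; \frac{C_{S_i}}{\sum_{j} C_{S_j}}
\qquad\qquad
NTP_{mn} \;=\; \frac{C_{L_{mn}}}{\sum_{k} C_{L_{mk}}}
```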
  17. EMM Rare Event Detection Ozone data (UCI ML repository): Jaccard similarity, 2536 instances, 73 attributes, 73 ozone days. Intrusion data: train on DARPA 1999, test on DARPA 2000.
  18. Objectives/Outline EMM Overview EMM + Stream Clustering Handle evolving clusters Incorporate time in clustering EMM + Bioinformatics
  19. Stream Data A growing number of applications generate streams of data. Computer network monitoring data Call detail records in telecommunications Highway transportation traffic data Online web purchase log records Sensor network data Stock exchange, transactions in retail chains, ATM operations in banks, credit card transactions. Clustering techniques play a key role in modeling and analyzing this data.
  20. Stream Data Format Events arrive in a stream. At any time, t, we can view the state of the problem as a vector of n numeric values: Vt = <S1t, S2t, ..., Snt>.
  21. Traditional Clustering
  22. TRAC-DS (Temporal Relationship Among Clusters for Data Streams)
  23. Motivation Temporal Ordering is a major feature of stream data. Many stream applications depend on this ordering Prediction of future values Anomaly (rare event) detection Concept drift
  24. Stream Clustering Requirements Dynamic updating of the clusters Completely online Identify outliers Identify concept drifts Barbara [2]: compactness, fast incremental processing
  25. Data Stream Clustering At each point in time, a data stream clustering ζ is a partitioning of D', the data seen so far. Instead of the whole partitions C1, C2, ..., Ck, only synopses Cc1, Cc2, ..., Cck are available, and k is allowed to change over time. The summaries Cci, with i = 1, 2, ..., k, typically contain information about the size, distribution and location of the data points in Ci.
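As one illustration of what such a synopsis Cci can contain, here is a small sketch in the spirit of the micro-cluster summaries used by stream clustering algorithms such as [1]; the field names are assumptions, not a specific published structure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClusterSynopsis:
    """Summary Cc_i of one cluster: enough to recover its size, location and
    spread without keeping the raw data points."""
    n: int = 0                                           # points absorbed
    linear_sum: List[float] = field(default_factory=list)
    squared_sum: List[float] = field(default_factory=list)

    def add(self, x: List[float]) -> None:
        """Fold a new data point into the synopsis."""
        if not self.linear_sum:
            self.linear_sum = [0.0] * len(x)
            self.squared_sum = [0.0] * len(x)
        self.n += 1
        for d, v in enumerate(x):
            self.linear_sum[d] += v
            self.squared_sum[d] += v * v

    def centroid(self) -> List[float]:
        """Location of the cluster, recovered from the summary."""
        return [s / self.n for s in self.linear_sum]
```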
  26. TRAC-DS NOTE TRAC-DS is not: Another stream clustering algorithm TRAC-DS is: A new way of looking at clustering Built on top of an existing clustering algorithm TRAC-DS may be used with any stream clustering algorithm
  27. TRAC-DS Overview
  28. TRAC-DS Definition Given a data stream clustering ζ, a temporal relationship among clusters (TRAC-DS) overlays the data stream clustering ζ with an EMM M, in such a way that the following are satisfied: (1) There is a one-to-one correspondence between the clusters in ζ and the states S in M. (2) A transition aij in the EMM M represents the probability that, given a data point in cluster i, the next data point in the data stream will belong to cluster j, with i, j = 1, 2, …, k. (3) The EMM M is created online, together with the data stream clustering.
  29. Stream Clustering Operations * qassign point(ζ,x): Assigns the new data point x to an existing cluster. qnew cluster(ζ,x): Creates a new cluster. qremove cluster(ζ,x): Removes a cluster; here x is the cluster, i, to be removed, its summary Cci is removed from ζ, and k is decremented by one. qmerge clusters(ζ,x): Merges two clusters. qfade clusters(ζ,x): Fades the cluster structure. qsplit clusters(ζ,x): Splits a cluster. * Inspired by MONIC [13]
  30. TRAC-DS Operations rassign point(M,sc,y): Assigns the new data point to the state representing an existing cluster. rnew cluster(M,sc,y): Creates a state for a new cluster. rremove cluster(M,sc,y): Removes a state. rmerge clusters(M,sc,y): Merges two states. rfade clusters(M,sc,y): Fades the transition probabilities using an exponential decay f(t) = 2^(−λt). rsplit clusters(M,sc,y): Splits states. Here y denotes the corresponding clustering operation.
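A sketch of how the r-operations shadow the q-operations: each clustering operation on ζ has a twin that updates the EMM M. The fade function follows the decay f(t) = 2^(−λt) stated above; the pairing table and example values are illustrative assumptions, not the authors' code.

```python
# Pairing described on slides 29-30: every q_* operation on the clustering
# zeta is mirrored by the corresponding r_* operation on the EMM M.
OPERATION_PAIRS = {
    "assign_point":   ("q_assign_point",   "r_assign_point"),
    "new_cluster":    ("q_new_cluster",    "r_new_cluster"),
    "remove_cluster": ("q_remove_cluster", "r_remove_cluster"),
    "merge_clusters": ("q_merge_clusters", "r_merge_clusters"),
    "fade_clusters":  ("q_fade_clusters",  "r_fade_clusters"),
    "split_clusters": ("q_split_clusters", "r_split_clusters"),
}

def fade_transitions(trans, lam, dt=1.0):
    """r_fade_clusters (sketch): decay every transition count by f(t) = 2**(-lambda*t)."""
    decay = 2.0 ** (-lam * dt)
    return {arc: count * decay for arc, count in trans.items()}

# Example: with lambda = 0.5, counts halve over two time steps.
print(fade_transitions({("N1", "N2"): 4.0}, lam=0.5, dt=2.0))  # {('N1', 'N2'): 2.0}
```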
  31. TRAC-DS Example
  32. Objectives/Outline EMM Overview EMM + Stream Clustering EMM + Bioinformatics Background Preprocessing Classification Differentiation
  33. DNA Basic building blocks of organisms Located in nucleus of cells Composed of 4 nucleotides Two strands bound together http://www.visionlearning.com/library/module_viewer.php?mid=63
  34. The Central Dogma DNA -> (transcription) -> RNA -> (translation) -> Protein. For example, the DNA fragment CCTGAGCCAACTATTGATGAA is transcribed into the RNA CCUGAGCCAACUAUUGAUGAA, which is then translated into an amino acid sequence. (Source: www.bioalgorithms.info, Chapter 6, Gene Prediction)
  35. RNA Ribonucleic Acid Contains A,C,G but U (Uracil) instead of T Single Stranded May fold back on itself Needed to create proteins Move around cells – can act like a messenger mRNA – moves out of nucleus to other parts of cell
  36. The Magical 16S Ribosomal RNA (rRNA) is at the heart of the protein creation process. 16S rRNA: about 1542 nucleotides in length; found in all living organisms; important in the classification of organisms into phyla and classes. PROBLEM: An organism may actually contain many different copies of 16S, each slightly different. OUR WORK: Can we use EMM to quantify this diversity? Can we use it to classify different species of the same genus?
  37. Using EMM with RNA Data Slide a moving window across the sequence acgtgcacgtaactgattccggaaccaaatgtgcccacgtcga and count the nucleotides in each window:

Pos      A  C  G  T
0-8      2  3  3  1
1-9      1  3  3  2
…
34-42    2  4  2  1

Construct an EMM with nodes representing clusters of these count vectors.
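A small sketch of this moving-window counting; the window width of 9 is inferred from the "Pos 0-8" row, and the vectors produced for the sequence shown match the table above.

```python
from collections import Counter

def window_count_vectors(seq, width=9):
    """Slide a window over the sequence and emit <A, C, G, T> count vectors,
    one per window position, as input points for the EMM clustering step."""
    vectors = []
    for start in range(len(seq) - width + 1):
        counts = Counter(seq[start:start + width])
        vectors.append([counts.get(base, 0) for base in "acgt"])
    return vectors

seq = "acgtgcacgtaactgattccggaaccaaatgtgcccacgtcga"
vectors = window_count_vectors(seq)
print(vectors[0])   # [2, 3, 3, 1]  -> Pos 0-8
print(vectors[1])   # [1, 3, 3, 2]  -> Pos 1-9
print(vectors[34])  # [2, 4, 2, 1]  -> Pos 34-42
```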
  38. EMM for Classification
  39. TRAC-DS and Bioinformatics Efficient: alignment-free sequence analysis; clustering reduces the size of the model. Flexible: works on any sequence; applicable to metagenomics. Scoring is based on similarity between EMMs, or between an EMM and an input sequence. Applications: classification, differentiation.
  40. Profile EMMs for Organism Classification
  41. Profile EMM – E. coli
  42. Differentiating Strains Is it possible to identify different species of the same genus? Initial test with EMM: Bacillus has 21 species. Construct an EMM for each species using a training set (64%); test by matching unknown strains (36%), placing each in the closest EMM. All unknown strains were correctly classified except one: accuracy of 95%.
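A hedged sketch of this test protocol: score an unknown strain's window-count vectors against each species' profile EMM and place it in the best-scoring one. The scoring rule below (log of transition probabilities through nearest nodes, with a hypothetical smoothing constant) is an illustrative stand-in, not the authors' published similarity measure.

```python
import math

def score_sequence(emm_nodes, emm_prob, vectors):
    """Log-likelihood-style score of a sequence of count vectors under one
    profile EMM. `emm_nodes` are node centroids and `emm_prob(i, j)` returns
    a transition probability; both are assumed to come from a trained EMM."""
    prev, score = None, 0.0
    for v in vectors:
        node = min(range(len(emm_nodes)), key=lambda i: math.dist(emm_nodes[i], v))
        if prev is not None:
            score += math.log(emm_prob(prev, node) + 1e-9)  # smooth unseen arcs
        prev = node
    return score

def classify(profiles, vectors):
    """Place the unknown strain in the species whose profile EMM scores best.
    `profiles` maps species name -> (emm_nodes, emm_prob)."""
    return max(profiles, key=lambda sp: score_sequence(*profiles[sp], vectors))
```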
  43. Bibliography
[1] C. C. Aggarwal, J. Han, J. Wang, and P. S. Yu, "A Framework for Clustering Evolving Data Streams," Proceedings of the International Conference on Very Large Data Bases (VLDB), pp. 81-92, 2003.
[2] D. Barbara, "Requirements for Clustering Data Streams," SIGKDD Explorations, Vol. 3, No. 2, pp. 23-27, 2002.
[3] Margaret H. Dunham, Donya Quick, Yuhang Wang, Monnie McGee, and Jim Waddle, "Visualization of DNA/RNA Structure using Temporal CGRs," Proceedings of the IEEE 6th Symposium on Bioinformatics & Bioengineering (BIBE06), Washington D.C., October 16-18, 2006, pp. 171-178.
[4] S. Guha, A. Meyerson, N. Mishra, R. Motwani, and L. O'Callaghan, "Clustering Data Streams: Theory and Practice," IEEE Transactions on Knowledge and Data Engineering, Vol. 15, No. 3, pp. 515-528, 2003.
[5] Michael Hahsler and Margaret H. Dunham, "TRACDS: Temporal Relationship Among Clusters for Data Streams," October 2009, submitted to the SIAM International Conference on Data Mining.
[6] Jie Huang, Yu Meng, and Margaret H. Dunham, "Extensible Markov Model," Proceedings of the IEEE ICDM Conference, November 2004, pp. 371-374.
[7] Charlie Isaksson, Yu Meng, and Margaret H. Dunham, "Risk Leveling of Network Traffic Anomalies," International Journal of Computer Science and Network Security, Vol. 6, No. 6, June 2006, pp. 258-265.
[8] Charlie Isaksson and Margaret H. Dunham, "A Comparative Study of Outlier Detection," Proceedings of the IEEE MLDM Conference, July 2009, pp. 440-453.
[9] Mallik Kotamarti, Douglas W. Raiford, M. L. Raymer, and Margaret H. Dunham, "A Data Mining Approach to Predicting Phylum for Microbial Organisms Using Genome-Wide Sequence Data," Proceedings of the IEEE Ninth International Conference on Bioinformatics and Bioengineering, June 22-24, 2009, pp. 161-167.
[10] Yu Meng and Margaret H. Dunham, "Efficient Mining of Emerging Events in a Dynamic Spatiotemporal," Proceedings of the IEEE PAKDD Conference, Singapore, April 2006. (Also in Lecture Notes in Computer Science, Vol. 3918, Springer Berlin/Heidelberg, 2006, pp. 750-754.)
[11] Yu Meng and Margaret H. Dunham, "Mining Developing Trends of Dynamic Spatiotemporal Data Streams," Journal of Computers, Vol. 1, No. 3, June 2006, pp. 43-50.
[12] MIT Lincoln Laboratory, DARPA Intrusion Detection Evaluation, http://www.ll.mit.edu/mission/communications/ist/corpora/ideval/index.html, 2008.
[13] M. Spiliopoulou, I. Ntoutsi, Y. Theodoridis, and R. Schult, "MONIC: Modeling and Monitoring Cluster Transitions," Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, PA, USA, pp. 706-711, 2006.
  44. Thanks!