
Hypergraph Sparsification and Its Application to Partitioning


Presentation Transcript


1. Hypergraph Sparsification and Its Application to Partitioning
Mehmet Deveci (1,3), Kamer Kaya (1), Ümit V. Çatalyürek (1,2)
(1) Dept. of Biomedical Informatics, The Ohio State University
(2) Dept. of Electrical & Computer Engineering, The Ohio State University
(3) Dept. of Computer Science & Engineering, The Ohio State University

2. Motivation
• Problem: sparsification of large-scale data modeled as a hypergraph, for scalable computation and analysis.
• Today data is big, and its utilization and analysis require complex algorithms and an immense amount of computing power.
• Techniques that make the data smaller are therefore very important.
• We should avoid any redundancy in the data, and we can even sacrifice some part of it to reduce the size.
• Application (in this work): hypergraph partitioning.
  • Used in many problems in parallel scientific computing, such as sparse matrix reordering, static and dynamic load balancing, clustering, and recommendation.

3. Contribution
• Proposed hypergraph sparsification techniques:
  • Identical net removal
    • Already exists in some partitioning tools, but our implementation is faster.
  • Identical vertex removal
  • Similar net removal
• To the best of our knowledge, there is no prior work that analyzes the effectiveness of sparsification on hypergraphs.
• Implemented under UMPa [Catalyurek12], a multi-objective hypergraph partitioner.

4. Hypergraph Partitioning
• Hypergraph H = (V, N)
  • V: vertex set, N: net set
  • c(n): cost of a net
  • w(v): weight of a vertex
• In the example figure, n1, n2, n3 and n5, n6 are identical nets; v2, v4 are identical vertices.
• Objective: partition the hypergraph with
  • balanced load distribution,
  • minimized communication between parts.
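The slides rely on a figure for the example hypergraph; as a concrete stand-in, here is a minimal C++ sketch of a hypergraph in a CSR-like pins/xpins layout, a common representation in hypergraph partitioners. The struct and field names are illustrative assumptions, not UMPa's actual data structures.

```cpp
#include <vector>

// Minimal hypergraph in a CSR-like layout: net i's pins are
// pins[xpins[i]] .. pins[xpins[i+1]-1]. Illustrative only, not UMPa's code.
struct Hypergraph {
    int nVertices = 0;
    std::vector<int> xpins;   // size |N|+1, prefix offsets into pins
    std::vector<int> pins;    // concatenated pin (vertex) ids of all nets
    std::vector<int> netCost; // c(n) for each net
    std::vector<int> vWeight; // w(v) for each vertex

    int numNets() const { return (int)xpins.size() - 1; }
};
```

In such a layout, two nets are identical exactly when their pin ranges contain the same vertex ids, which is what the removal techniques on the following slides test for.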

5. Partitioning Example
• Partitioning criteria: communication volume and partitioning time.
  • A better (lower) communication volume reduces the parallel execution time.
  • However, the partitioning time itself can dominate the application time.
• We want to reduce the partitioning time by sparsification.

6. Multi-level Approach
• Three phases:
  • Coarsening: obtain smaller hypergraphs that are similar to the original.
  • Initial partitioning: find a solution for the smallest hypergraph.
  • Uncoarsening: project the initial solution to the finer hypergraphs and refine it iteratively until a solution for the original hypergraph is obtained.

7. Identical Net Removal (INR)
• Two nets are identical if their pin sets are the same.
• Pairwise comparison is very expensive; instead, we use hashing.
  • If two nets are identical, the sums of their pin ids must be identical.
  • Calculate a hash value for each net, and compare only the nets with the same hash value.
• Choose one representative net for each identical net set.
• Coarsening sparsifies the vertices; INR is done after each coarsening level.
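To make the scheme concrete, here is a minimal C++ sketch of checksum-based identical net detection: hash each net by the sum of its pin ids (the CS1 function of the hashing example below), bucket nets by hash value, and run the expensive pin-set comparison only within a bucket, accumulating the costs of merged nets onto the representative as slide 16 shows. The container choices are illustrative assumptions, not UMPa's implementation.

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

// One net = a list of pin (vertex) ids plus a cost.
struct Net { std::vector<int> pins; int cost; };

// CS1 from the slides: the checksum of a net is the sum of its pin ids.
// Identical nets necessarily get the same checksum; the converse may fail,
// which is why buckets still need pairwise pin-set comparison.
static uint64_t cs1(const Net& n) {
    uint64_t s = 0;
    for (int p : n.pins) s += (uint64_t)p;
    return s;
}

// Remove identical nets: keep one representative per identical net set and
// accumulate the costs of the removed nets onto it (c(n') = sum of c(n)).
std::vector<Net> removeIdenticalNets(std::vector<Net> nets) {
    // Sort pins so pin-set equality becomes a plain vector comparison.
    for (Net& n : nets) std::sort(n.pins.begin(), n.pins.end());

    // checksum -> indices of representatives kept so far
    std::unordered_map<uint64_t, std::vector<int>> bucket;
    std::vector<Net> out;
    for (Net& n : nets) {
        uint64_t h = cs1(n);
        bool merged = false;
        for (int ri : bucket[h]) {          // false positives are compared here
            if (out[ri].pins == n.pins) {   // truly identical: merge costs
                out[ri].cost += n.cost;
                merged = true;
                break;
            }
        }
        if (!merged) {
            bucket[h].push_back((int)out.size());
            out.push_back(std::move(n));
        }
    }
    return out;
}
```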

8. INR – Hash Function
• Hash functions compared: the checksums CS1, CS2, CS3 and MurmurHash [Appleby12].
• The quality of a hash function depends on the number of collisions, i.e., cases where two non-identical nets n1 and n2 receive the same hash value.
  • False-positive cost: the number of pairwise comparisons made for non-identical nets.
  • Checksum occupancy: the average number of representatives having the same checksum value.

9. INR – Variants
• INR-SRT: calculates a hash value for each net, then sorts the nets w.r.t. their hash values.
  • Reduces the false-positive cost and the occupancy rate.
  • However, sorting can be expensive.
• INR-MEM: uses two arrays, first and next, to store the hash values in a linked-list structure.
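A minimal sketch of the first/next bucket structure, under the assumption that first[h] holds the most recently inserted net for hash slot h and next[i] chains the earlier nets in the same slot; the names and the fixed table size are illustrative, not UMPa's code.

```cpp
#include <cstdint>
#include <vector>

// Hash-bucket storage with two flat arrays instead of a sorted order or
// per-bucket containers. Assumed layout (illustrative):
//   first[h] = last net id inserted into slot h, or -1 if the slot is empty
//   next[i]  = previous net id in net i's slot, or -1 at the chain's end
struct FirstNextBuckets {
    std::vector<int> first, next;

    FirstNextBuckets(int tableSize, int numNets)
        : first(tableSize, -1), next(numNets, -1) {}

    void insert(int netId, uint64_t hash) {
        int h = (int)(hash % first.size());
        next[netId] = first[h];  // chain to the previous occupant
        first[h] = netId;
    }

    // Visit every net whose hash falls into this slot; the caller then does
    // the pairwise pin-set comparison on these candidates only.
    template <typename F>
    void forEachCandidate(uint64_t hash, F visit) const {
        int h = (int)(hash % first.size());
        for (int i = first[h]; i != -1; i = next[i]) visit(i);
    }
};
```

This avoids sorting entirely: insertion is O(1), and candidate enumeration touches only the nets whose hashes collide.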

10. Hashing Example
• Example nets (from the figure): n1 = n2 = n3 = {v1, v3}, n4 = {v2, v3, v4}, n5 = n6 = {v2, v4, v5}; checksums are taken modulo a hash table size of 7.

11. Hashing Example
• CS1(n1) = 1+3 = 4

12. Hashing Example
• CS1(n2) = 1+3 = 4

13. Hashing Example
• CS1(n3) = 1+3 = 4

14. Hashing Example
• CS1(n4) = 2+3+4 = 9; 9 mod 7 = 2

15. Hashing Example
• CS1(n5) = 2+4+5 = 11; 11 mod 7 = 4

16. Hashing Example
• CS1(n6) = 2+4+5 = 11; 11 mod 7 = 4
• Merged costs: c(n1') = c(n1) + c(n2) + c(n3) and c(n5') = c(n5) + c(n6)
• Occupancy with the mod-7 table: (2+1)/2 = 1.5, since representatives n1' and n5' share slot 4 while n4 is alone in slot 2.
• With a collision-free hash value per representative: (1+1+1)/3 = 1, the optimal occupancy.
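As a quick cross-check of the example's arithmetic, the following self-contained C++ snippet buckets the six nets by CS1 mod 7 and recomputes the 1.5 occupancy; the pin sets are those read off the slides, and the code is a sketch, not part of UMPa.

```cpp
#include <cstdio>
#include <map>
#include <numeric>
#include <set>
#include <vector>

int main() {
    // Pin sets of n1..n6 from the example slides.
    std::vector<std::vector<int>> nets = {
        {1, 3}, {1, 3}, {1, 3},   // n1, n2, n3 (identical)
        {2, 3, 4},                // n4
        {2, 4, 5}, {2, 4, 5}};    // n5, n6 (identical)

    // slot -> distinct pin sets (representatives) hashed there, mod-7 table.
    std::map<int, std::set<std::vector<int>>> slot;
    for (const auto& n : nets) {
        int cs1 = std::accumulate(n.begin(), n.end(), 0);
        slot[cs1 % 7].insert(n);
    }
    int reps = 0;
    for (const auto& [h, sets] : slot) {
        std::printf("slot %d: %zu representative(s)\n", h, sets.size());
        reps += (int)sets.size();
    }
    // Occupancy = representatives / occupied slots = 3 / 2 = 1.5.
    std::printf("occupancy = %.1f\n", (double)reps / slot.size());
}
```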

17. Identical Vertex Removal (IVR)
• Two vertices are identical if they are connected to the same nets.
• The same methods as in INR are applied.
• While INR does not affect the partitioning result, IVR can affect the quality of the partitioning by taking early decisions on the part assignments.
• Coarsening already merges identical vertices, so IVR is not strictly needed; however, IVR performed at the beginning of coarsening can reduce its execution time.

18. Similar Net Removal (SNR)
• INR aims to remove the redundancy from the hypergraph.
  • It is only effective when identical nets, i.e., redundancy, exist.
• SNR removes similar nets even when there is no redundancy.
  • A lossy compression technique.
  • Usually worsens the quality, but makes the partitioning faster.
  • When the performance of the application is not very sensitive to small changes in partitioning quality, this trade-off can be useful.

19. Similar Net Removal (SNR)
• The similarity between two nets ni and nj is defined with the Jaccard coefficient: J(ni, nj) = |pins(ni) ∩ pins(nj)| / |pins(ni) ∪ pins(nj)|.
• Since the number of nets is large, it is infeasible to compute the similarity for every net pair.
• Instead, we compute a footprint of each net using minhash.

20. Similar Net Removal (SNR)
• σ is a random permutation of the integers from 1 to |V|, and minσ(n) is the first vertex id of a net n ∈ N under the permutation σ.
• We use t permutations σ1 to σt to obtain a minwise footprint of each net.
• Two nets ni and nj are similar iff their minwise footprints are identical, where mf(n) = (minσ1(n), ..., minσt(n)); a sketch follows below.
• We do the hashing and pairwise comparison only on this minwise footprint set, and choose one of the nets as the representative of the set:
  • Large (LRG): the representative is the net with the largest number of pins.
  • Important (IMP): when calculating the pin count, pins connected to heavy nets are prioritized.
  • Union (UNI): the representative is a virtual net connected to all pins of the nets in the set.
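A minimal C++ sketch of the minwise footprint, assuming t random permutations of the vertex ids (0-based here) are drawn up front; grouping nets by identical footprints then reuses the same hashing machinery as INR. The RNG and helper names are illustrative choices, not the paper's code (the experiments use t = 4).

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Build t random permutations sigma_1..sigma_t of the vertex ids 0..nV-1.
// perm[k][v] = position of vertex v under sigma_k.
std::vector<std::vector<int>> makePermutations(int t, int nV, unsigned seed) {
    std::mt19937 rng(seed);
    std::vector<std::vector<int>> perm(t, std::vector<int>(nV));
    for (auto& p : perm) {
        std::iota(p.begin(), p.end(), 0);
        std::shuffle(p.begin(), p.end(), rng);
    }
    return perm;
}

// mf(n) = (min over n's pins of the permuted position, per permutation).
// Since each sigma is a bijection, the minimal position identifies the first
// pin of n under sigma, so footprint equality matches the slide's definition.
// Nets sharing many pins are likely to agree on every coordinate (the
// minhash property). Assumes pins is non-empty.
std::vector<int> minwiseFootprint(const std::vector<int>& pins,
                                  const std::vector<std::vector<int>>& perm) {
    std::vector<int> mf;
    mf.reserve(perm.size());
    for (const auto& p : perm) {
        int best = p[pins[0]];
        for (int v : pins) best = std::min(best, p[v]);
        mf.push_back(best);
    }
    return mf;
}
```

Two nets are then declared similar exactly when their footprint vectors are equal, so the pairwise comparison reduces to hashing these short vectors instead of the full pin sets.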

21. Experiments
• All the algorithms are implemented in UMPa.
  • Compiled with g++ 4.5.2 and the -O3 flag.
• Intel Xeon E5520 (quad-core, clocked at 2.27 GHz), 48 GB of memory.
• 28 matrices from different matrix classes.
• Number of parts K = 2, 8, 32, 128, 512, 1024.

22. Hash Function Comparison
• Quality is better with INR-SRT, as there is no limit on the hash size.
• Except for CS1, all hash functions have an occupancy value close to 1 (the optimal occupancy).
• INR-MEM equipped with CS2 has the best performance.
  • This checksum function is as good as CS3 and MurmurHash, yet computationally cheaper.

23. Improvement on Time and CV
• Speedups of 1.18 to 3.30 for INR+IVR.
• 0.3%–2.4% quality improvement on average.
• The speedup values increase with K.
  • This is promising, as the overhead of the partitioning problem is usually an issue for large K values.
• Most of the speedup is obtained with INR, as not all hypergraphs contain identical vertices.
  • 14/28 of the matrices in the test set have fewer than 10^3 identical vertices.

24. SNR Improvement w.r.t. INR+IVR
• Four permutation arrays (t = 4).
• SNR-P4-X restricts the removal process to nets with 4 or more pins.
• Variants: SNR-X and SNR-P4-X, where X is a representative selection method.
• For K = 1024:
  • SNR-LRG: 22% improvement in time, 5% harm on CV w.r.t. INR+IVR
    • 4.2 speedup w.r.t. Base
    • 4% reduction in CV
  • SNR-P4-LRG: 15% improvement in time, 2% harm on CV w.r.t. INR+IVR
    • 3.9 speedup w.r.t. Base
    • 2% reduction in CV

25. Conclusion
• We proposed heuristics for lossless and lossy hypergraph sparsification.
• We showed that the effectiveness of the heuristics increases with the number of parts. This is promising, as partitioning overhead is an issue for today's architectures with large numbers of processors.

26. References
• Çatalyürek et al., "UMPa: A multi-objective, multi-level partitioner for communication minimization," Graph Partitioning and Graph Clustering, 2012.
• A. Appleby, "SMHasher & MurmurHash," 2012, http://code.google.com/p/smhasher/.

27. Thanks
• For more information:
  • Email umit@bmi.osu.edu
  • Visit http://bmi.osu.edu/~umit or http://bmi.osu.edu/hpc
• Acknowledgement of support.
