
In Search of the Optimal WHT Algorithm


Presentation Transcript


  1. In Search of the Optimal WHT Algorithm J. R. Johnson Drexel University Markus Püschel CMU http://www.ece.cmu.edu/~spiral

  2. Abstract This presentation describes an approach to implementing and optimizing fast signal transforms. Algorithms for computing signal transforms are expressed by symbolic expressions, which can be automatically generated and translated into programs. Optimizing an implementation involves searching for the fastest program obtained from one of the possible expressions. We apply this methodology to the implementation of the Walsh-Hadamard transform. An environment, accessible from MATLAB, is provided for generating and timing WHT algorithms. These tools are used to search for the fastest WHT algorithm. The fastest algorithm found is substantially faster than standard approaches to implementing the WHT. The work reported in this paper is part of the SPIRAL project (see http://www.ece.cmu.edu/~spiral), an ongoing project whose goal is to automate the implementation and optimization of signal processing algorithms.

  3. The Walsh-Hadamard Transform
  Definition (n-fold tensor product): WHT_{2^n} = DFT_2 ⊗ DFT_2 ⊗ … ⊗ DFT_2
  “iterative”: WHT_{2^n} = ∏_{i=1}^{n} ( I_{2^{i-1}} ⊗ DFT_2 ⊗ I_{2^{n-i}} )
  “recursive”: WHT_{2^n} = ( DFT_2 ⊗ I_{2^{n-1}} ) ( I_2 ⊗ WHT_{2^{n-1}} )
  Why WHT? Easy structure; contains the important constructor (the tensor product); close to the 2-power FFT
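  As a quick numerical check of the factorizations above, here is a minimal NumPy sketch (the function name wht is ours, not part of the WHT package) that builds WHT_{2^n} as an n-fold Kronecker product and verifies the recursive factorization:

    import numpy as np

    DFT2 = np.array([[1, 1], [1, -1]])   # the 2-point butterfly DFT_2

    def wht(n):
        """WHT_{2^n} as the n-fold Kronecker (tensor) product of DFT_2."""
        W = np.array([[1]])
        for _ in range(n):
            W = np.kron(W, DFT2)
        return W

    # recursive factorization: WHT_{2^n} = (DFT_2 ⊗ I_{2^{n-1}}) (I_2 ⊗ WHT_{2^{n-1}})
    n = 3
    lhs = wht(n)
    rhs = np.kron(DFT2, np.eye(2 ** (n - 1))) @ np.kron(np.eye(2), wht(n - 1))
    assert np.allclose(lhs, rhs)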

  4. Influence of Cache Sizes (Walsh-Hadamard Transform)
  [Plots: runtime quotients, with the L1- and L2-cache sizes marked]
  |signal| <= |L1|: iterative is faster (less overhead)
  |signal| > |L1|: recursive is faster (fewer cache misses)
  Pentium II, Linux

  5. Increased Locality (Grid Algorithm)
  [Figure: how the grid algorithm computes the transform block-wise]
  + Local access pattern
  + Can be generalized to arbitrary tensor products
  - Conflict cache misses due to 2-power stride (if the cache is not fully associative)

  6. Runtime / L1 DCache Misses
  [Plots: runtime and L1 DCache misses for the recursive, iterative, mixed, grid, and 4-step algorithms]
  4-step vs. grid: scrambling (dynamic data redistribution)

  7. Effect of Unrolling
  [Plots: runtime, L1 ICache misses, and L2 cache misses; iterative algorithm with loops vs. unrolled code]
  Compose small, unrolled building blocks
  Pentium II, Linux

  8. Class of WHT Algorithms
  Let N = N1 · N2 ⋯ Nt, with Ni = 2^ni.
  R = N; S = 1;
  for i = 1,…,t
    R = R/Ni;
    for j = 0,…,R-1
      for k = 0,…,S-1
        x(jNiS+k : S : jNiS+k+(Ni-1)S) = WHT_Ni · x(jNiS+k : S : jNiS+k+(Ni-1)S);
    S = S*Ni;
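  The triple loop above translates almost directly into code. Here is a minimal Python/NumPy sketch (ours, not the package's C implementation; wht_base and wht_split are assumed names), with a straight-line butterfly standing in for the small WHT applied to each segment:

    import numpy as np

    def wht_base(x):
        """Direct fast WHT (butterfly) of a vector whose length is a power of two."""
        x = x.copy()
        h = 1
        while h < len(x):
            for j in range(0, len(x), 2 * h):
                for k in range(j, j + h):
                    a, b = x[k], x[k + h]
                    x[k], x[k + h] = a + b, a - b
            h *= 2
        return x

    def wht_split(x, ns):
        """Slide-8 algorithm: N = N1*...*Nt with Ni = 2**ni; apply WHT_Ni at stride S
        to each segment x(jNiS+k : S : jNiS+k+(Ni-1)S), exactly as in the triple loop."""
        x = np.asarray(x, dtype=float).copy()
        R, S = len(x), 1
        for ni in ns:
            Ni = 2 ** ni
            R //= Ni
            for j in range(R):
                for k in range(S):
                    idx = j * Ni * S + k + S * np.arange(Ni)
                    x[idx] = wht_base(x[idx])
            S *= Ni
        return x

    x = np.random.rand(2 ** 6)
    assert np.allclose(wht_split(x, [2, 3, 1]), wht_base(x))   # any split computes the same WHT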

  9. Partition Trees
  • Each WHT algorithm can be represented by a tree, where a node labeled n corresponds to WHT_{2^n}.
  [Tree diagrams for n = 4: iterative (root 4 with leaves 1,1,1,1), recursive (root 4 split into 1 and 3, then 1 and 2, then 1 and 1), and a mixed tree]

  10. Search Space
  • Optimization of the WHT becomes a search, over the space of partition trees, for the fastest algorithm.
  • The number of trees: T_1 = 1, and for n > 1, T_n = 1 + Σ T_{n1} ⋯ T_{nt}, summed over all ordered splits n = n1 + … + nt with t > 1 (giving 1, 2, 6, 24, 112, … for n = 1, 2, 3, 4, 5, …).

  11. Size of Search Space
  • Let T(z) be the generating function for Tn. Then Tn = Θ(α^n / n^(3/2)), where α = 4 + √8 ≈ 6.8284.
  • Restricting to binary trees: Tn = Θ(5^n / n^(3/2)).
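  For illustration, a short dynamic-programming sketch (ours) that counts the trees, assuming a node of any size may either be computed directly (a leaf) or split into an ordered sequence of at least two subtrees:

    def count_partition_trees(nmax):
        """T[n] = number of partition trees of size n; f[n] = number of ordered
        sequences of trees whose sizes sum to n (used to build the splits)."""
        T = [0] * (nmax + 1)
        f = [0] * (nmax + 1)
        T[1] = f[1] = 1
        for n in range(2, nmax + 1):
            splits = sum(T[k] * f[n - k] for k in range(1, n))  # sequences of length >= 2
            T[n] = 1 + splits      # a leaf, or any split
            f[n] = T[n] + splits   # a single tree, or a longer sequence
        return T[1:]

    print(count_partition_trees(8))   # [1, 2, 6, 24, 112, 568, 3032, 16768]; the ratio tends to ~6.83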

  12. WHT Package • Uses a simple grammar for describing different WHT algorithms (allows for unrolled code, and direct computation for verification) • WHT expressions are parsed, and a data structure representing the algorithm (a partition tree with control information) is created • The evaluator computes the WHT, using the tree to control the algorithm • A MATLAB interface allows experimentation

  13. WHT Package
  WHT(n) ::= direct[n] | small[n] | split[WHT(n1),…,WHT(nt)]   # n1 + … + nt = n
  • Iterative: split[small[1],small[1],small[1],small[1]]
  • Recursive: split[small[1], split[small[1], split[small[1],small[1]]]]
  • Grid 4-step: split[small[2], split[small[2], WHT(n-4)]]
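  A sketch of such a tree-controlled evaluator (ours, not the package's data structure): algorithms are nested tuples in the grammar above, and the split rule reuses the slide-8 triple loop recursively; wht_base and wht are the helpers from the earlier sketches.

    import numpy as np

    # ("small", n)                   -> directly computed WHT_{2^n} leaf
    # ("split", child1, ..., childt) -> split according to the children
    def tree_size(tree):
        return tree[1] if tree[0] == "small" else sum(tree_size(c) for c in tree[1:])

    def apply_wht(tree, x, base, stride):
        """Apply WHT_{2^n}, n = tree_size(tree), to x[base], x[base+stride], ...,
        x[base+(2^n-1)*stride], with the partition tree controlling the recursion."""
        N = 2 ** tree_size(tree)
        if tree[0] == "small":
            idx = base + stride * np.arange(N)
            x[idx] = wht_base(x[idx])                 # leaf: direct computation
            return
        R, S = N, 1
        for child in tree[1:]:
            Ni = 2 ** tree_size(child)
            R //= Ni
            for j in range(R):
                for k in range(S):
                    apply_wht(child, x, base + (j * Ni * S + k) * stride, S * stride)
            S *= Ni

    # the "recursive" algorithm for n = 4 from this slide
    rec4 = ("split", ("small", 1), ("split", ("small", 1), ("split", ("small", 1), ("small", 1))))
    x = np.random.rand(16)
    y = x.copy()
    apply_wht(rec4, y, 0, 1)
    assert np.allclose(y, wht(4) @ x)                 # matches the tensor-product definition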

  14. Code Generation Strategies • Recursive vs. iterative data flow (improves register allocation) • Additional temporaries to prevent dependencies (aid the C compiler)
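  For instance, a hypothetical unrolled small[2] codelet (WHT_4), written here in Python for readability although the package generates C; the explicit temporaries illustrate the second point:

    def wht4_unrolled(x, base, stride):
        """Straight-line WHT_4 on x[base], x[base+stride], x[base+2*stride], x[base+3*stride].
        The explicit temporaries mimic generated code that helps the compiler break
        dependencies and keep intermediate values in registers."""
        a0 = x[base]
        a1 = x[base + stride]
        a2 = x[base + 2 * stride]
        a3 = x[base + 3 * stride]
        t0 = a0 + a1; t1 = a0 - a1        # first stage:  I_2 ⊗ DFT_2
        t2 = a2 + a3; t3 = a2 - a3
        x[base]              = t0 + t2    # second stage: DFT_2 ⊗ I_2
        x[base + stride]     = t1 + t3
        x[base + 2 * stride] = t0 - t2
        x[base + 3 * stride] = t1 - t3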

  15. Dynamic Programming • Assume the optimal WHT depends only on the size and not on the stride parameters or on state such as the cache. Then dynamic programming can be used to search for the optimal WHT. • Consider all possible splits of size n and assume the previously determined optimal algorithms are used for the recursive evaluations. • There are 2^(n-1) possible splits of WHT(n) and n-1 possible binary splits.
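  A sketch of the resulting search (ours): time_tree is a placeholder timing function to be supplied, e.g. by running the apply_wht evaluator from the slide 13 sketch on random vectors and timing it; the cutoff for unrolled leaves is an assumption.

    def splits(n):
        """All ordered splits of n into at least two parts (there are 2**(n-1) - 1 of them)."""
        for bits in range(1, 2 ** (n - 1)):
            parts, size = [], 1
            for j in range(n - 1):
                if bits >> j & 1:
                    parts.append(size); size = 1
                else:
                    size += 1
            parts.append(size)
            yield parts

    def dp_search(nmax, time_tree, max_leaf=8):
        """best[n] = (tree, time) of the fastest tree of size n found so far, assuming
        (as on this slide) the best subtree does not depend on stride or cache state."""
        best = {}
        for n in range(1, nmax + 1):
            candidates = [("small", n)] if n <= max_leaf else []
            for parts in splits(n):
                candidates.append(("split", *[best[p][0] for p in parts]))
            best[n] = min(((t, time_tree(t)) for t in candidates), key=lambda c: c[1])
        return best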

  16. Generating Splits
  • Bijection between splits of WHT(n) and (n-1)-bit numbers (shown for n = 4):
  000 ↔ 1111 = [4]
  001 ↔ 111|1 = [3,1]
  010 ↔ 11|11 = [2,2]
  011 ↔ 11|1|1 = [2,1,1]
  100 ↔ 1|111 = [1,3]
  101 ↔ 1|11|1 = [1,2,1]
  110 ↔ 1|1|11 = [1,1,2]
  111 ↔ 1|1|1|1 = [1,1,1,1]
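  A small sketch of this bijection (ours), matching the table's convention that the j-th bit from the left marks a cut after the j-th position:

    def split_from_bits(bits, n):
        """Map an (n-1)-bit number to the corresponding ordered split of n."""
        parts, size = [], 1
        for j in range(n - 2, -1, -1):         # walk bits from most to least significant
            if bits >> j & 1:
                parts.append(size); size = 1   # a 1-bit closes the current part
            else:
                size += 1
        parts.append(size)
        return parts

    for b in range(2 ** 3):                    # n = 4: reproduces the table above
        print(format(b, "03b"), split_from_bits(b, 4))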

  17. Sun Distribution 400 MHz UltraSPARC II

  18. Pentium Distribution 233 MHz Pentium II (Linux)

  19. Optimal Formulas
  UltraSPARC: [1], [2], [3], [4], [5], [6], [[3],[4]], [[4],[4]], [[4],[5]], [[5],[5]], [[5],[6]], [[4],[[4],[4]]], [[[4],[5]],[4]], [[4],[[5],[5]]], [[[5],[5]],[5]], [[[5],[5]],[6]], [[4],[[[4],[5]],[4]]], [[4],[[4],[[5],[5]]]], [[4],[[[5],[5]],[5]]], [[5],[[[5],[5]],[5]]]
  Pentium: [1], [2], [3], [4], [5], [6], [7], [[4],[4]], [[5],[4]], [[5],[5]], [[5],[6]], [[2],[[5],[5]]], [[2],[[5],[6]]], [[2],[[2],[[5],[5]]]], [[2],[[2],[[5],[6]]]], [[2],[[2],[[2],[[5],[5]]]]], [[2],[[2],[[2],[[5],[6]]]]], [[2],[[2],[[2],[[2],[[5],[5]]]]]], [[2],[[2],[[2],[[2],[[5],[6]]]]]], [[2],[[2],[[2],[[2],[[2],[[5],[5]]]]]]]

  20. Different Strides • The dynamic-programming assumption is not true: execution time depends on the stride.
