
The Burrows-Wheeler Transform

Learn about the Burrows-Wheeler Transform (BWT), a lossless and reversible data transformation technique that can improve text compression. Understand how BWT works, its benefits, steps involved, and variants. Discover why BWT is useful for optimizing compression algorithms like Run-Length Encoding (RLE) and Huffman coding.



  1. The Burrows-Wheeler Transform Sen Zhang

  2. Transform • What is the definition of “transform”? • To change the nature, function, or condition of; to convert. • To change markedly the appearance or form of. • Lossless and reversible. • By the way, transforming is easy; a kid can do it. • Putting things back is the problem. • Think of a three-year-old: he can pretty much transform anything, disassemble anything, but … • For the BWT, there exist efficient inverse algorithms that retrieve the original text from the transformed text.

  3. What is BWT? • The Burrows-Wheeler Transform (BWT) is a block-sorting, lossless, and reversible data transform. • The BWT permutes a text into a new sequence that is usually more “compressible”. • It was introduced in 1994 by Michael Burrows and David Wheeler. • The transformed text can be better compressed with fast locally adaptive algorithms, such as run-length encoding (or move-to-front coding) in combination with Huffman coding (or arithmetic coding).

  4. Outline • What does BWT stand for? • Why BWT? • Data compression algorithms • RLE • Huffman coding • Combining them • What is left out? • Bringing reality closer to ideality • Steps of BWT • BWT is reversible and lossless • Steps to invert • Variants of BWT • ST • When was BWT initially proposed? • Where are the inventors of the algorithm? • Your homework!

  5. Why BWT? • Run-length encoding (RLE) • Replaces a long run of a repeated character with the character and a count of the repetition, squeezing the run down to a few symbols. • AAAAAAA • *A7, where * is a flag character • Ideally, the longer the runs of identical characters, the better. • In reality, however, the input data does not necessarily contain the long runs that RLE favors.
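As a concrete illustration of the idea on this slide, here is a minimal run-length encoder in Python; the * flag and the minimum run length of 4 are illustrative assumptions, not fixed by the slides:

```python
def rle(text, flag="*"):
    """Run-length encode text, writing long runs as flag + char + count."""
    # Simplification: assumes the flag character does not occur in the input.
    out = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1  # extend the current run
        run = j - i
        if run >= 4:  # only runs long enough to pay for the 3-symbol code
            out.append(f"{flag}{text[i]}{run}")
        else:
            out.append(text[i] * run)
        i = j
    return "".join(out)
```

For example, rle("AAAAAAA") gives "*A7", matching the slide; text without long runs, such as "mississippi", passes through unchanged, which is exactly the problem the BWT addresses.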

  6. Bridge reality and ideality • BWT can transform a text into a sequence that is easier to compress. • Closer to ideality (what RLE expects). • Compressing the transformed text improves overall compression performance.

  7. Preliminaries • Alphabet Σ • {a, b, c, $} • We assume • an order on the alphabet • a < b < c < $ • One character, denoted $, is reserved for use as the sentinel.

  8. How to transform? • Three steps • Form an N × N matrix by cyclically rotating (left) the given text to form the rows of the matrix. • Sort the rows of the matrix in lexicographic order. • Extract the last column of the matrix.
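The three steps can be sketched in Python as follows (a naive version that materializes the whole matrix; it assumes, as the slides do, that the sentinel $ sorts greatest):

```python
def bwt(text):
    """Naive Burrows-Wheeler transform of a text ending in the sentinel '$'."""
    n = len(text)
    # Step 1: form the N x N matrix of cyclic left rotations.
    rotations = [text[i:] + text[:i] for i in range(n)]
    # Step 2: sort the rows; rank '$' above every other character,
    # matching the order a < b < c < $ assumed in the slides.
    rank = {c: i for i, c in enumerate(sorted(set(text) - {"$"}))}
    rank["$"] = len(rank)
    rotations.sort(key=lambda row: [rank[c] for c in row])
    # Step 3: the transform is the last column of the sorted matrix,
    # plus the row where the original text ended up (the primary index).
    last = "".join(row[-1] for row in rotations)
    return last, rotations.index(text)
```

On the running example of the following slides, bwt("mississippi$") yields ("ssmp$pissiii", 4).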

  9. One example • How the BWT transforms “mississippi”. • T = mississippi$

  10. Step 1: form the matrix • The N × N symmetric matrix OM is constructed from the texts obtained by rotating the text T. • The matrix OM has T as its first row, i.e. OM[1, 1:N] = T. • The remaining rows of OM are constructed by applying successive cyclic left-shifts to T: each new row T_i is obtained by cyclically shifting the previous row T_{i-1} one position to the left. • The resulting matrix OM is shown on the next slide.

  11. A text T is a sequence of characters drawn from the alphabet. • Without loss of generality, a text T of length N is denoted x_1 x_2 x_3 … x_{N-1} $, where every character x_i is in the alphabet Σ for i in [1, N-1]. The last character of the text is a sentinel, which is the lexicographically greatest character in the alphabet and occurs exactly once in the text. • Appending a sentinel to the original text is not a must, but it simplifies the presentation and makes every rotation of the text distinct. • abcababac$

  12. Step 1: form the matrix. First treat the input string as a cyclic string and construct an N × N matrix from it.

  13. Step 1: form the matrix

m i s s i s s i p p i $
i s s i s s i p p i $ m
s s i s s i p p i $ m i
s i s s i p p i $ m i s
i s s i p p i $ m i s s
s s i p p i $ m i s s i
s i p p i $ m i s s i s
i p p i $ m i s s i s s
p p i $ m i s s i s s i
p i $ m i s s i s s i p
i $ m i s s i s s i p p
$ m i s s i s s i p p i

  14. Step 2: transform the matrix • Now, we sort all the rows of the matrix OM in ascending order, with the leftmost element of each row being the most significant position. • Consequently, we obtain the transformed matrix M given below.

i p p i $ m i s s i s s
i s s i p p i $ m i s s
i s s i s s i p p i $ m
i $ m i s s i s s i p p
m i s s i s s i p p i $
p i $ m i s s i s s i p
p p i $ m i s s i s s i
s i p p i $ m i s s i s
s i s s i p p i $ m i s
s s i p p i $ m i s s i
s s i s s i p p i $ m i
$ m i s s i s s i p p i

Completely sorted from the leftmost column to the rightmost column.

  15. Step 3: get the transformed text • The Burrows Wheeler transform is the last column in the sorted list, together with the row number where the original string ends up.

  16. Step 3: get the transformed text • From the above matrix, L is easily obtained by taking the last column of M, together with the primary index (the row number where the original string ends up): • primary index = 4 • L = s s m p $ p i s s i i i • Notice how there are 3 i's in a row and two pairs of consecutive s's: this makes the text easier to compress than the original string “mississippi$”.

  17. What is the benefit? • The transformed text is more amenable to subsequent compression algorithms.

  18. Any problem? • It sounds cool, but … • Is the transformation reversible?

  19. BWT is reversible and lossless • The remarkable thing about the BWT is not only that it generates a more easily compressible output, but also that it is reversible: it allows the original text to be regenerated from the last-column data and the primary index.

  20. BWT is reversible and lossless • mississippi$ → BWT → index 4 and ssmp$pissiii • ??? How to achieve the goal? • index 4 and ssmp$pissiii → inverse BWT → mississippi$

  21. The intuition • Assume you are in a line of 1000 people. • For some reason, the people are dispersed. • Now we need to restore the line. • What should you (the people in line) do? What is your strategy? • Centralized? • A bookkeeper or ticket numbers; this requires centralized extra bookkeeping space. • Distributed? • Every person can point out who stood immediately in front of him; the bookkeeping space is distributed.

  22. For IBWT • The order is distributed and hidden in the output itself!

  23. The trick is • Where to start? Who is the first one to ask? • The last one. • Find the immediately preceding character by finding the row that immediately precedes the current row. • A loop is needed to recover all characters. • Each iteration involves two tasks: • recover the current character (by index), and • point out the next index, to keep the loop running.

  24. Two matters • Recover the current character (by index): • L[currentindex]; so what is currentindex? • Point out the next index: • currentindex = new index; • // to update currentindex, we need an updating method.

  25. We want to know where the preceding character of a given character is. [sorted matrix M with last column L and primary index 4, as on slide 14] Based on the already-known primary index, 4, we know that L[4], i.e. $, is the first character to retrieve, working backward. But which character is the next one to retrieve?

  26. We want to know where the preceding character of a given character is. [sorted matrix M with last column L and primary index 4, as on slide 14] We know that the next character is going to be ‘i’. But L[6] = L[9] = L[10] = L[11] = ‘i’. Which index should be chosen? Any of 6, 9, 10, and 11 gives us the right character ‘i’, but the correct strategy also has to determine which index is the next index to continue the restoration.

  27. We know that the next character is going to be ‘i’. • But L[6] = L[9] = L[10] = L[11] = ‘i’. Which index should be chosen? • Any of 6, 9, 10, and 11 gives us the right character ‘i’, but the correct strategy also has to determine which index is the next index to continue the restoration.

  28. The solution • The solution turns out to be very simple: • use LF mapping! • Read on to see what LF mapping is.

  29. Inverse BW-Transform • Assume we know the complete ordered matrix. • Using L and F, construct an LF-mapping LF[1…N] which maps each character in L to its occurrence in F. • Using the LF-mapping and L, reconstruct T backwards by threading through the LF-mapping and reading the characters off of L.

  30. L and F [sorted matrix M: first column F, last column L, primary index 4, as on slide 14]

  31. LF mapping [sorted matrix M as on slide 14] LF = 7 8 4 5 11 6 0 9 10 1 2 3; primary index 4.

  32. Inverse BW-Transform: Reconstruction of T • Start with T[] blank. • Let u = N. Initialize s = the primary index (4 in our case). • T[u] = L[s]. We know that L[s] is the last character of T because M[the primary index] ends with $. • For each i = u-1, …, 1 do: s = LF[s] (threading backwards); T[i] = L[s] (read off the next letter back).

  33. Inverse BW-Transform: Reconstruction of T • First step: s = 4, T = [… _ _ _ _ _ $] • Second step: s = LF[4] = 11, T = [… _ _ _ _ i $] • Third step: s = LF[11] = 3, T = [… _ _ _ p i $] • Fourth step: s = LF[3] = 5, T = [… _ _ p p i $] • And so on…

  34. Who can retrieve the data? • Please complete it!

  35. Why does LF mapping work? [sorted matrix M with LF = 7 8 4 5 11 6 0 9 10 1 2 3 and primary index 4, as on slide 31] ? Which one?

  36. Why does LF mapping work? [same matrix and LF mapping] ? Why not this?

  37. Why does LF mapping work? [same matrix and LF mapping] ? Why this?

  38. Why does LF mapping work? [same matrix and LF mapping] ? Why this?

  39. Why does LF mapping work? [same matrix and LF mapping] ? Why this?

  40. The mathematical explanation • Let T1 = S1 + P and T2 = S2 + P (string concatenation, with |S1| = |S2|). • If T1 < T2, then S1 < S2. • Now, let us swap the order of S and P: • P + S1 = T1’ • P + S2 = T2’ • Since S1 < S2, we know T1’ < T2’.

  41. The secret is hidden in the sorting strategy of the forward transform. • The sorting strategy preserves the relative order of equal characters in both the last column and the first column.

  42. We had assumed we have the matrix. But actually we don’t. • Observation: we only need two columns. • Amazingly, the information contained in the Burrows-Wheeler transform (L) is enough to reconstruct F, hence the mapping, and hence the original message!

  43. First, we know all of the characters in the original message, even if they're permuted in the wrong order. This enables us to reconstruct the first column.

  44. Given only this information, you can easily reconstruct the first column. The last column tells you all the characters in the text, so just sort these characters to get the first column.

  45. Inverse BW-Transform: Construction of C • Store in C[c] the number of occurrences in T of the characters that sort before c. • In our example: T = mississippi$ • counts: i 4, m 1, p 2, s 4, $ 1 • C = [0 4 5 7 11] (for i, m, p, s, $) • Notice that C[c] + m is the position of the m-th occurrence of c in F (if any).
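The construction of C can be sketched in Python, using the slide order i < m < p < s < $ (sentinel greatest):

```python
from collections import Counter

def build_C(text):
    """C[c] = number of characters in text that sort strictly before c."""
    # '$' is the sentinel and ranks greatest, as the slides assume.
    order = sorted(set(text) - {"$"}) + ["$"]
    counts = Counter(text)
    C, total = {}, 0
    for c in order:
        C[c] = total
        total += counts[c]
    return C
```

For T = mississippi$ this gives C[i] = 0, C[m] = 4, C[p] = 5, C[s] = 7, C[$] = 11, i.e. the [0 4 5 7 11] on the slide.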

  46. Inverse BW-Transform: Constructing the LF-mapping • Why and how does the LF-mapping work? • Notice that for every row of M, L[i] directly precedes F[i] in the text (thanks to the cyclic shifts). • Let L[i] = c, let r_i be the number of occurrences of c in the prefix L[1, i], and let M[j] be the r_i-th row of M that starts with c. Then the character in the first column F corresponding to L[i] is located at F[j]. • How do we use this fact in the LF-mapping?

  47. Inverse BW-Transform: Constructing the LF-mapping • So, define LF[1…N] by LF[i] = C[L[i]] + r_i. • C[L[i]] gets us the proper offset to the zeroth occurrence of L[i], and the addition of r_i gets us the r_i-th row of M that starts with c.
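The definition LF[i] = C[L[i]] + r_i can be sketched in Python (0-based indexing here, so the rank enters as a count minus one):

```python
def lf_mapping(L, C):
    """LF[i] = C[L[i]] + r_i, where r_i counts occurrences of L[i] in L[0..i]."""
    seen = {}
    LF = []
    for c in L:
        seen[c] = seen.get(c, 0) + 1   # r_i for this position
        LF.append(C[c] + seen[c] - 1)  # -1 because rows are 0-based here
    return LF
```

With L = ssmp$pissiii and the C array from slide 45, this reproduces the mapping 7 8 4 5 11 6 0 9 10 1 2 3 shown on slide 31.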

  48. Inverse BW-Transform • Construct C[1…|Σ|], which stores in C[i] the cumulative number of occurrences in T of character i. • Construct an LF-mapping LF[1…N] which maps each character in L to the character occurring in F using only L and C. • Reconstruct T backwards by threading through the LF-mapping and reading the characters off of L.
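The backward threading in the last step can be sketched as follows (0-based indices; the LF array is the one shown on slide 31):

```python
def reconstruct(L, LF, primary):
    """Rebuild T backwards: T ends with L[primary], then follow LF."""
    out = []
    s = primary
    for _ in range(len(L)):
        out.append(L[s])  # read off the next character, back to front
        s = LF[s]         # hop to the row holding the preceding character
    return "".join(reversed(out))
```

On the running example, reconstruct("ssmp$pissiii", [7, 8, 4, 5, 11, 6, 0, 9, 10, 1, 2, 3], 4) recovers "mississippi$".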

  49. Another example • You are given an input string “ababc”. • (a) Using Burrows-Wheeler, create all cyclic shifts of the string. • (b) Sort them. • (c) Output L and the primary index. • (d) Given L, determine F and LF (and show how you do it). • (e) Decode the original string using the primary index, L, and LF (and show how you do it).

  50. Pros and cons of BWT • Pros: • The transformed text enjoys a compression-favorable property: identical characters tend to be grouped together, so the probability of finding a character close to another instance of the same character is increased substantially. • More importantly, there exist efficient and clever algorithms to restore the original string from the transformed result. • Cons: • The need to sort all the contexts up to their full length N is the main cause of the super-linear time complexity of BWT. • Super-linear time algorithms are not hardware friendly.
