
Trellis complexity and symmetry




  1. Trellis complexity and symmetry • Complexity measures • State complexity: number of states • State space dimension profile (ρ0, ρ1, …, ρn) • ρmax = maxi ρi • Branch complexity: number of branches • The Wolf bound: ρmax ≤ min{k, n − k}

  2. The trellis of the dual code • C: an [n,k] linear code, C* its dual code (an [n,n−k] linear code) • Σi(C) and Σi(C*): the state spaces at time i of C and C*, resp. • Theorem: |Σi(C)| = |Σi(C*)| (i.e. ρi = ρi*), for all i: 0 ≤ i ≤ n. • Proof: • States in Σi(C*) correspond to cosets in the partition p0,i(C*)/C*tr0,i • ρi* = k(p0,i(C*)) − k(C*tr0,i) • p0,i(C*) is generated by Hi, the parity check matrix for Ctr0,i. Thus p0,i(C*) and Ctr0,i are dual codes, and p0,i(C) and C*tr0,i are also dual codes. Thus • ρi* = k(p0,i(C*)) − k(C*tr0,i) = (i − k(Ctr0,i)) − (i − k(p0,i(C))) = k(p0,i(C)) − k(C0,i) = (k − k(Ci,n)) − k(C0,i) = ρi. • Note: the state complexities are equal, but the branch complexities differ
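
The identity ρi = k − k(C0,i) − k(Ci,n) can be checked numerically: the past (future) subcode is the kernel of the projection onto the last n−i (first i) coordinates, so ρi = rank(G[:, :i]) + rank(G[:, i:]) − k. A minimal Python sketch (not from the slides), using the [7,4] Hamming code and its [7,3] simplex dual as an assumed example:

```python
def gf2_rank(rows):
    """Row rank over GF(2); each row is a list of 0/1 bits."""
    masks = [int("".join(map(str, r)) or "0", 2) for r in rows]
    rank = 0
    while masks:
        pivot = masks.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # eliminate using the lowest set bit as pivot
        masks = [x ^ pivot if x & low else x for x in masks]
    return rank

def state_profile(G):
    """rho_i = rank(first i columns) + rank(last n-i columns) - k."""
    k, n = len(G), len(G[0])
    return [gf2_rank([row[:i] for row in G])
            + gf2_rank([row[i:] for row in G]) - k
            for i in range(n + 1)]

# [7,4] Hamming code and its dual (generator = parity check matrix of C)
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

print(state_profile(G))   # -> [0, 1, 2, 2, 3, 2, 1, 0]
print(state_profile(H))   # identical profile, as the theorem predicts
```

Both profiles come out as (0, 1, 2, 2, 3, 2, 1, 0), and the maximum 3 meets the Wolf bound min{4, 3}.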

  3. Example: the [16,11] RM code and its dual, the [16,5] RM code

  4. Minimal trellises • In principle there may be different trellises corresponding to the same code • A minimal trellis for a given code is a trellis whose state space dimension profile (ρ′0, ρ′1, …, ρ′n) satisfies ρ′i ≤ ρi for all i: 0 ≤ i ≤ n, for any other trellis for the code with profile (ρ0, ρ1, …, ρn). • A minimal trellis is unique up to isomorphism. • For a given code and a given order of the coordinates, finding a minimal trellis is simple: ρi is determined by the dimensions of the past and future subcodes, which are fixed for each i. • If we are free to permute the coordinates, the problem of finding an optimum bit ordering is much more challenging.

  5. Bit ordering • Permuting the order of the coordinates does not change • code rate • distance properties but may change • the dimensions of the future and past subcodes at any time instant, and may thus change the trellis complexity. An optimum permutation • gives the smallest possible state space at each time instant • is hard to find • is known for some classes of codes, unknown for others • may not even exist for a given code

  6. Branch complexity Example: (8,4) RM • Total number of branches in the trellis • Determines (to a large extent) the workload of a trellis based decoding algorithm • Let Ii = 2 if there is an information symbol corresponding to output bit i, Ii = 1 otherwise. • Then the branch complexity is • Σi Ii·2^ρi
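
The sum Σi Ii·2^ρi can be evaluated directly from a trellis oriented generator matrix: the leading ones mark the information positions, and the state dimension ρi counts the rows whose bit span crosses boundary i. A sketch for the (8,4) RM example, assuming a standard minimal-span generator matrix for that code:

```python
# Assumed minimal-span (trellis oriented) generator matrix for the
# (8,4) RM code; bit spans are [0,3], [1,6], [2,5], [4,7].
G = [[1,1,1,1,0,0,0,0],
     [0,1,1,0,0,1,1,0],
     [0,0,1,1,1,1,0,0],
     [0,0,0,0,1,1,1,1]]
n = 8

# bit span [a, b] = positions of the first and last 1 in each row
spans = []
for row in G:
    support = [j for j, bit in enumerate(row) if bit]
    spans.append((support[0], support[-1]))

# rho_i = number of rows active across state boundary i
rho = [sum(a < i <= b for a, b in spans) for i in range(n + 1)]

info = {a for a, b in spans}                 # leading ones: information bits
I = [2 if i in info else 1 for i in range(n)]
branches = sum(I[i] * 2 ** rho[i] for i in range(n))
print(rho, branches)                         # -> [0, 1, 2, 3, 2, 3, 2, 1, 0] 44
```

With this matrix the profile is (0,1,2,3,2,3,2,1,0) and the branch complexity is 44, against 2·15 + 14 = 44 branches counted section by section.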

  7. Cyclic codes • A linear code with the property that any cyclic shift of a codeword is also a codeword • Generated by a generator polynomial g(x) = 1 + g1x + g2x2 + … + gn−k−1xn−k−1 + xn−k • The corresponding generator matrix occurs in a natural trellis oriented (TO) form:

  8. Cyclic codes (cont.) • Time span (gi) = [i, n−k+i+1], bit span (gi) = [i, n−k+i] • If k > n−k, then (ρ0, ρ1, …, ρn) = (0, 1, …, n−k−1, n−k, n−k, …, n−k, n−k−1, …, 1, 0) • If k ≤ n−k, then (ρ0, ρ1, …, ρn) = (0, 1, …, k−1, k, k, …, k, k−1, …, 1, 0) • Cyclic codes meet the Wolf bound with equality (worst possible)
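
The closed-form profile is easy to code. A small sketch (not from the slides) that builds it and checks the Wolf-bound-with-equality claim for the [7,4] cyclic Hamming code, an assumed example with g(x) = 1 + x + x³:

```python
def cyclic_profile(n, k):
    """State space dimension profile of an [n, k] cyclic code:
    rises 0,1,...,r-1, stays flat at r = min(k, n-k), then falls."""
    r = min(k, n - k)
    rise = list(range(r))
    return rise + [r] * (n + 1 - 2 * r) + rise[::-1]

p = cyclic_profile(7, 4)        # [7,4] cyclic Hamming code
print(p)                        # -> [0, 1, 2, 3, 3, 2, 1, 0]
print(max(p) == min(4, 7 - 4))  # Wolf bound met with equality
```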

  9. Symmetry considerations • Assume that a TOGM has the following symmetry property: • For each row g with bit span (g) = [a,b], there is also a row g′ with bit span (g′) = [n−1−b, n−1−a]. • Then the trellis possesses a mirror symmetry, and in particular ρi = ρn−i.
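
The symmetry property can be checked mechanically on the set of bit spans, and the palindromic profile then follows. A sketch, again assuming the minimal-span bit spans [0,3], [1,6], [2,5], [4,7] of the [8,4] RM code:

```python
def mirror(spans, n):
    """Reflect each bit span [a, b] to [n-1-b, n-1-a]."""
    return {(n - 1 - b, n - 1 - a) for a, b in spans}

n = 8
spans = {(0, 3), (1, 6), (2, 5), (4, 7)}  # assumed TOGM bit spans, [8,4] RM
assert mirror(spans, n) == spans          # the symmetry property holds

# state dimensions from the spans: rows active across boundary i
rho = [sum(a < i <= b for a, b in spans) for i in range(n + 1)]
assert rho == rho[::-1]                   # hence rho_i = rho_{n-i}
print(rho)
```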

  10. Trellis sectionalization • So far we have considered n-section trellises, i.e. one code bit per trellis section. • Sectionalization combines adjacent bits and provides a trellis with fewer time instants, in order to simplify the decoders. • Let Λ = {t0, t1, …, tν}, for ν ≤ n. Delete all state spaces (and their adjacent branches) at time instants not in Λ, connect every pair of states, one in Σtj and one in Σtj+1, iff there was a path between these states in the original trellis, and label the new branches by the old path labels. • Uniform sectionalization: all sections of the same length • Sectionalization may • reduce state complexity • increase branch complexity
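
Since only the state spaces at the boundaries in Λ survive, the sectionalized state profile is just the restriction of the original profile to Λ. A sketch of a uniform 2-bit sectionalization, assuming the 8-section profile of the [8,4] RM code:

```python
# 8-section profile of the [8,4] RM code (assumed here)
rho = [0, 1, 2, 3, 2, 3, 2, 1, 0]

# Lambda = {t0, ..., t_nu} with nu = 4: uniform sections of length 2
boundaries = [0, 2, 4, 6, 8]
rho_sec = [rho[t] for t in boundaries]
print(rho_sec)      # -> [0, 2, 2, 2, 0]: max state dimension drops 3 -> 2
```

The state complexity drops (the peaks at t = 3 and t = 5 are deleted), while each surviving branch now carries a 2-bit label and parallel branches appear, so the branch complexity grows.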

  11. Sectionalization: Example

  12. Parallel decomposition • The sectionalized trellis may be densely connected, making a hardware implementation difficult • The motivation for parallel decomposition is to select trellis subsections that consist of structurally identical components which are not interconnected. • Code C with TOGM G; let g be a row of G with active time span a(g), and let G1 = G \ {g}. • If G is in TO form, so is G1 • ρi(C1) = ρi(C) − 1 for i ∈ a(g), ρi(C1) = ρi(C) for i ∉ a(g). • Imax(C) = {i: ρi(C) = ρmax(C)} • Thm 9.1: If there exists a row g such that a(g) ⊇ Imax(C), then sectionalization produces two parallel and identically structured subtrellises, one for C1 and one for C1 + g, such that • ρmax(C1) = ρmax(C) − 1, and • Imax(C1) = Imax(C) ∪ {i ∉ a(g): ρi(C) = ρmax(C) − 1}
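
The profile update under row removal can be demonstrated from bit spans alone. A sketch, assuming the minimal-span bit spans of the [8,4] RM code; removing the row with bit span (1,6), whose active span covers Imax(C) = {3, 5}, drops the maximum state dimension by one:

```python
def profile(spans, n):
    """State dimension profile: rows active across each boundary i."""
    return [sum(a < i <= b for a, b in spans) for i in range(n + 1)]

n = 8
spans = [(0, 3), (1, 6), (2, 5), (4, 7)]   # assumed TOGM bit spans, [8,4] RM
rho = profile(spans, n)
rho_max = max(rho)
Imax = {i for i, r in enumerate(rho) if r == rho_max}
print(Imax)                                 # -> {3, 5}

g = (1, 6)                                  # row active at every i in Imax
assert all(g[0] < i <= g[1] for i in Imax)  # precondition of Thm 9.1
rho1 = profile([s for s in spans if s != g], n)
print(rho1)                                 # max drops by 1: two parallel
assert max(rho1) == rho_max - 1             # subtrellises, C1 and C1 + g
```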

  13. Example • Thus, it is possible to decompose the trellis into smaller independent components

  14. Generalization of Thm 9.1 • R(C) = {g: a(g) ⊇ Imax(C)} • Theorem 9.2: Theorem 9.1 can be applied repeatedly: • Corollary 9.2.1: The maximum number of parallel isomorphic subtrellises such that the new total state space dimension does not exceed ρmax(C) is upper bounded by 2^|R(C)| • Corollary 9.2.2: In a minimal ν-section trellis with section boundary locations in Λ = {t0, t1, …, tν}, the log2 of the number of parallel isomorphic subtrellises equals the number of rows in a TOGM with active time spans containing the section boundary locations in Λ

  15. Low weight subtrellises • A code with (ordered) codeword weights 0, w1, w2, …, wm = n • The wt-weight subtrellis is obtained by pruning from the original trellis • every state not involved in a codeword of weight wt • every branch not involved in a codeword of weight wt • If necessary, split the zero state at each time instant into two states, to avoid codewords of weights larger than wt that merge with and diverge from the zero state in the original trellis. (Discuss) • Useful for devising a suboptimum (but simpler) decoding algorithm
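
How much survives the pruning depends on the weight distribution. A sketch (not from the slides) that enumerates the [8,4] RM code from an assumed generator matrix and counts the minimum-weight codewords the w1-weight subtrellis would keep:

```python
from itertools import product

# Assumed generator matrix for the [8,4] RM code
G = [[1,1,1,1,0,0,0,0],
     [0,1,1,0,0,1,1,0],
     [0,0,1,1,1,1,0,0],
     [0,0,0,0,1,1,1,1]]

words = []
for u in product([0, 1], repeat=4):     # all 16 message vectors
    c = [0] * 8
    for ui, row in zip(u, G):
        if ui:
            c = [a ^ b for a, b in zip(c, row)]
    words.append(tuple(c))

dist = {}
for c in words:
    w = sum(c)
    dist[w] = dist.get(w, 0) + 1
print(dist)                             # -> {0: 1, 4: 14, 8: 1}

low = [c for c in words if sum(c) == 4]
print(len(low))                         # 14 words drive the w1 = 4 subtrellis
```

Only the states and branches on those 14 weight-4 paths survive, which is why the subtrellis supports a much simpler (suboptimum) search.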

  16. Cartesian product • Trellis construction for codes designed by combining several component codes by • Interleaving • Product • Direct-sum • Cartesian product of original trellises

  17. Interleaved codes • C = C1*C2*…*Cλ formed by interleaving λ component codes • n-section trellises for each component code • In the trellis for the interleaved code • the state space is the cartesian product of the original component state spaces • there is a branch in the new trellis if there is a branch in a component trellis linking the corresponding component states
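
Because the product state space is a Cartesian product, the state space dimensions of the components simply add at each shared time index, so the state count multiplies. A minimal sketch, assuming the [7,4] Hamming profile as the component profile:

```python
def interleaved_profile(profiles):
    """State dims of component trellises add under the Cartesian product."""
    return [sum(col) for col in zip(*profiles)]

rho = [0, 1, 2, 2, 3, 2, 1, 0]          # assumed [7,4] Hamming profile
rho2 = interleaved_profile([rho, rho])  # two interleaved copies (lambda = 2)
print(rho2)                             # -> [0, 2, 4, 4, 6, 4, 2, 0]
print(2 ** max(rho), 2 ** max(rho2))    # 8 states per code -> 64 combined
```

This is exactly why the product trellis is impractical compared with decoding each component trellis on its own.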

  18. Interleaved codes: Example Note: This is utterly impractical – use the simpler trellises! Each code can be decoded individually (except on a burst channel?)

  19. Product codes • C = C1×C2 formed by a product construction: • Form a matrix where each of the k2 rows (n1 bits each) ∈ C1 • Then append parity bits to the columns so that each column ∈ C2 • An n1-section trellis for the product code can be formed by the cartesian product • Again totally impractical... but no: this code needs to be decoded in full

  20. Direct sum construction • Component codes C1 and C2, each of length n • Direct sum code C formed by C = C1 ⊕ C2 = {u+v: u ∈ C1, v ∈ C2} • Trellis: Cartesian product again
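
When C1 ∩ C2 = {0}, the direct sum has |C| = |C1|·|C2| codewords and the trellis dimensions add. A sketch (assumed example, not from the slides) that splits the [8,4] RM code into two 2-dimensional components:

```python
from itertools import product

def span_words(rows):
    """All GF(2) linear combinations of the given rows (as bitmasks)."""
    words = set()
    for u in product([0, 1], repeat=len(rows)):
        m = 0
        for ui, r in zip(u, rows):
            if ui:
                m ^= r
        words.add(m)
    return words

# Assumed direct-sum decomposition of the [8,4] RM code
C1 = span_words([0b11110000, 0b00001111])
C2 = span_words([0b01100110, 0b00111100])
C = {u ^ v for u in C1 for v in C2}     # direct sum: all u + v
print(len(C1), len(C2), len(C))         # -> 4 4 16: dimensions add
```

The Cartesian product of the two component trellises then gives a trellis for C.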

  21. Direct sum : example

  22. Suggested order of exercises Page 390 9.1 9.3 9.4 9.7 9.11 9.13 9.14 9.15 …
