
Belief-Propagation with Information Correction: Near Maximum-Likelihood Decoding of LDPC Codes

Ned Varnica+, Marc Fossorier#, Alek Kavčić+

+Division of Engineering and Applied Sciences

Harvard University

#Department of Electrical Engineering

University of Hawaii

Outline
  • Motivation – BP vs ML decoding
  • Improved iterative decoder of LDPC codes
  • Types of BP decoding errors
  • Simulation results
LDPC Code Graph
  • Parity check matrix H (Nc x N)
  • Bipartite Tanner code graph G = (V, E, C)
    • Variable (symbol) nodes vi ∈ V, i = 0, 1, …, N-1
    • Parity check nodes cj ∈ C, j = 0, 1, …, Nc-1
    • A non-zero entry in H corresponds to an edge in G (see the sketch after this list)
  • Code rate R = k/N, k ≥ N - Nc
  • Belief Propagation
    • Iterative propagation of conditional probabilities
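A minimal sketch (toy matrix and names, not from the slides) of how the Tanner graph edges and the code rate can be read off a parity-check matrix H:

```python
import numpy as np

# Toy parity-check matrix H (Nc x N); entries and sizes are illustrative only.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

Nc, N = H.shape
k = N - Nc                    # k = N - Nc when H has full rank over GF(2)
R = k / N                     # code rate

# Every non-zero entry H[j, i] corresponds to an edge (v_i, c_j) of the Tanner graph G.
edges = [(i, j) for j in range(Nc) for i in range(N) if H[j, i]]
print(f"N={N}, Nc={Nc}, k={k}, R={R:.2f}, |E|={len(edges)}")
```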
Standard Belief-Propagation on LDPC Codes
  • Locally operating
    • optimal for cycle-free graphs
      • Optimized LDPC codes (Luby et al 98, Richardson, Shokrollahi & Urbanke 99, Hou, Siegel & Milstein 01, Varnica & Kavcic 02)
    • sub-optimal for graphs with cycles
  • Good finite-length LDPC codes have an exponential number of cycles in their Tanner graphs (Etzion, Trachtenberg and Vardy 99)
  • Encoder constructions
  • BP to ML performance gap due to convergence to pseudo-codewords (Wiberg 95, Forney et al 01, Koetter & Vontobel 03)
Examples
  • Short codes
    • e.g. Tanner code with N = 155, k = 64, diam = 6, girth = 8, dmin = 20
  • Long codes
    • e.g. Margulis code with N = 2640, k = 1320
Goals
  • Construct decoder
    • Improved BP decoding performance
    • More flexibility in performance versus complexity
    • Can nearly achieve ML performance with much lower computational burden
      • Reduce or eliminate LDPC error floors
  • Applications
    • Can use with any “off-the-shelf” LDPC encoder
    • Can apply to any communication/data storage channel
Subgraph Definitions

[Block diagram: transmitted binary vector x ∈ {0,1}^N → channel → received vector r ∈ R^N → BCJR detector → BP decoder → decoded vector x̂(L) ∈ {0,1}^N after L iterations]

  • Syndrome s = H x̂(L)
  • CS(L) - set of unsatisfied check nodes (SUC): CS(L) = {ci : (H x̂(L))i ≠ 0} (computed in the sketch after this list)
  • VS(L) - set of variable nodes incident to some c ∈ CS(L)
  • ES(L) - set of edges connecting VS(L) and CS(L)

Definition 1: The SUC graph GS(L) = (VS(L), ES(L), CS(L)) is the graph induced by the SUC CS(L)

  • dGs(v) - degree of node v ∈ V in the SUC graph GS(L)
    • dGs(v) ≤ dG(v)
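A minimal sketch, assuming numpy arrays for H and the hard decision x̂(L) (named x_hat here), of how the syndrome, the SUC CS(L), and the SUC degrees dGs(v) can be computed:

```python
import numpy as np

def suc_graph(H, x_hat):
    """Return the unsatisfied checks C_S, the incident variable nodes V_S,
    and the SUC degrees d_Gs(v) for a hard decision x_hat after L iterations."""
    s = H.dot(x_hat) % 2             # syndrome s = H x_hat (mod 2)
    C_S = np.flatnonzero(s)          # unsatisfied check nodes
    d_Gs = H[C_S].sum(axis=0)        # per-variable degree in the SUC graph
    V_S = np.flatnonzero(d_Gs)       # variable nodes incident to some c in C_S
    return C_S, V_S, d_Gs

# Example with a toy H and an arbitrary hard decision:
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
x_hat = np.array([1, 0, 0, 0, 0, 0])
print(suc_graph(H, x_hat))
```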
Properties of SUC graph

Observation 1: The higher the degree dGs(v) of a node v ∈ VS(L), the more likely v is to be in error.

e.g. statistics for Tanner (155,64) code blocks for which BP failed on the AWGN channel at SNR = 2.5 dB

  • Select a variable node v
  • Perform information correction
Node Selection Strategy 1

Strategy 1: Determine the SUC graph and select the node with maximal degree dGs in the SUC graph GS(L).

Select node v0, v2, or v12

Properties of the SUC Graph (cont'd)

Definition 2: Nodes v1 and v2 are neighbors with respect to the SUC if there exists c ∈ CS(L) incident to both v1 and v2.

  • nv(m) - number of neighbors of v with degree dGs = m
    • e.g. nv(2) = 1 and nv(1) = 4

Observation 2: The smaller the number of neighbors (w.r.t. the SUC graph) with high degree, the more likely v is to be in error.

Node Selection Strategy 2

Strategy 2: Among the nodes with maximal degree dGs, select a node with the minimal number of highest-degree neighbors (see the sketch below).

nv0(2) = nv12(2) = 1; nv2(2) = 2

nv0(1) = 4; nv12(1) = 6

Select node v0
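A minimal sketch of Strategies 1 and 2 combined, reusing the C_S and d_Gs arrays from the sketch above (all names are illustrative):

```python
import numpy as np

def select_node(H, C_S, d_Gs):
    """Strategy 1: restrict attention to nodes with maximal SUC degree.
    Strategy 2: among those, prefer the fewest highest-degree SUC neighbors."""
    d_max = d_Gs.max()
    S_v = np.flatnonzero(d_Gs == d_max)        # candidate (suspicious) nodes

    def n_max_degree_neighbors(v):
        # Neighbors w.r.t. the SUC (Definition 2): share an unsatisfied check with v.
        nbrs = set()
        for c in C_S:
            if H[c, v]:
                nbrs.update(np.flatnonzero(H[c]))
        nbrs.discard(v)
        return sum(1 for u in nbrs if d_Gs[u] == d_max)

    return min(S_v, key=n_max_degree_neighbors)   # Strategy 2 tie-break
```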

Alternatives to Strategy 2

  • dGs^max = max_{v ∈ V} dGs(v)
  • Set of suspicious nodes Sv = {v : dGs(v) = dGs^max}
  • Edge penalty function (Nc - set of variable nodes incident to c):
    r(v,c) = max_{vn ∈ Nc\{v}} dGs(vn) if Nc\{v} ≠ ∅, and r(v,c) = 0 if Nc\{v} = ∅
  • Penalty function R(v) = Σ_{c ∈ Cs} r(v,c) - Σ_{c ∉ Cs} r(v,c)
  • Select vp ∈ Sv as vp = argmin_{v ∈ Sv} R(v)
  • Numerous related approaches possible

Node Selection Strategy 3
  • Decoder input on node vi
    • Memoryless AWGN channel (BPSK): O(vi) = 2 ri / σ²

Observation 3: A variable node v is more likely to be incorrect if its decoder input is less reliable, i.e., if |O(v)| is lower.

Strategy 3: Among the nodes with maximal degree dGs, select the node with minimal input reliability |O(v)| (see the sketch below).
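A minimal sketch of Strategy 3 under the same assumptions (O holds the decoder-input LLRs; the 2r/σ² form above is the standard BPSK-over-AWGN LLR; the numbers are illustrative):

```python
import numpy as np

def select_node_strategy3(d_Gs, O):
    """Among nodes with maximal SUC degree, pick the one with the least
    reliable decoder input, i.e. the smallest |O(v)|."""
    d_max = d_Gs.max()
    S_v = np.flatnonzero(d_Gs == d_max)
    return S_v[np.argmin(np.abs(O[S_v]))]

# Illustrative decoder-input LLRs for a memoryless AWGN channel (BPSK, 0 -> +1):
sigma2 = 0.5
r = np.array([0.9, -1.1, 0.1, 0.7, -0.2, 1.3])
O = 2 * r / sigma2
d_Gs = np.array([2, 1, 2, 1, 0, 1])
print(select_node_strategy3(d_Gs, O))   # -> 2 (max SUC degree, least reliable input)
```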

Message Passing - Notation
  • Set of log-likelihood ratio messages on v nodes: M = (C, O)
  • Decoder input: O = [O(v0), …, O(vN-1)]
  • Channel detector (BCJR) input: B = [B(v0), …, B(vN-1)]


Symbol Correction Procedures
  • Replace the decoder and detector input LLRs corresponding to the selected vp
    • O(vp) = +S and B(vp) = +S
    • O(vp) = -S and B(vp) = -S
  • Perform correction in stages (see the sketch below)
    • Test 2^j combinations at stage j
    • For each test perform Kj additional iterations
    • Max number of attempts (stages): jmax
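A schematic sketch of the staged procedure; the decoder internals are abstracted behind hypothetical run_bp and pick_node callables, and S stands for a large saturation magnitude as in the bullets above:

```python
import itertools

def staged_correction(run_bp, pick_node, O, B, j_max, K, S=25.0):
    """At stage j, force the j selected nodes to O(v) = B(v) = +/-S for each of
    the 2**j sign combinations and run K further iterations; stop at the first
    valid codeword ("first codeword" rule).

    run_bp(O, B, iters) -> (x_hat, valid) and pick_node(stage) -> v_p are
    hypothetical stand-ins for the BP/BCJR decoder and for one of the
    node-selection strategies."""
    chosen = []
    for j in range(1, j_max + 1):
        chosen.append(pick_node(j))                 # select v_p for this stage
        for signs in itertools.product((+1.0, -1.0), repeat=j):
            O_t, B_t = list(O), list(B)
            for v, sgn in zip(chosen, signs):       # saturate the chosen inputs
                O_t[v] = B_t[v] = sgn * S
            x_hat, valid = run_bp(O_t, B_t, iters=K)
            if valid:
                return x_hat
    return None                                     # no valid codeword found
```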
Symbol Correction Procedures (cont'd)
  • “codeword listing” approach
    • Test all 2^jmax possibilities
    • W - collection of valid codeword candidates
    • Pick the most likely candidate
      • e.g. for the AWGN channel set x̂ = argmin_{w ∈ W} d(r, w) (see the sketch below)
  • “first codeword” approach
    • Stop at the first valid codeword
    • Faster convergence, slightly worse performance for large jmax
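A minimal sketch of the “codeword listing” choice among the collected candidates W, using squared Euclidean distance to the received vector r (BPSK mapping 0 → +1, 1 → −1 assumed; the candidate list is a toy example):

```python
import numpy as np

def pick_most_likely(W, r):
    """'Codeword listing': x_hat = argmin over w in W of d(r, w), here the squared
    Euclidean distance after BPSK mapping 0 -> +1, 1 -> -1 (AWGN channel)."""
    return min(W, key=lambda w: float(np.sum((r - (1 - 2 * np.asarray(w))) ** 2)))

W = [np.array([0, 1, 1, 0]), np.array([1, 1, 0, 0])]    # toy candidate list
r = np.array([0.8, -0.9, -1.2, 1.1])                    # toy received vector
print(pick_most_likely(W, r))                           # -> [0 1 1 0]
```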

Parallel and Serial Implementation ( jmax = 3 )

[Figure: order in which the 2^j test combinations at stages j = 1, 2, 3 are visited in the parallel and in the serial implementation]

Complexity - Parallel Implementation
  • Decoding continued
    • messages M need to be stored
    • storage grows as 2^jmax
    • lower Kj required
    • “first codeword” procedure - fastest convergence
  • Decoding restarted
    • M need not be stored
    • higher Kj required
Can we achieve ML?

[Figure: WER vs. Eb/N0 [dB] (0 to 4 dB) for the Tanner (155,64) code: original BP (max 100 iter), “codeword listing” procedure, and ML decoder]

Fact 1: As jmax → N, the “codeword listing” algorithm with Kj = 0 for j < jmax and Kjmax = 1 becomes an ML decoder.

  • For low values of jmax (jmax << N) it performs very close to the ML decoder
  • Tanner (N = 155, k = 64) code
  • jmax = 11, Kj = 10
  • Decoding continued
    • faster decoding
    • M needs to be stored
  • ML almost achieved
Pseudo-codeword Elimination
  • Pseudo-codewords compete with codewords in locally-operating BP decoding (Koetter & Vontobel 2003)
  • c - a codeword in an m-cover of G
  • ωi - fraction of the time vi ∈ V assumes an incorrect value in c
  • ω = (ω0, ω1, …, ωN-1) - pseudo-codeword
  • Pseudo-distance (for AWGN) - see the expression below
  • Eliminate a large number of pseudo-codewords by forcing symbol ‘0’ or symbol ‘1’ on nodes vp
    • Pseudo-distance spectrum is improved
    • Min pseudo-distance can be increased if jmax is large enough
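A compact way to write the AWGN pseudo-weight of Koetter & Vontobel, which underlies the pseudo-distance mentioned above (stated here from the cited reference, not from the slide itself):

```latex
w_{\mathrm{AWGN}}(\boldsymbol{\omega}) =
  \frac{\left(\sum_{i=0}^{N-1} \omega_i\right)^{2}}{\sum_{i=0}^{N-1} \omega_i^{2}},
\qquad \boldsymbol{\omega} = (\omega_0, \omega_1, \dots, \omega_{N-1}).
```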
Types of BP decoding errors
  1. Very high SNRs (error floor region)
    • Stable errors on saturated subgraphs:
      • decoder reaches a steady state and fails
      • messages passed in the SUC graph are saturated

Definition 3: Decoder D has reached a steady state in the interval [L1, L2] if CS(L) = CS(L1) for all L ∈ [L1, L2] (see the sketch below)

  2. Medium SNRs (waterfall region)
    • Unstable errors:
      • decoder does not reach a steady state
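A minimal sketch of the steady-state test in Definition 3, assuming a hypothetical suc_history list whose L-th entry is the set of unsatisfied check indices CS(L):

```python
def reached_steady_state(suc_history, L1, L2):
    """Definition 3 as a test: the decoder is in a steady state on [L1, L2]
    if the set of unsatisfied checks C_S(L) equals C_S(L1) for every L there."""
    return all(suc_history[L] == suc_history[L1] for L in range(L1, L2 + 1))

# Illustrative history of C_S(L): the SUC set freezes from iteration 2 onward.
history = [{0, 3, 5}, {0, 3}, {3, 7}, {3, 7}, {3, 7}]
print(reached_steady_state(history, 2, 4))   # -> True
```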
SUC Properties in Error Floor Region

Theorem 1: In the error floor region

Corollary: For regular LDPC codes with

  • Information correction for high SNRs (error floor region)
    • Pros:
      • Small SUC size
      • Faster convergence
    • Cons:
      • dGs plays no role in node selection
Simulation Results

[Figure: WER vs. Eb/N0 [dB] (0 to 4 dB) for the Tanner (155,64) code: original BP (max 100 iter), “first codeword” procedure, “codeword listing” procedure, and ML decoder]
  • Tanner (155,64) code
    • Regular (3,5) code
    • Channel: AWGN
    • Strategy 3
    • jmax = 11, Kj = 10
    • More than 1 dB gain
    • ML almost achieved
Simulation Results
  • Tanner (155,64) code
    • Regular (3,5) code
    • Channel: AWGN
    • Strategy 3
    • “First codeword” procedure
    • jmax = 4, 6, 8, and 11
    • Kj = 10
Simulation Results – Error Floors
  • Margulis (2640,1320) code
    • Regular (3,6) code
    • Channel: AWGN
    • Strategy 3
    • “First codeword” procedure
    • jmax = 5, Kj = 20
    • More than 2 orders of magnitude improvement in WER
Simulation Results – ISI Channels
  • Tanner (155,64) code
  • Channels:
    • Dicode (1-D)
    • EPR4 (1-D)(1+D)^2
  • Strategy 2
  • jmax = 11, Kj = 20
  • 1 dB gain
  • 20% of detected errors are ML errors
Conclusion
  • Information correction in BP decoding of LDPC codes
    • More flexibility in performance vs complexity
    • Can nearly achieve ML performance with much lower computational burden
    • Eliminates a large number of pseudo-codewords
      • Reduces or eliminates LDPC error floors
  • Applications
    • Can use with any “off-the-shelf” LDPC encoder
    • Can apply to any communication/data storage channel