Linear-time encodable and decodable error-correcting codes
Daniel A. Spielman

Presented by Tian Sang and Jed Liu

March 3rd, 2003

Error-Reduction Codes
  • Weaker than error-correcting codes
  • Can remove most of the errors, provided that not too many message bits and check bits are corrupted
  • Definition

A code C of length n with rn message bits and (1-r)n check bits is an error-reduction code of rate r, error reduction ε, and reducible distance δ if there is an algorithm that, when given a codeword with v ≤ δn corrupt message bits and t ≤ δn corrupt check bits, will output a word that differs from the uncorrupted message in at most εt message bits

Error-correcting codes from error-reduction codes
  • C_0: an error-correcting code of block length n_0 and rate ¼, from which a δ/4 fraction of errors can be corrected
  • R_k: a family of error-reduction codes with n_0·2^k message bits, n_0·2^{k-1} check bits, ε > ½, and δ > 0
  • C_k: block length n_0·2^k and rate ¼, built recursively from the following pieces (see the encoder sketch below):
  • M_k: the n_0·2^{k-2} message bits of C_k
  • A_k: the n_0·2^{k-3} check bits of encoding M_k using R_{k-2}
  • B_k: the 3n_0·2^{k-3} check bits of encoding A_k using C_{k-1}
  • C'_k: the n_0·2^{k-2} check bits of encoding A_k ∪ B_k using R_{k-1}
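
The recursion is easier to see written out. Below is a minimal Python sketch of the recursive encoder; it is only an illustration under the assumptions above, not Spielman's implementation. The helpers R_checks (the encoder of R_j) and C0_checks (the encoder of C_0) are hypothetical stand-ins passed in as functions, and bits are plain Python lists so that + concatenates.

    def encode_Ck_checks(k, message, R_checks, C0_checks):
        """Return the check bits of C_k for `message` (= M_k, a list of bits).
        R_checks(j, bits) -> check bits of the error-reduction code R_j   (assumed given)
        C0_checks(bits)   -> check bits of the base code C_0              (assumed given)
        The full C_k codeword is message + the returned check bits (rate 1/4).
        """
        if k == 0:
            return C0_checks(message)         # base case: the constant-size code C_0
        A = R_checks(k - 2, message)          # A_k : R_{k-2} check bits of M_k
        B = encode_Ck_checks(k - 1, A, R_checks, C0_checks)
                                              # B_k : C_{k-1} check bits of A_k
        Cp = R_checks(k - 1, A + B)           # C'_k: R_{k-1} check bits of A_k ∪ B_k
        return A + B + Cp                     # |A| + |B| + |Cp| = 3·|message|
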
Lemma 2

(1) The codes C_k are error-correcting codes of block length n_0·2^k and rate ¼ from which a δ/4 fraction of errors can be corrected

(2) C_k are linear-time encodable/decodable if the R_k have a linear-time encoding algorithm and a linear-time error-reduction algorithm that will

(a) on input a word with v corrupt message bits and t corrupt check bits, v, t ≤ δn, output a word with at most max(v/2, t/2) corrupt message bits

(b) on input a word with v ≤ δn corrupt message bits and t = 0 corrupt check bits, output the codeword without corrupt bits

Proof by induction

The base case is the code C_0 of block length n_0 and rate ¼, from which a δ/4 fraction of errors can be corrected. Clearly we can encode/decode C_0 in constant time c

  • Encoding time of C_k

By assumption, R_k is linear-time encodable and linear-time error-reducible, in time c_1·n_0·2^k and c_2·n_0·2^k respectively

The time to encode C_k = the time to encode R_{k-2}
                       + the time to encode C_{k-1}
                       + the time to encode R_{k-1}

= c_1·n_0·2^{k-2} + (3c_1·n_0·2^{k-2} + c) + c_1·n_0·2^{k-1}

= 3c_1·n_0·2^{k-1} + c   (linear in the size of C_k)
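
The step from the first line to the closed form uses the inductive hypothesis for C_{k-1}. A short worked version of the arithmetic, writing T(k) for the encoding time of C_k (this notation is introduced here only for convenience):

\[
\begin{aligned}
T(k) &= c_1 n_0 2^{k-2} + T(k-1) + c_1 n_0 2^{k-1} \\
     &\le c_1 n_0 2^{k-2} + \bigl(3 c_1 n_0 2^{k-2} + c\bigr) + c_1 n_0 2^{k-1} \\
     &= 2 c_1 n_0 2^{k-1} + c_1 n_0 2^{k-1} + c \;=\; 3 c_1 n_0 2^{k-1} + c,
\end{aligned}
\]

where the second line substitutes the inductive hypothesis T(k-1) ≤ 3c_1·n_0·2^{k-2} + c and the base case is T(0) = c.
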

Error-correction of C_k

Suppose there are not many errors in C_k (at most a δ/4 fraction)

(1) Use C'_k as check bits to reduce the errors in A_k ∪ B_k; then not many errors are left in A_k ∪ B_k (at most a δ/8 fraction)

(2) A_k ∪ B_k is in fact a codeword of C_{k-1}, so by the inductive hypothesis C_{k-1} can correct all the remaining errors in A_k ∪ B_k

(3) Since A_k is now free of errors and there are not many errors in M_k (at most a δ/4 fraction), we can use A_k as check bits to correct all errors in M_k (by assumption (2)(b) of Lemma 2)
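
The same three steps, as a minimal Python sketch mirroring the encoder sketch above; again an illustration under stated assumptions, not Spielman's code. Here reduce_R(j, msg, chk) is a hypothetical stand-in for the error-reduction algorithm of R_j (returning a corrected copy of msg), decode_C0_msg recovers the message bits of C_0, and a C_k codeword is assumed to be laid out as M_k | A_k | B_k | C'_k.

    def decode_Ck_message(k, word, reduce_R, decode_C0_msg):
        """Recover the message bits M_k from a received (corrupted) C_k word.
        word: list of bits of length n_0·2^k, laid out as M_k | A_k | B_k | C'_k
              (assumes len(word) is divisible by 8)
        reduce_R(j, msg, chk) -> error-reduced copy of msg under R_j    (assumed given)
        decode_C0_msg(word)   -> corrected message bits of C_0          (assumed given)
        """
        if k == 0:
            return decode_C0_msg(word)
        n = len(word)
        M  = word[:n // 4]                        # M_k  : n/4 bits
        A  = word[n // 4:3 * n // 8]              # A_k  : n/8 bits
        B  = word[3 * n // 8:3 * n // 4]          # B_k  : 3n/8 bits
        Cp = word[3 * n // 4:]                    # C'_k : n/4 bits
        # (1) use C'_k as check bits to reduce the errors in A_k ∪ B_k
        AB = reduce_R(k - 1, A + B, Cp)
        # (2) A_k ∪ B_k is a codeword of C_{k-1} whose message bits are A_k;
        #     the recursive call corrects A_k completely
        A = decode_Ck_message(k - 1, AB, reduce_R, decode_C0_msg)
        # (3) A_k is now error-free (t = 0), so by assumption (2)(b) the
        #     error reduction of R_{k-2} returns M_k with no corrupt bits
        return reduce_R(k - 2, M, A)
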

A simple construction

B is a (d, 2d)-regular bipartite graph between n message-bit vertices and n/2 check-bit vertices; in the code R(B), each check bit should equal the parity of its neighboring message bits

Simple Sequential Error-Reduction Algorithm

Repeat

If there is a message bit that has more unsatisfied than satisfied neighbors, then flip that bit

Until no such message bit remains
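
A self-contained Python sketch of this loop, assuming the parity reading of R(B) above (check bit j is satisfied when it equals the XOR of its adjacent message bits) and an adjacency-list representation of B; the function names and representation are choices of this sketch, not of the paper.

    def sequential_error_reduction(message, checks, neighbors):
        """Simple sequential error reduction for R(B) (an illustrative sketch).
        message:   list of 0/1 message bits (possibly corrupted); a corrected copy is returned
        checks:    list of 0/1 check bits (possibly corrupted); never modified
        neighbors: neighbors[i] = indices of the check bits adjacent to message bit i
        """
        message = list(message)
        # parity[j] = current XOR of the message bits adjacent to check bit j
        parity = [0] * len(checks)
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                parity[j] ^= message[i]

        def unsatisfied(i):
            return sum(parity[j] != checks[j] for j in neighbors[i])

        flipped = True
        while flipped:                              # Repeat ...
            flipped = False
            for i in range(len(message)):
                # ... if some message bit has more unsatisfied than satisfied
                # neighbors, flip it and update the affected parities ...
                if 2 * unsatisfied(i) > len(neighbors[i]):
                    message[i] ^= 1
                    for j in neighbors[i]:
                        parity[j] ^= 1
                    flipped = True
        return message                              # ... until no such message bit remains

For clarity the sketch simply rescans every bit after a pass; the linear-time claim below additionally relies on the constant degrees and on keeping track of which bits currently have a majority of unsatisfied neighbors.
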

  • Lemma 10

Let B be a (c, d, α, (3/4)d + 2) expander graph. If the algorithm above for R(B) is given a word x that differs from a codeword w of R(B) in at most v ≤ αn/2 message bits and t ≤ αn/2 check bits, then the algorithm will output a word that differs from w in at most t/2 of its message bits

Proof
  • This algorithm is very similar to the simple sequential algorithm for expander codes
  • First show that if αn ≥ v ≥ t/2, there is a message bit that has more unsatisfied than satisfied neighbors
  • Since the number of unsatisfied check bits decreases with each flip, we can prove that v ≤ αn holds throughout. So the algorithm can only terminate with fewer than t/2 corrupt message bits remaining
  • Since the degrees are constant, the algorithm clearly runs in linear time
Simple Parallel Error-Reduction Round
    • For each message bit, count the number of unsatisfied check bits among its neighbors
    • Flip each message bit that has more unsatisfied than satisfied neighbors
  • Lemma 13

Assume a word differs from a codeword w of R(B) in at most v ≤ αn/2 message bits and t ≤ αn/2 check bits; then one such round will output a word that differs from w in at most v(d-4)/d of its message bits

  • Simple Parallel Error-Reduction Algorithm

Iterate log_{d/(d-4)} 2 simple parallel error-reduction rounds (by Lemma 13, enough rounds to cut the number of corrupt message bits in half)
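
A matching sketch of one parallel round, with the same representation and the same assumed parity reading of R(B) as in the sequential sketch: the unsatisfied-neighbor counts are all taken before any flips, and then every qualifying bit is flipped at once.

    def parallel_error_reduction_round(message, checks, neighbors):
        """One simple parallel error-reduction round for R(B) (an illustrative sketch)."""
        parity = [0] * len(checks)                 # XOR of adjacent message bits, per check bit
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                parity[j] ^= message[i]
        # count each message bit's unsatisfied checks *before* flipping anything
        unsat = [sum(parity[j] != checks[j] for j in neighbors[i])
                 for i in range(len(message))]
        # flip every message bit with more unsatisfied than satisfied neighbors, simultaneously
        return [b ^ int(2 * unsat[i] > len(neighbors[i]))
                for i, b in enumerate(message)]
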

Theorem 15

From a family of (c, d, α, (3/4)d + 4) expander graphs between sets of n_0·2^k and n_0·2^{k-1} vertices for all k ≥ -1, one can construct an infinite family of error-correcting codes that have linear-time encoding algorithms and linear-time decoding algorithms that will correct an α/8 fraction of errors.

  • Problem

Such graphs can only be obtained through a randomized construction.
