
Channel Coding




  1. Channel Coding Data Communication, Lecture 11

  2. Basic Digital Communications System
  [Block diagram, flattened in the transcript; recoverable content:]
  • Transmit path: source (analogue audio/video or digital data) → anti-alias filter and A/D conversion (Nyquist sampling, ~6 dB per bit) → channel coding (FEC, ARQ, parity, block, convolutional) → pulse-shaping filter (controls ISI) → modulation (ASK, FSK, PSK; binary or M'ary, bits/symbol) → channel filter
  • Communications channel: loss, interference, noise, distortion
  • Receive path: channel filter → demodulation and regeneration (matched filter, decision threshold, timing recovery; envelope or coherent detection with carrier recovery) → channel decoding (FEC, ARQ, parity) → D/A conversion and low-pass filtering (quantisation noise) → analogue audio/video or digital data
  • Key relations: Ts = symbol pulse width; Bmin = 1/(2Ts); B = Bmin(1 + a); with M-level symbols, C = 2 B log2 M; Shannon capacity C = B log2(S/N + 1)

  3. Taxonomy of Coding
  • Source coding / compression coding: redundancy removal, either destructive (JPEG, MPEG) or non-destructive (zip); makes bits equally probable
  • Cryptography (ciphering): secrecy/security; encryption, e.g. DES
  • Line coding: for baseband communications; RX synchronisation, spectral shaping for bandwidth requirements, error detection
  • Error control coding: strives to utilise channel capacity by adding extra bits
    • Error correction coding (FEC): no feedback channel; quality paid for with redundant bits
    • Error detection coding: feedback channel and retransmissions (used in ARQ, as in TCP/IP); quality paid for with delay
  • FEC: Forward Error Correction; ARQ: Automatic Repeat Request; DES: Data Encryption Standard

  4. Source Coding
  • Coding for digital communications is a mathematically demanding subject
  • Purpose of source coding: to transform the information from the source into the form best suited for transmission
  • Nowadays this usually means an algorithm that compresses the bit or symbol content, in addition to (waveform) A/D conversion

  5. Current Algorithms
  • Image compression algorithms: MPEG (moving images), JPEG (static images)
  • Voice & music: less well standardised, e.g. GSM codecs, MP3
  • Coder and decoder are complementary

  6. Channel Coding
  • Applied to communication links to improve the reliability of the information being transferred
  • Works by adding extra bits to the data stream; this increases the amount of data to be sent, but enables the receiver to detect, and even correct, errors

  7. Repetition Coding
  • In repetition coding, each bit is simply repeated several times
  • Can be used for error correction or detection
  • Assuming independent errors with bit-error probability p, the number of errors in n bits is binomially distributed: P(i errors in n bits) = C(n, i) p^i (1 - p)^(n - i)
  • Example: in 3-bit repetition coding the encoded word is formed by a simple rule: 0 → 000, 1 → 111
  • The code is decoded by majority voting, e.g. 101 → 1
  • A decoding error occurs if all three bits are inverted (one code word is swapped into the other) or if two bits are inverted (the damaged word lands on the wrong side of the decoding decision region)
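The 3-bit repetition scheme above can be sketched in a few lines of Python (an illustrative sketch; the slides do not prescribe an implementation, and the function names are invented here):

```python
def repeat3_encode(bits):
    """Rate-1/3 repetition code: 0 -> 000, 1 -> 111."""
    return [b for b in bits for _ in range(3)]

def repeat3_decode(coded):
    """Majority voting over each 3-bit group; corrects any single bit
    error per group, fails when 2 or 3 bits of a group are inverted."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]
```

For example, `repeat3_decode([1, 0, 1, 0, 0, 0])` still recovers `[1, 0]` even though the second bit of the first group was inverted in transit.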

  8. Parity-check Coding
  • Repetition coding can greatly improve transmission reliability, because a decoding error requires at least two of the three bits to be inverted
  • However, repetition reduces the transmission rate. Here the code rate was 1/3 (the ratio of the number of bits to be coded to the number of encoded bits)
  • In parity-check encoding, a check bit is added that indicates whether the number of 1s in the word to be coded is odd or even
  • With even parity, the encoded word always contains an even number of 1s
  • Example: encoding 2-bit words with even parity: 00 → 000, 01 → 011, 10 → 101, 11 → 110
  • For error detection, the parity of the received word is checked: summing its bits modulo 2 gives 0 if parity is intact and 1 if an odd number of bits were inverted
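The even-parity rule above can be sketched as follows (function names are illustrative):

```python
def parity_encode(word):
    """Append a check bit so the encoded word has an even number of 1s."""
    return word + [sum(word) % 2]

def parity_check(coded):
    """True if parity is even (no error detected); an odd number of
    inverted bits makes this False."""
    return sum(coded) % 2 == 0
```

Note that an even number of bit errors leaves the parity intact and therefore goes undetected; parity checking only flags odd error patterns.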

  9. Parity-check Error Probability

  10. Parity-check Error Probability (cont.)
  • Hence we note that parity checking is a very efficient method of error detection
  • At the same time, the information rate is reduced only to 9/10 of the uncoded rate (code rate 9/10)
  • Most telecommunication channels are memoryless AWGN channels or fading multipath channels
  • In memoryless channels, bit errors are considered independent
  • In fading multipath channels, the transfer function changes as a function of time
  • Interleaving can be used to make bit errors more independent; it can be realised by block interleaving or convolutional interleaving
  • The drawback of interleaving is delay; convolutional interleaving has about 50% smaller delay than block interleaving, and about 50% smaller memory requirement

  11. Block Interleaving
  [Figure: received power vs. time, showing reception after a fading channel.]
  • In fading channels the received data can suffer burst errors that destroy a large number of consecutive bits; this is harmful for channel coding
  • Interleaving distributes burst errors along the data stream
  • A drawback of interleaving is the extra delay it introduces
  • [Figure: worked block-interleaving example showing the received interleaved data, the block deinterleaving matrix, and the recovered data stream.]
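A block interleaver writes the data into a matrix row by row and reads it out column by column; deinterleaving reverses this. A minimal sketch (dimensions and function names are illustrative, not taken from the slide's worked example):

```python
def interleave(bits, rows, cols):
    """Write bits row by row into a rows x cols block, read column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Undo interleave(): reading columns is inverted by interleaving
    the stream again with the dimensions swapped."""
    return interleave(bits, cols, rows)
```

A burst of consecutive channel errors in the interleaved stream is spread across different rows after deinterleaving, so each code word sees only a few of them.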

  12. Basic Approaches
  • Error detection: recognising that part of the received information is in error and requesting a repeat transmission. This is ARQ (Automatic Repeat Request)
  • Error detection and correction: possible with added complexity and without re-transmission. This is FEC (Forward Error Correction)

  13. Types of ARQ
  • Stop and Wait ARQ: the simplest. The transmitter waits after each message for an acknowledgement of correct reception (an ACK). If an error is detected, the receiver sends a NAK. New messages must be stored in the transmitter buffer while this takes place
  • Go Back N ARQ: the transmitter continues until a NAK is received, which identifies the message that was in error. The transmitter backtracks and starts again from that point

  14. Types of ARQ (cont.)
  • Selective ARQ: a more complex protocol (set of rules) with a buffer in the receiver as well. The receiver informs the transmitter of the specific message or packet that is in error, and only that one is re-transmitted. The receiver then inserts the correction into the sequence
  • Although complex, this is widely used because it maximises the throughput of the transmission channel

  15. Forward Error Correction Coding (FEC)
  • Code radix: binary {0, 1}, ternary {-1, 0, 1}, octal {0, 1, ..., 7}, ...
  • Block codes: encoded bit blocks are formed from each data word separately. Convolutional codes: encoded bit blocks are a function of adjacent bit blocks
  • Systematic codes: the message word and the check bits appear separately in the encoded word. Non-systematic codes: they do not
  • Linear codes: the sum of any two code words yields another code word; the code contains the all-zero code word; additive inverse: for every code word there is a code word that, when summed with it, yields the all-zero code word. Codes without these properties are non-linear

  16. Digital Transmission
  [Block diagram, flattened in the transcript; recoverable content:]
  • Information source → source encoder: maximisation of information transferred (analogue information: BW & dynamic range; digital: bit rate)
  • Channel encoder: message protection & channel adaptation (convolutional or block coding)
  • Interleaving: fights against burst errors
  • Modulator: M-PSK/FSK/ASK, ...; choice depends on channel BW & characteristics, transmitted power, bandpass/baseband signal BW. In baseband systems the modulator and demodulator blocks are missing
  • Channel: wireline/wireless, constant/variable, linear/nonlinear; adds noise and interference, so the received signal may contain errors
  • Demodulator → deinterleaving → channel decoder → source decoder → message estimate → information sink

  17. Representing Codes by Vectors
  [Figure: (n, k) encoder block, k bits in, n bits out.]
  • The Hamming distance d(X, Y) is the number of bit positions in which code words X and Y differ
  • Code strength is measured by the minimum Hamming distance dmin, taken over all pairs of code words in the code family
  • Codes are more powerful when dmin is as large as possible
  • Code words can be mapped onto an n-dimensional grid
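These definitions translate directly into code (an illustrative sketch; the function names are invented here):

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of bit positions in which code words x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(codebook):
    """dmin: smallest Hamming distance over all pairs of code words."""
    return min(hamming_distance(x, y)
               for x, y in combinations(codebook, 2))
```

For the 3-bit repetition code {000, 111} this gives dmin = 3, consistent with the repetition-coding slide.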

  18. Hamming Distance and Code Error Detection and Correction Capability
  • Channel noise can produce non-valid code words, which are detected at the receiver
  • The number of non-valid code words depends on dmin, and consequently the number of errors that can be detected at reception is l = dmin - 1. If l + 1 errors are produced (for instance by noise or interference), the received code word is transformed into another valid code word
  • The number of errors that can be corrected is t = floor((dmin - 1)/2), where floor(·) denotes the integer part. If more than t bit errors are produced, maximum-likelihood detection cannot decide correctly in which decision region the received code word belongs
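The two capability formulas can be checked numerically (a trivial sketch; names are invented here):

```python
def errors_detectable(dmin):
    """l = dmin - 1: fewer than dmin errors cannot map one code word
    onto another valid code word."""
    return dmin - 1

def errors_correctable(dmin):
    """t = floor((dmin - 1) / 2): spheres of radius t around the code
    words stay disjoint, so maximum-likelihood decoding succeeds."""
    return (dmin - 1) // 2
```

For a code with dmin = 3 this gives l = 2 detectable and t = 1 correctable errors, matching the Hamming-code slides later on.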

  19. Code Efficiency and the Largest Minimum Distance
  [Figure: (n, k) encoder block, k bits in, n bits out.]
  • The largest possible minimum distance is dmin = n - k + 1, achieved for instance by repetition codes, where n and k are the number of bits in the encoded word and in the word to be encoded, respectively
  • The code rate Rc = k/n is a measure of code efficiency
  • Note that q = n - k bits are added for error detection/correction
  • We noticed that repetition codes have a very low efficiency, because their rate is only 1/n. Parity-check codes, on the contrary, have a much higher efficiency of (n-1)/n and still a relatively good performance

  20. Hard and Soft Decision Decoding
  • Hard decision decoding: each received code word is compared to the allowed code words, and the one with the minimum Hamming distance is selected
  • Soft decision decoding: decision reliability is estimated from the demodulator's analogue voltage after the matched filter, using Euclidean distance. The code word with the smallest Euclidean distance over all possible code words is selected as the received code word
  • [Figure: transmitted code word 0 1 1 1 0; decision voltages after the matched filter on a scale from +1.0 down to -0.2 around a decision threshold; hard decisions give 0 0 1 1 0, i.e. Hamming distance 1 to the TX code word; soft decision weighting gives Euclidean distance 0.76 to the TX code word.]
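Hard and soft decisions can be contrasted with a small sketch, assuming bit 1 is transmitted as +1 and bit 0 as -1 (the mapping and function names are illustrative, not from the slide):

```python
def hard_decode(voltages, codebook):
    """Threshold each sample at 0, then pick the code word at minimum
    Hamming distance from the resulting bits."""
    bits = tuple(1 if v > 0 else 0 for v in voltages)
    return min(codebook, key=lambda c: sum(a != b for a, b in zip(bits, c)))

def soft_decode(voltages, codebook):
    """Pick the code word at minimum squared Euclidean distance from the
    raw matched-filter samples (bit b maps to the level 2b - 1)."""
    return min(codebook,
               key=lambda c: sum((v - (2 * b - 1)) ** 2
                                 for v, b in zip(voltages, c)))
```

For samples [0.1, 0.1, -0.9] and the repetition codebook {000, 111}, hard decoding thresholds to (1, 1, 0) and picks 111, while soft decoding trusts the one strong negative sample and picks 000; this use of reliability information is the source of the soft-decision gain discussed on the next slide.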

  21. Hard and Soft Decoding (cont.)
  • Hard decoding: calculates the Hamming distance to all allowed code words and selects the code word with the smallest distance
  • Soft decoding: calculates the Euclidean distance to all allowed code words and selects the code word with the smallest distance
  • Soft decoding yields about a 2 dB improvement over hard decisions (at reasonably high SNR)
  • Soft decoding is often realised with a Viterbi decoder. In fading channels the bit stream is interleaved to mitigate the effect of burst errors
  • Computational complexity in decoding can be reduced by using convolutional codes

  22. Basics of Block Coding
  • In block-coding terminology, an input block of k bits gives rise to an output block of n bits; this is called an (n, k) code
  • The increase in block length means that the useful data rate (information transfer rate) is reduced by a factor k/n; this is known as the rate of the code
  • The factor 1 - k/n is known as the redundancy of the code

  23. Hamming Code Example

  24. Hamming Codes
  • Named after their discoverer, R. W. Hamming
  • Example: a rate-4/7 Hamming code
  • Each of the 16 possible four-bit input blocks is coded into a 7-bit output block
  • The 16 output blocks are chosen from the 2^7 = 128 possible seven-bit patterns so as to be maximally dissimilar: any two differ in at least 3 bit positions
  • If 1 error occurs, the receiver can correct it
  • If 2 errors occur, the receiver can recognise them
  • If 3 errors occur, they can go undetected
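A concrete sketch of the rate-4/7 code: systematic (7,4) encoding plus single-error correction via the syndrome. The parity-bit assignment below (p1 = d1^d2^d4, p2 = d1^d3^d4, p3 = d2^d3^d4) is one common textbook convention, not necessarily the one used in the lecture:

```python
def hamming74_encode(d):
    """Systematic (7,4) Hamming code: 4 data bits followed by 3 parity bits."""
    d1, d2, d3, d4 = d
    return [d1, d2, d3, d4,
            d1 ^ d2 ^ d4,   # p1
            d1 ^ d3 ^ d4,   # p2
            d2 ^ d3 ^ d4]   # p3

def hamming74_decode(r):
    """Correct up to one bit error: the 3-bit syndrome equals the column of
    the parity-check matrix at the flipped position (all-zero = no error)."""
    d1, d2, d3, d4, p1, p2, p3 = r
    s = (p1 ^ d1 ^ d2 ^ d4, p2 ^ d1 ^ d3 ^ d4, p3 ^ d2 ^ d3 ^ d4)
    pos = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3,
           (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}
    word = list(r)
    if s in pos:
        word[pos[s]] ^= 1        # flip the single erroneous bit
    return word[:4]              # systematic code: message is the first 4 bits
```

Any single-bit error is corrected because all seven syndrome values are distinct and non-zero; two bit errors also give a non-zero syndrome, so they are noticed, but the decoder would miscorrect them unless it is restricted to detection only.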

  25. Block Code Families
  • Hamming codes are a subset of the more general code family known as BCH (Bose-Chaudhuri-Hocquenghem) codes, discovered in 1959 and 1960
  • Whereas a Hamming code can detect up to 2 errors or correct 1, general BCH codes can correct any number of errors if the code is long enough; e.g. the (1023, 11) code can correct 255 errors and has been used in deep-space probes

  26. Block Codes
  [Figure: (n, k) encoder block, k bits in, n bits out.]
  • In (n, k) block codes, each sequence of k information bits is mapped into a sequence of n (> k) channel inputs in a fixed way, regardless of the previous information bits (in contrast to convolutional codes)
  • The code family should be formed such that the minimum distance and the code rate are large, giving high error correction/detection capability
  • In a systematic block code, the first k elements of the encoded word are the same as the message bits, and the q = n - k remaining bits are the check bits
  • A code vector of a systematic block code is therefore X = (m1, m2, ..., mk, c1, c2, ..., cq), or in partitioned representation X = (M | C)

  27. Generated Hamming Codes
  • Going through all combinations of the input vector X yields all the possible output vectors
  • Note that for linear codes the minimum distance equals the minimum weight w over the non-zero code words, i.e. the smallest number of 1s in a code word (its distance to the all-zero code word '00...0')
  • For Hamming codes the minimum distance is 3

  28. Decoding Block Codes by Hard Decisions
  • A brute-force method for error correction of a block code would compare the received word with all possible code words of the same length and choose the one with the minimum Hamming distance
  • In practice the applied codes can be very long, and such an exhaustive comparison would require a great deal of time and memory. For instance, a 9/10 code rate with a Hamming code requires k/n = (2^q - 1 - q)/(2^q - 1) >= 9/10
  • This is first satisfied at q = 6, i.e. k = 57 and n = 63: the (63, 57) Hamming code
  • There are then 2^57 different code words, so decoding by direct comparison would be quite impractical!

  29. Convolutional Coding
  • A convolutional coder operates serially on the incoming bit stream. The output block of n code digits depends not only on the current block of k input digits but also, because the coder has memory, on the previous K input frames. K is known as the constraint length of the code
  • The optimum decoding algorithm for convolutional codes is called Viterbi decoding
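A minimal rate-1/2 convolutional encoder illustrates the role of memory. The constraint length K = 3 and the generator polynomials (7, 5) in octal are a common textbook choice, not taken from the slide:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, K = 3, generators (7, 5) octal:
    each input bit produces two output bits that also depend on the
    previous K - 1 = 2 input bits held in the shift register."""
    state = [0, 0]                        # encoder memory (shift register)
    out = []
    for b in bits:
        out.append(b ^ state[0] ^ state[1])  # g1 = 111 (octal 7)
        out.append(b ^ state[1])             # g2 = 101 (octal 5)
        state = [b] + state[:-1]             # shift the new bit in
    return out
```

A single 1 followed by zeros produces the output 11 10 11: the encoder keeps "ringing" for K - 1 further input bits, which is exactly the memory a Viterbi decoder exploits.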
