SO headers based on CRC Functionality and comparisons to a Keyword approach


  1. SO headers based on CRC
     Functionality and comparisons to a Keyword approach
     Lars-Erik Jonsson (Ericsson)
     ROHC session @ IETF 48, 2000-08-03

  2. Purposes of CRC in SO headers
     • Guarantee the correctness of all decompressed headers and catch all possible errors, such as:
       • Long loss events
       • Residual bit errors
       • Errors introduced by external decompressor mechanisms (e.g. timers and wall clocks)
     • Continuously move the context forward
     • Make special decompressor features possible to implement (e.g. reverse decompression)
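
As a concrete illustration of the verification step, here is a minimal sketch in Python. The 3-bit polynomial x^3 + x + 1 and the all-ones initial register are assumptions made for the example, not values taken from the slides.

```python
def crc3(data: bytes) -> int:
    """Bit-serial 3-bit CRC; polynomial x^3 + x + 1 (an assumed choice)."""
    reg = 0x7  # assumed all-ones initial register
    for byte in data:
        for i in range(7, -1, -1):
            reg = (reg << 1) | ((byte >> i) & 1)
            if reg & 0x8:
                reg ^= 0xB  # reduce by the polynomial 0b1011
    return reg & 0x7

def header_ok(reconstructed: bytes, received_crc: int) -> bool:
    # The compressor computes the CRC over the uncompressed header; the
    # decompressor recomputes it over its reconstruction and compares.
    return crc3(reconstructed) == received_crc
```

On a match the decompressor can both deliver the header and advance its context, which is how the per-packet check doubles as the mechanism that continuously moves the context forward.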

  3. Clarification of questioned issues
     • These issues have been pointed out on the mailing list and are therefore given extra attention here, even though they are not the most important issues with the CRC approach
     • Remember that normally:
       • there will be no errors to detect
       • loss of several consecutive packets is very uncommon
       • the CRC verifies the correctness of the header and makes it possible to move the context forward

  4. Issue 1 - Long loss detection 1(3)
     • How are long loss events detected?
     • A long loss (the elevator case) occurs when 12-16 (depending on how things are interpreted) consecutive packets are lost, and the probability of that is very low ( P[long loss] )
     • An error may occur if the CRC fails to detect that a long loss has happened ( P[undetected] ). Simulations have shown that this probability is much lower than 1/8 for three bits of CRC (about 1/24)
     • If detection fails ( P[long loss] x P[undetected] ) in the first packet after the long loss, the error will still not propagate, since all headers carry a CRC. Hence, the error probability decreases exponentially with each packet (see the sketch below)
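
To make the arithmetic concrete, a small sketch with illustrative numbers: P[long loss] is link-dependent and assumed here, while the 1/24 figure for P[undetected] is the simulation result quoted on the slide.

```python
p_long_loss = 1e-4     # assumed, link-dependent probability of a long loss event
p_undetected = 1 / 24  # per-header CRC miss probability (from the slide)

# Probability that the very first packet after a long loss is wrongly accepted:
p_first = p_long_loss * p_undetected

# Every SO header carries a CRC, so a wrong context must also slip past the
# check in each subsequent packet; the probability that the error persists
# for k packets therefore decreases exponentially:
for k in range(1, 5):
    print(f"error survives {k} packet(s): {p_long_loss * p_undetected**k:.1e}")
```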

  5. Issue 1 - Long loss detection 2(3)
     • Can the probability of successful long loss detection be further improved? YES!!
     • Pre-verifying CRCs can guarantee detection of some losses
     • Choice of polynomials and calculation methods
     • With timers or wall clocks, long loss detection will probably always succeed (see the sketch below)
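
A sketch of how the timer idea could work in practice; the 20 ms packet spacing and the 16-packet interpretation interval are assumptions chosen for illustration (cf. the 12-16 packet figure on the previous slide).

```python
import time

PACKET_INTERVAL = 0.020  # seconds between packets, assumed (e.g. voice frames)
SN_WINDOW = 16           # SN interpretation interval, assumed

class LongLossDetector:
    def __init__(self):
        self.last_arrival = time.monotonic()

    def packet_arrived(self) -> bool:
        """Return True if so much wall-clock time has passed since the last
        packet that the compressed sequence number may have wrapped."""
        now = time.monotonic()
        missed = (now - self.last_arrival) / PACKET_INTERVAL
        self.last_arrival = now
        return missed >= SN_WINDOW
```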

  6. Issue 1 - Long loss detection 3(3)
     • Compared to the Keyword approach:
     • With timers or wall clocks, the long loss detection problem is almost eliminated for both the Keyword and the CRC approach. However, the latter is still more reliable here, since it has the CRC to verify against
     • Without timers, the CRC approach has a very low probability of not detecting a long loss, and it prevents error propagation, while the Keyword approach will fail completely to detect long loss

  7. Issue 2 - Residual BER 1(2)
     • What will be the result of residual bit errors?
     • Residual bit errors will usually be detected by the CRC
     • If several bit errors occur ( P[several bit errors] ), the CRC check may fail ( P[undetected] ) and the context may be incorrectly updated due to wrap-around ( P[wrap around] )
     • The combined probability for this is obviously very low: P[several bit errors] x P[undetected] x P[wrap around]
     • Further, such errors will never propagate, thanks to the CRCs in subsequent headers; they may only require a context update
     • Unnecessary updates and packet discards could also be avoided with extra reconstruction attempts (sketched below)
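
A sketch of the extra-reconstruction-attempts idea: try successive wrap-around interpretations of the compressed sequence number and accept the first one whose CRC verifies. The helper names, callback signatures, and the limit of three attempts are assumptions; header_ok is the check sketched after slide 2.

```python
def decompress_with_retries(sn_lsb: int, k: int, ref_sn: int,
                            rebuild, rx_crc: int, attempts: int = 3):
    """Try full-SN candidates whose k LSBs equal sn_lsb, nearest to the
    reference SN first; accept the first reconstruction whose CRC matches."""
    step = 1 << k
    base = (ref_sn & ~(step - 1)) | sn_lsb  # candidate closest to ref_sn
    if base < ref_sn:
        base += step
    for i in range(attempts):
        candidate = base + i * step         # each retry assumes one more wrap
        hdr = rebuild(candidate)            # hypothetical: rebuild full header
        if header_ok(hdr, rx_crc):          # CRC check from the earlier sketch
            return candidate, hdr           # verified: update context, deliver
    return None                             # give up: discard, keep old context
```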

  8. Issue 2 - Residual BER 2(2)
     • Compared to the Keyword approach:
     • Without the CRC, packets incorrectly decompressed due to residual bit errors can never be detected and discarded

  9.-12. SO packet Robustness, KW / CRC
     [Diagram, built up over four slides: a stream of SO packets with keyword packets (marked N) and a span of consecutive losses. Tolerated consecutive losses: CRC approach 12-15 in every case shown; Keyword approach 60-63 when the losses fall between keyword updates, but only N-1 in the worst case shown.]

  13. SO packet Efficiency, KW / CRC
     [Diagram: a packet stream with a timestamp jump, compared for both approaches. The CRC approach handles the jump with a single FO packet; the Keyword approach must keep sending larger keyword packets with the increased timestamp information until the keyword can be updated.]

  14. Summary
     • It has been shown that, compared to Keywords, the CRC approach:
       • Can achieve both higher robustness and better compression efficiency
       • Can better avoid error events due to long loss (the elevator case) and residual bit errors
       • Gives flexibility to use special decompression methods such as local recovery and reverse decompression
     • Therefore, as already suggested by several parties on the mailing list, the CRC approach should be used for the unidirectional and optimistic modes of operation

  15. Questions?

  16. CRC Pros & Cons
     • Verifies the correctness of ALL decompressed headers
     • Continuously updates the context; no need for explicit update packets
     • Makes it possible to use “external” decompressor methods (see the reverse-decompression sketch below)
     • Detects residual bit errors (and undetected errors will not propagate)
     • Can be implemented over fixed-size channels that assume only 1-octet headers and payload during “normal operation” (larger packets occur only after silent periods)
     • Robustness against long loss can be increased with additional reconstruction attempts
     • FO formats can easily be designed that are common with the FO formats required by the reliable mode
     • Can fail at long loss detection. With timers or wall clocks, this problem can be completely solved; without timers, the CRC still provides a good detection mechanism that avoids error propagation
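
For the “external” decompressor methods point, a minimal sketch of reverse decompression, one of the features the per-header CRC enables: packets that fail the CRC are buffered, and once a later packet verifies and re-anchors the context, the buffered ones are retried backwards. All names, signatures, and the one-SN-step-per-packet assumption are illustrative.

```python
def reverse_decompress(buffered, anchor_sn, rebuild, header_ok):
    """buffered: (packet, received_crc) pairs that failed the CRC, oldest first.
    rebuild(pkt, sn) -> candidate header bytes (hypothetical helper);
    header_ok(hdr, crc) -> bool, the CRC check sketched after slide 2."""
    recovered = []
    sn = anchor_sn
    for pkt, rx_crc in reversed(buffered):  # walk back from the verified anchor
        sn -= 1                             # assume one SN increment per packet
        hdr = rebuild(pkt, sn)
        if not header_ok(hdr, rx_crc):
            break                           # stop at the first unverifiable packet
        recovered.append(hdr)
    recovered.reverse()                     # hand back in original order
    return recovered
```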

  17. KW Pros & Cons
     • Can possibly be robust against longer loss periods if the loss occurs between keyword updates
     • Adds extra sensitive packets, which reduces the overall robustness
     • The KW updates mean that non-1-octet headers are needed even when the packet stream is regular
     • Has no detection mechanism for residual bit errors
     • “External” decompressor methods cannot be used, since decompression cannot be verified
     • Without timers, there is no mechanism to detect long loss, which may result in undetectable error propagation
     • Since KWs can only be sent at certain intervals, timestamp updates cannot be performed at any time. This means it may be necessary to send larger packets with increased timestamp information during long periods until the KW can be updated
