
Correcting Errors in MLCs with Bit-fixing Coding


Presentation Transcript


  1. Correcting Errors in MLCs with Bit-fixing Coding. Yue Li, joint work with Anxiao (Andrew) Jiang and Jehoshua Bruck.

  2. Introduction • Flash memories have excellent performance. • Cells are programmed by injecting electrons. • Programming is fast. • Erasure is expensive. • The growth of storage density brings reliability issues: • Cells are packed more closely together. • More levels are programmed into each cell. • We propose a new coding scheme for correcting asymmetric errors in multi-level cells (MLCs).

  3. Motivation: Cell Overprogramming [Figure: cell voltage vs. number of programming pulses; an overprogrammed cell is either reprogrammed or corrected with ECC.]

  4. [Figure: a group of 16-level cells programmed to levels 8, 2, 5, 0, 3, 5 and 11.]

  5. Asymmetric errors appear… [Figure: some cell levels are shifted upward by asymmetric errors.] How to correct asymmetric errors? How to minimize redundancy in the ECC?

  6. Gray Codes / Binary Codes / Our Codes [Figure: the 16 cell levels labeled with their 4-bit binary and Gray codewords.] • With binary or Gray labeling, the number of bit errors depends on the size of the error and the cell level. • With our codes, the number of bit errors equals the Hamming weight of the binary representation of the cell error, so it depends only on the size of the cell error. Cell error → # of bit errors: 1 → 1, 2 → 1, 3 → 2, 4 → 1, 5 → 2, 6 → 2, 7 → 3, 8 → 1.
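
A minimal Python sketch of this comparison (illustration only; gray and bit_flips are ad-hoc helper names): for a cell error of magnitude e, the number of flipped label bits under binary or Gray labeling depends on the starting level, while under bit-fixing decoding it is simply the Hamming weight of e.

```python
# Illustrative sketch: how many label bits does a magnitude-e cell error flip?

def gray(x):
    # Standard reflected binary Gray code.
    return x ^ (x >> 1)

def bit_flips(a, b):
    # Hamming distance between two 4-bit labels.
    return bin(a ^ b).count("1")

q = 16  # number of cell levels

for e in range(1, 9):
    # Under binary/Gray labeling the flip count depends on the starting level v.
    binary_flips = sorted({bit_flips(v, v + e) for v in range(q - e)})
    gray_flips = sorted({bit_flips(gray(v), gray(v + e)) for v in range(q - e)})
    # Under bit-fixing decoding it is just the Hamming weight of e.
    print(f"e={e}: binary {binary_flips}, Gray {gray_flips}, bit-fixing {bin(e).count('1')}")
```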

  7. Codes Correcting Asymmetric Errors • This work generalizes the code constructions in: [1] R. Ahlswede, H. Aydinian and L. Khachatrian, "Unidirectional error control codes and related combinatorial problems," in Proc. Eighth Int. Workshop Algebr. Combin. Coding Theory, pp. 6-9, 2002. [2] Y. Cassuto, M. Schwartz, V. Bohossian and J. Bruck, "Codes for asymmetric limited-magnitude errors with application to multilevel flash memories," IEEE Trans. Information Theory, vol. 56, no. 4, pp. 1582-1595, 2010. [3] E. Yaakobi, P. H. Siegel, A. Vardy and J. K. Wolf, "On codes that correct asymmetric errors with graded magnitude distribution," in Proc. IEEE International Symposium on Information Theory, pp. 1021-1025, 2011.

  8. Example of Bit-fixing Coding [Figure: seven 16-level cells programmed to levels 7, 2, 5, 0, 3, 5 and 11.]

  9. Bit-wise ECC [Figure: the cell levels 7, 2, 5, 0, 3, 5, 11 with four ECC blocks, one per bit layer.]
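
A small sketch of the bit-wise layout, assuming (as the four ECC blocks on the slide suggest) that each of the four bit layers of the 16-level cells is protected by its own binary ECC; the ECC itself is left abstract here.

```python
# Minimal sketch: split 16-level cell values into four bit planes.
# In the real scheme each plane would be encoded with its own binary ECC.

def bit_planes(levels, m=4):
    """Return m lists, where plane i holds bit i of every cell level."""
    return [[(v >> i) & 1 for v in levels] for i in range(m)]

levels = [7, 2, 5, 0, 3, 5, 11]          # the example cell levels from the slides
for i, plane in enumerate(bit_planes(levels)):
    print(f"bit plane {i}: {plane}")      # plane 0 holds the LSB of every cell
```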

  10. Assume asymmetric errors appear… [Figure: the stored levels become 7+1, 2+2, 5, 0, 3+1, 5+3, 11.]

  11. With the received levels 7+1, 2+2, 5, 0, 3+1, 5+3, 11: • Binary codes: there are 12 bit errors. • Gray codes: there are 7 bit errors. • Bit-fixing coding: there are only 5 bit errors! It alternately corrects bits and cell levels.
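
The three counts can be reproduced with a short sketch (hamming and gray are ad-hoc helpers; the bit-fixing count is the sum of the Hamming weights of the error magnitudes).

```python
# Sketch verifying the bit-error counts quoted on this slide.

def gray(x):
    return x ^ (x >> 1)

def hamming(a, b):
    return bin(a ^ b).count("1")

true_lv = [7, 2, 5, 0, 3, 5, 11]
recv_lv = [8, 4, 5, 0, 4, 8, 11]   # 7+1, 2+2, 5, 0, 3+1, 5+3, 11

binary_errs = sum(hamming(t, r) for t, r in zip(true_lv, recv_lv))
gray_errs = sum(hamming(gray(t), gray(r)) for t, r in zip(true_lv, recv_lv))
bitfix_errs = sum(bin(r - t).count("1") for t, r in zip(true_lv, recv_lv))

print(binary_errs, gray_errs, bitfix_errs)   # -> 12 7 5
```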

  12. Decoding round 1: correct bits. [Figure: received levels 7+1, 2+2, 5, 0, 3+1, 5+3, 11, with the erroneous cells labeled A, B, C, D; three cells have a bit error in bit 0.] Corrected so far: 3 bit errors.

  13. Decoding round 1: correct cell levels. A corrected error in bit i corresponds to a magnitude-2^i error on the cell level: the LSB (bit 0) = a magnitude-1 error, bit 1 = a magnitude-2 error, bit 2 = a magnitude-4 error, the MSB (bit 3) = a magnitude-8 error. Updating rule: new level = current level - e * 2^i, where e is the value of the bit error and i its bit index. Here (7+1) - 2^0 = 7, (3+1) - 2^0 = 3, (5+3) - 2^0 = 7, so the levels become 7, 2+2, 5, 0, 3, 5+2, 11. Corrected so far: 3 bit errors.

  14. Decoding round 1: correct cell levels (continued). [Figure: the same updates, (7+1) - 2^0 = 7, (3+1) - 2^0 = 3, (5+3) - 2^0 = 7, shown on the cell diagram; the levels are now 7, 2+2, 5, 0, 3, 5+2, 11.] Corrected so far: 3 bit errors.
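
The updating rule from these two slides, as a tiny helper (fix_level is an ad-hoc name; the bit index i and the error value e are assumed to be supplied by the bit-plane ECC).

```python
# The level-update rule: new level = current level - e * 2^i.

def fix_level(level, e, i):
    return level - e * (2 ** i)

print(fix_level(7 + 1, 1, 0))   # (7+1) - 2^0 = 7
print(fix_level(3 + 1, 1, 0))   # (3+1) - 2^0 = 3
print(fix_level(5 + 3, 1, 0))   # (5+3) - 2^0 = 7
```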

  15. Corrected data, following one cell through the process: initial data 7 • receive errors: 7+1 • correct bits: 7+1 • correct cells: (7+1) - 2^0 = 7 • corrected data: 7.

  16. Decoding round 2: correct bits. [Figure: current levels 7, 2+2, 5, 0, 3, 5+2, 11; two cells have a bit error in bit 1.] Corrected so far: 3 + 2 bit errors.

  17. Decoding round 2: correct cell levels. Updating rule: new level = current level - e * 2^i. Here (2+2) - 2^1 = 2 and (5+2) - 2^1 = 5, so all levels are restored to 7, 2, 5, 0, 3, 5, 11. Corrected in total: 3 + 2 bit errors.
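
A sketch of the whole alternating loop on this example. The per-bit-plane ECC is replaced by an oracle that compares against the true bit planes, which is only a stand-in for the real binary ECC decoder.

```python
# Sketch of the alternating "correct bits, then correct levels" decoding.
# The bit-plane ECC is simulated by an oracle that knows the true bit planes.

def decode_asymmetric(received, true_levels, m=4):
    cur = list(received)
    total_bit_errors = 0
    for i in range(m):                        # one round per bit plane, LSB first
        # "Correct bits": cells whose bit i disagrees with the true bit i.
        wrong = [j for j in range(len(cur))
                 if (cur[j] >> i) & 1 != (true_levels[j] >> i) & 1]
        total_bit_errors += len(wrong)
        # "Correct cell levels": new level = current level - e * 2^i, with e = 1.
        for j in wrong:
            cur[j] -= 2 ** i
        if cur == true_levels:
            break
    return cur, total_bit_errors, i + 1       # levels, bit errors, rounds used

true_lv = [7, 2, 5, 0, 3, 5, 11]
recv_lv = [8, 4, 5, 0, 4, 8, 11]              # errors +1, +2, 0, 0, +1, +3, 0
print(decode_asymmetric(recv_lv, true_lv))    # -> ([7, 2, 5, 0, 3, 5, 11], 5, 2)
```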

  18. Correcting asymmetric errors • Let the total number of cell levels be q = 2^m. Assume the magnitudes of the n errors are e_1, ..., e_n, where 0 <= e_i <= q - 1 for i = 1, ..., n. • The total number of bit errors that need to be corrected is wt(b(e_1)) + ... + wt(b(e_n)), where b(.) computes the binary representation and wt(.) computes the Hamming weight. • Let e_max = max{e_1, ..., e_n}; the total number of iterations is floor(log2(e_max)) + 1, the number of bits in b(e_max).
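
A quick check of these two quantities on the running example, using the formulas as reconstructed above.

```python
# Error magnitudes 1, 2, 1, 3 on a q = 16 level cell.
from math import floor, log2

errors = [1, 2, 1, 3]
total_bit_errors = sum(bin(e).count("1") for e in errors)   # sum of Hamming weights
num_rounds = floor(log2(max(errors))) + 1                   # bits needed for e_max

print(total_bit_errors, num_rounds)   # -> 5 2
```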

  19. [Figure: the 16 levels of a cell, 0 through 15. A downward error with magnitude e (e < 0) is equivalent to an upward error with magnitude (e mod q).]

  20. Correcting bidirectional errors • Let the total number of cell levels be q = 2^m. Assume the magnitudes of the n errors are e_1, ..., e_n, where -(q - 1) <= e_i <= q - 1 for i = 1, ..., n. • The total number of bit errors that need to be corrected is wt(b(e_1 mod q)) + ... + wt(b(e_n mod q)). • Let e_max = max{e_1 mod q, ..., e_n mod q}; the total number of iterations is floor(log2(e_max)) + 1. Updating rule: new level = (current level - e * 2^i) mod q.
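
The same check for the bidirectional example that follows, with downward errors folded in modulo q.

```python
# Errors +1, -2, +3, -1 on a q = 16 level cell, viewed as upward errors mod q.
from math import floor, log2

q = 16
mod_errors = [e % q for e in (1, -2, 3, -1)]                  # 1, 14, 3, 15

total_bit_errors = sum(bin(e).count("1") for e in mod_errors) # 1 + 3 + 2 + 4 = 10
num_rounds = floor(log2(max(mod_errors))) + 1                 # 4 bit planes needed

print(total_bit_errors, num_rounds)   # -> 10 4
```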

  21. Suppose errors are bidirectional. [Figure: the stored levels become 7+1, 2, 5-2, 0, 3, 5+3, 11-1.]

  22. Decoding round 1: correct bits. [Figure: received levels 7+1, 2, 5-2, 0, 3, 5+3, 11-1, with the erroneous cells labeled A, B, C, D; three cells have a bit error in bit 0.] Corrected so far: 3 bit errors.

  23. Decoding round 1: correct cell levels. Updating rule: new level = (current level - e * 2^i) mod q, where q is the total number of cell levels. Here (7+1) - 2^0 = 7, (5+3) - 2^0 = 7, and (11-1) - 2^0 = 9, so the levels become 7, 2, 5-2, 0, 3, 5+2, 11-2. Corrected so far: 3 bit errors.

  24. Decoding round 2: correct bits. [Figure: current levels 7, 2, 5-2, 0, 3, 5+2, 11-2; three cells have a bit error in bit 1.] Corrected so far: 3 + 3 bit errors.

  25. Decoding round 2: correct cell levels. Updating rule: new level = (current level - e * 2^i) mod q. Here (5-2) - 2^1 = 1, (5+2) - 2^1 = 5, and (11-2) - 2^1 = 7, so the levels become 7, 2, 5-2-2, 0, 3, 5, 11-2-2. Corrected so far: 3 + 3 bit errors.

  26. Decoding round 3: correct bits. [Figure: current levels 7, 2, 5-2-2, 0, 3, 5, 11-2-2; two cells have a bit error in bit 2.] Corrected so far: 3 + 3 + 2 bit errors.

  27. Decoding round 3: correct cell levels. Updating rule: new level = (current level - e * 2^i) mod q. Here (11-2-2) - 2^2 = 3 and ((5-2-2) - 2^2) mod 16 = 13, so the levels become 7, 2, 5-2-2-4, 0, 3, 5, 11-2-2-4. Corrected so far: 3 + 3 + 2 bit errors.

  28. Decoding round 4: correct bits. [Figure: current levels 7, 2, 5-2-2-4, 0, 3, 5, 11-2-2-4; two cells have a bit error in bit 3.] Corrected so far: 3 + 3 + 2 + 2 bit errors.

  29. Decoding round 4: correct cell levels. Updating rule: new level = (current level - e * 2^i) mod q. Here ((5-2-2-4) - 2^3) mod 16 = 5 and ((11-2-2-4) - 2^3) mod 16 = 11, so all levels are restored to 7, 2, 5, 0, 3, 5, 11. Corrected in total: 3 + 3 + 2 + 2 = 10 bit errors.
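
A sketch reproducing the four rounds above: it is the same alternating loop as in the asymmetric case, but with the mod-q update, and again with an oracle standing in for the bit-plane ECCs.

```python
# Sketch of bidirectional decoding: levels are updated modulo q.

def decode_bidirectional(received, true_levels, q=16):
    m = q.bit_length() - 1                    # 4 bit planes for q = 16
    cur = list(received)
    corrected = 0
    for i in range(m):
        wrong = [j for j in range(len(cur))   # cells with a bit error in plane i
                 if (cur[j] >> i) & 1 != (true_levels[j] >> i) & 1]
        corrected += len(wrong)
        for j in wrong:                       # new level = (current - e * 2^i) mod q
            cur[j] = (cur[j] - 2 ** i) % q
    return cur, corrected

true_lv = [7, 2, 5, 0, 3, 5, 11]
recv_lv = [8, 2, 3, 0, 3, 8, 10]                # errors +1, 0, -2, 0, 0, +3, -1
print(decode_bidirectional(recv_lv, true_lv))   # -> ([7, 2, 5, 0, 3, 5, 11], 10)
```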

  30. Redundancy Allocation [Diagram: Separate Encoding with per-layer ECC blocks → Compute Cell Levels → Program Cells.]

  31. Evaluation • We compare the rates with those of: • Binary codes • Gray codes • ALM-ECC with hard decoding: an error of any magnitude is seen as a Hamming error in the inner code. • ALM-ECC with soft decoding: the error distribution is used for optimal decoding (but the inner ECC is not binary if the maximum magnitude of errors is more than 1). [Y. Cassuto, M. Schwartz, V. Bohossian and J. Bruck, "Codes for asymmetric limited-magnitude errors with application to multilevel flash memories," IEEE Trans. Information Theory, vol. 56, no. 4, pp. 1582-1595, 2010.]

  32. Error Model • Cell levels: 16 • Maximum magnitude of asymmetric errors: 3 • The error distribution mimics a Gaussian distribution: the probability of an error of size i decreases with i following a Gaussian shape.
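
The exact probability formula did not survive in the transcript; the sketch below only illustrates a Gaussian-shaped magnitude distribution with the stated parameters (p_err and sigma are made-up values, not from the slides).

```python
# Illustrative only: draw asymmetric error magnitudes in {0, 1, 2, 3} with
# Gaussian-shaped weights as a stand-in for the slide's error model.
import math
import random

def sample_error(p_err=0.01, sigma=1.0, e_max=3):
    """Return 0 with probability 1 - p_err, otherwise a magnitude in 1..e_max."""
    if random.random() >= p_err:
        return 0
    weights = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(1, e_max + 1)]
    return random.choices(range(1, e_max + 1), weights=weights)[0]

samples = [sample_error() for _ in range(100000)]
print({e: samples.count(e) for e in range(4)})
```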

  33. Conclusions • Effective: good for correcting asymmetric errors; efficient decoding and encoding methods. • Flexible: works with existing ECCs; naturally handles bidirectional errors. • General: supports arbitrary error distributions; supports arbitrary numeral systems. [A. Jiang, Y. Li and J. Bruck, "Bit-fixing codes for multi-level cells," in Proc. IEEE Information Theory Workshop (ITW), 2012.]
