
Network RS Codes for Efficient Network Adversary Localization


Presentation Transcript


  1. Network RS Codes for Efficient Network Adversary Localization
  Hongyi Yao, Sidharth Jaggi, Minghua Chen

  2. Disease Localization (figure: heart)

  3. Network Adversary Localization
  • Adversarial errors: the corrupted packets are carefully chosen by the adversary for its own purposes.
  • Our objective: locating network adversaries.

  4. Network Coding
  • Network coding suffices to achieve the optimal multicast throughput [RNSY00].
  • Random linear network coding also suffices, and in addition it is distributed and has low design complexity [TMJMD03]. (A minimal sketch follows below.)
  (Figure: butterfly network with source S and receivers r1, r2; one edge carries the coded packet am1+bm2, e.g. m1+m2.)
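
The butterfly example on this slide can be made concrete in a few lines. The following is a minimal sketch, assuming a small prime field GF(257) and the standard butterfly roles (source messages m1, m2, a mixed packet am1+bm2 on the bottleneck, receivers r1 and r2); the field size and variable names are illustrative, not from the slides.

```python
# A minimal sketch of network coding on the butterfly network, assuming a
# small prime field GF(257).  Names and field size are illustrative.
import random

P = 257                                  # prime field size

def inv(a):
    return pow(a, P - 2, P)              # modular inverse (Fermat's little theorem)

m1, m2 = 42, 113                         # the two source messages
a, b = random.randrange(1, P), random.randrange(1, P)
mix = (a * m1 + b * m2) % P              # bottleneck edge carries a*m1 + b*m2

# Receiver r1 sees m1 directly plus the mixed packet, so it can solve for m2;
# receiver r2 sees m2 directly plus the mixed packet, so it can solve for m1.
m2_at_r1 = ((mix - a * m1) * inv(b)) % P
m1_at_r2 = ((mix - b * m2) * inv(a)) % P

assert (m1_at_r2, m2_at_r1) == (m1, m2)
print("both receivers recover (m1, m2) =", m1_at_r2, m2_at_r1)
```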

  5. Network Coding Aids Localization
  • Probe message: M = [1, 2], i.e. x(e1) = 1 and x(e2) = 2; an adversary on edge e1 changes x(e1) from 1 to 3.
  • Routing scheme used by u: x(e3) = x(e1), x(e4) = x(e2). Then YM = [1, 2], YE = [3, 2], and E = YE - YM = [2, 0].
  • Random linear network coding (RLNC) used by u: x(e3) = x(e1) + 2x(e2), x(e4) = x(e1) + x(e2). Then YM = [5, 3], YE = [7, 5], and E = YE - YM = [2, 2].
  • The network coding scheme is enough for r to locate the adversarial edge e1; the routing scheme is not. (The arithmetic is checked in the sketch below.)
  (Figure: source s, intermediate node u, receiver r; under the coding scheme the IRVs of e1, e2, e3, e4 are [1,1], [2,1], [1,0], [0,1].)
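
The numbers on this slide can be reproduced directly. The sketch below assumes the natural reading of the figure: the probe enters on edges e1 and e2, node u produces e3 and e4 with the stated coefficients, and the adversary changes x(e1) from 1 to 3; everything else (numpy, the matrix form of u's coding) is illustrative.

```python
# Reproducing the slide's numbers.  Assumed reading of the figure: the probe
# M = [x(e1), x(e2)] = [1, 2] enters node u, which produces x(e3) and x(e4);
# an adversary on e1 changes x(e1) from 1 to 3 (adds 2).
import numpy as np

M = np.array([1, 2])                     # probe as sent
x = M + np.array([2, 0])                 # probe after the adversary corrupts e1

# Local coding at u written as [x(e3), x(e4)] = A @ [x(e1), x(e2)]
A_routing = np.array([[1, 0], [0, 1]])   # x(e3)=x(e1),         x(e4)=x(e2)
A_coding  = np.array([[1, 2], [1, 1]])   # x(e3)=x(e1)+2*x(e2), x(e4)=x(e1)+x(e2)

for name, A in [("routing", A_routing), ("coding", A_coding)]:
    YM = A @ M                           # what r would receive without errors
    YE = A @ x                           # what r actually receives
    print(name, "YM =", YM, "YE =", YE, "E =", YE - YM)

# Under coding, E = [2, 2] = 2 * [1, 1] = 2 * IRV(e1), so r can point at e1.
# Under routing, E = [2, 0] is also 2 * IRV(e3), so r cannot tell e1 from e3.
```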

  6. RLNC for Adversary Localization [YSM10]
  • Desired features of RLNC:
    • Distributed implementation.
    • Achieves the communication capacity.
    • Locates the maximum number of adversaries.

  7. RLNC for Adversary Localization [YSM10]
  • Drawbacks of RLNC:
    • Requires topology information.
    • Locating adversaries is a computationally hard problem.

  8. Our Contribution: Network Reed-Solomon Codes
  • Network RS codes preserve all the desired features of RLNC:
    • Distributed implementation.
    • Achieve the communication capacity.
    • Locate the maximum number of adversaries.
  • Furthermore, network RS codes:
    • Do not require topology information.
    • Locate network adversaries efficiently.

  9. Concept: IRV
  • Edge Impulse Response Vector (IRV): the linear transform from the edge to the receiver. Using IRVs, we can locate failures. (A small sketch follows below.)
  • 1. Relation between IRVs and network structure: IRV(e1) lies in the linear space spanned by IRV(e2) and IRV(e3).
  • 2. Unique mapping from edge to IRV: two independent edges can have independent IRVs.
  (Figure: example network with edge IRVs such as [2 9 6], [0 3 2], and [1 0 0].)
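
As an illustration of "the linear transform from the edge to the receiver", the sketch below computes IRVs by a simple backward recursion from the receiver, reusing the coding coefficients of the slide-5 example; the recursion and the tiny topology are my own illustration, not the paper's algorithm.

```python
# A small illustration of IRVs: compute them by backward recursion from the
# receiver, reusing the coding coefficients of the slide-5 example.
import numpy as np

# coeff[(e_in, e_out)] = c  means  x(e_out) += c * x(e_in)  at the joining node.
coeff = {
    ("e1", "e3"): 1, ("e2", "e3"): 2,    # x(e3) = x(e1) + 2*x(e2)
    ("e1", "e4"): 1, ("e2", "e4"): 1,    # x(e4) = x(e1) +   x(e2)
}
receiver_edges = ["e3", "e4"]            # edges the receiver observes directly

irv = {e: np.eye(len(receiver_edges), dtype=int)[i]
       for i, e in enumerate(receiver_edges)}           # IRV(e3)=[1,0], IRV(e4)=[0,1]

def IRV(e):
    # IRV(e) is the sum, over downstream edges e', of coeff(e, e') * IRV(e').
    if e not in irv:
        irv[e] = sum(c * IRV(e_out) for (e_in, e_out), c in coeff.items() if e_in == e)
    return irv[e]

for e in ["e1", "e2", "e3", "e4"]:
    print(e, IRV(e))                     # IRV(e1) = [1 1], IRV(e2) = [2 1], ...
```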

  10. Adversary Localization by IRV
  • Using network error-correction codes [JLKHKM07], the error vector E can be decoded at the receiver.
  • E is in fact a linear combination of the IRVs = {IRV(e1), IRV(e2), …, IRV(em)}. That is,
  • E = c1 IRV(e1) + c2 IRV(e2) + … + cm IRV(em).
  • In particular, only the IRVs of adversarial edges have nonzero coefficients in E.

  11. Adversary Localization by IRV
  • Without loss of generality, assume e1, e2, …, ez are the adversarial edges.
  • Thus E = c1 IRV(e1) + c2 IRV(e2) + … + cz IRV(ez).
  • The number of adversarial edges z is much smaller than the total number of edges m.
  • Therefore, locating adversaries is mathematically equivalent to sparsely decomposing E into IRVs. (See the brute-force sketch below.)
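
The equivalence can be seen in a few lines. The sketch below recovers the adversarial edges by brute-force search over small supports; the IRVs, the error vector, and the real-valued arithmetic are made up for the demo (the actual scheme works over a finite field), and the subset search is exactly the exponential step that the next slides replace with RS structure.

```python
# Brute-force illustration of "locating adversaries = sparse decomposition of
# E into IRVs".  Everything here is illustrative; the search does not scale.
import itertools
import numpy as np

irvs = {                                  # hypothetical edge IRVs
    "e1": np.array([1, 1, 0]),
    "e2": np.array([2, 1, 0]),
    "e3": np.array([1, 0, 0]),
    "e4": np.array([0, 1, 0]),
    "e5": np.array([0, 0, 1]),
}
E = 3 * irvs["e2"] + 5 * irvs["e5"]       # two adversarial edges, unknown to r

def locate(E, irvs, z_max=2):
    # Try supports of size 1, 2, ..., z_max and keep the one that fits exactly.
    for z in range(1, z_max + 1):
        for support in itertools.combinations(irvs, z):
            A = np.stack([irvs[e] for e in support], axis=1).astype(float)
            c, *_ = np.linalg.lstsq(A, E.astype(float), rcond=None)
            if np.allclose(A @ c, E):
                return dict(zip(support, c))
    return None

print(locate(E, irvs))                    # finds e2 and e5 with coefficients 3 and 5
```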

  12. Why is RLNC not good enough?
  • Locating adversaries is mathematically equivalent to sparsely decomposing E into IRVs.
  • For RLNC, the IRVs are sensitive to the network topology.
  • For RLNC, the IRVs are randomly chosen, and sparse decomposition into randomized vectors is hard [V97].

  13. Key Idea of Network RS Codes
  • Motivated by classical Reed-Solomon (RS) codes [MS77].
  • We want the IRV of ei to equal its RS IRV IRV'(ei), which is a randomly chosen column of the RS parity-check matrix. (A small sketch follows below.)
  (Figure: parity-check matrix H of an RS code.)
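
A minimal sketch of such a parity-check matrix and of the random column assignment; the field GF(13), the number of rows r, and the edge names are assumptions made only for illustration.

```python
# A toy RS-style parity-check matrix and the random assignment of its columns
# to edges as their RS IRVs.  Field, sizes, and edge names are illustrative.
import random

p, r = 13, 4
alphas = list(range(1, p))                               # one distinct nonzero element per column
H = [[pow(a, i, p) for a in alphas] for i in range(1, r + 1)]   # r x (p-1) Vandermonde-style matrix

# Any r columns of H are linearly independent (Vandermonde with distinct nonzero
# nodes), so a combination of at most r/2 columns determines its support uniquely.

edges = ["e1", "e2", "e3", "e4", "e5"]
cols = random.sample(range(len(alphas)), len(edges))     # distinct random columns
rs_irv = {e: [H[i][j] for i in range(r)] for e, j in zip(edges, cols)}
print(rs_irv)                                            # IRV'(e) for each edge
```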

  14. Nice Properties of the RS Parity-Check Matrix H
  • Assume E is a sparse linear combination of the columns of H.
  • Then E can be decomposed into its sparse set of columns of H in a computationally efficient manner.
  • Thus, if all edge IRVs equal their RS IRVs, we can locate network adversaries efficiently. (See the sketch below.)
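
To illustrate why this is efficient, the sketch below builds E from two columns of the same toy matrix as above and recovers both columns and their coefficients with closed-form (Peterson-style) formulas, with no search over edge subsets; the parameters are illustrative, and a full implementation would use a standard RS decoder (for example Berlekamp-Massey) to handle larger error weights.

```python
# Efficient sparse decomposition for the toy GF(13) matrix: two adversarial
# columns are recovered directly from E, with no subset search.
p, r = 13, 4
alphas = list(range(1, p))
inv = lambda a: pow(a, p - 2, p)                 # inverse in GF(p)

def column(a):                                   # the H column for field element a
    return [pow(a, i, p) for i in range(1, r + 1)]

# Hidden truth: adversarial columns a1, a2 with coefficients c1, c2.
c1, a1, c2, a2 = 2, 3, 5, 7
E = [(c1 * x + c2 * y) % p for x, y in zip(column(a1), column(a2))]
# Since every IRV is a column of H, E = H @ c is exactly an RS syndrome.

S1, S2, S3, S4 = E
det = (S2 * S2 - S1 * S3) % p                    # from S3 + l1*S2 + l2*S1 = 0
l1 = (S1 * S4 - S2 * S3) * inv(det) % p          # and  S4 + l1*S3 + l2*S2 = 0:
l2 = (S3 * S3 - S2 * S4) * inv(det) % p          # x^2 + l1*x + l2 = (x - a1)(x - a2)
located = [a for a in alphas if (a * a + l1 * a + l2) % p == 0]
print("located columns:", located)               # -> [3, 7]

b1, b2 = located                                 # recover coefficients from S1, S2
d = (b1 * b2 * b2 - b2 * b1 * b1) % p
print("coefficients:", (S1 * b2 * b2 - S2 * b2) * inv(d) % p,
      (S2 * b1 - S1 * b1 * b1) * inv(d) % p)     # -> 2 5
```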

  15. To Achieve RS IRVs
  • Each node, say u, performs local coding as follows.
  • Node u assumes that e1 and e2 already have their RS IRVs, i.e., IRV(e1) = IRV'(e1) and IRV(e2) = IRV'(e2).
  • Recall that the IRV of e3 lies in the span of IRV(e1) and IRV(e2).
  • Node u chooses its coding coefficients such that IRV(e3) = IRV'(e3). (See the sketch below.)
  (Figure: node u with edges e1, e2, e3.)
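
A sketch of this local step, assuming (as the slide states) that IRV'(e3) lies in the span of IRV'(e1) and IRV'(e2); the field GF(13), the example vectors (two columns of the toy matrix above), and the brute-force search over coefficient pairs are illustrative only, and the target vector is constructed to be reachable purely for the demo.

```python
# The local coefficient choice at node u: find (a, b) with
# a*IRV'(e1) + b*IRV'(e2) = IRV'(e3), assuming the span condition holds.
from itertools import product

p = 13
irv_e1 = [3, 9, 1, 3]                    # assumed RS IRV of e1
irv_e2 = [7, 10, 5, 9]                   # assumed RS IRV of e2
target = [(2 * x + 5 * y) % p            # the RS IRV u wants edge e3 to have
          for x, y in zip(irv_e1, irv_e2)]

def pick_coefficients(v1, v2, t):
    # The field is tiny, so brute force over (a, b) is enough for a demo;
    # in general u would solve this small linear system over GF(p).
    for a, b in product(range(p), repeat=2):
        if all((a * x + b * y - z) % p == 0 for x, y, z in zip(v1, v2, t)):
            return a, b
    return None

print(pick_coefficients(irv_e1, irv_e2, target))   # -> (2, 5)
```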

  16. To Achieve RS IRVs
  • Surprisingly, this local coding scheme guarantees the desired global property: every edge's IRV equals the corresponding RS IRV.
  • Distributed implementation.
  • No topology information is needed.

  17. Summary of our contribution

  18. Network Coding Tomography for Network Failures
  • Thanks! Questions?
  • Details in: Hongyi Yao, Sidharth Jaggi, and Minghua Chen, "Passive network tomography: A network coding approach," under submission to IEEE Trans. on Information Theory, arXiv:0908.0711.
