
Ulam’s Game and Universal Communications Using Feedback

This presentation discusses Ulam's Game, a method of separating objects using yes/no questions when some answers may be lies. It recasts the game as a problem of reliable communication, defines communication rate and channel capacity, and presents a constructive, variable-rate transmission scheme that achieves the optimal rate without knowing the fraction of lies in advance.



  1. Ulam’s Game and Universal Communications Using Feedback • Ofer Shayevitz, June 2006

  2. Introduction to Ulam’s Game • Are you familiar with this game? • How many y/n questions are needed to separate 1000 objects? • M objects → ⌈log2(M)⌉ questions, so 10 questions suffice for 1000 objects (2^10 = 1024 ≥ 1000)
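The counting argument above is easy to check numerically; a minimal sketch (illustrative Python, not from the original slides):

```python
import math

def questions_needed(m):
    # Minimum number of yes/no questions to separate m objects (no lies):
    # each answer halves the candidate set, so ceil(log2(m)) questions suffice.
    return math.ceil(math.log2(m))

print(questions_needed(1000))  # 10, since 2**10 = 1024 >= 1000
```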

  3. What Happens When We Lie? • Separate two objects – one lie allowed • Precisely three questions are required! • Separate M objects – one lie allowed • 2log2(M) + 1 questions are sufficient! • But we can do better… • It was shown [Pelc’87] that the minimal # of questions is the least positive integer n satisfying 2^n ≥ M(n + 1) • M objects, L lies – very difficult!
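The Pelc condition can be evaluated by direct search. The sketch below assumes the standard volume-bound form 2^n ≥ M(n + 1), a reconstruction since the slide's formula did not survive extraction:

```python
def min_questions_one_lie(m):
    # Least positive n with 2**n >= m * (n + 1): each of the m objects must
    # "own" n+1 answer patterns (no lie, or one lie in any of n positions).
    n = 1
    while 2**n < m * (n + 1):
        n += 1
    return n

print(min_questions_one_lie(2))  # 3 -> matches "precisely three questions"
```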

  4. Ulam’s Game as a Problem of Reliable Communications • [Diagram] Alice (Transmitter) → Forward Channel → Bob (Receiver), with a Feedback Channel from Bob back to Alice; Charlie (Adversary) corrupts the forward channel

  5. Communication Rate Defined • Alice transmits one of M possible messages by saying yes/no = 1 bit • M messages → log2(M) bits • The channel can be used n times (seconds) • Charlie can lie a fraction p of the time → no more than np lies (errors) • Define the communication rate R = log2(M)/n

  6. Channel Capacity Defined • An (M,n) transmission scheme is an agreed procedure of questions/answers between Alice and Bob • A reliable scheme → after n seconds the message is correctly decoded by Bob • If for any n there is a reliable (M,n) scheme with rate R → we say R is achievable • Capacity C(p) – the maximal achievable rate • C(0) = ?

  7. Capacity Behavior • Claim: Two messages can always be correctly decoded for p < ½ • Proof: • Message is S ∈ {1,2} • Alice says: • Yes n times for S=1 • No n times for S=2 • How will Bob decode? • Using a majority rule → always correct, since fewer than half the answers are lies • Rate for two messages: R = log2(2)/n = 1/n → 0 • Corollary: Can transmit with rate zero for p < ½ (even without feedback…)
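The repetition-plus-majority argument can be sketched directly (illustrative code, not from the slides):

```python
def encode(s, n):
    # Alice repeats her one-bit answer n times: S=1 -> all yes (1), S=2 -> all no (0).
    return [1 if s == 1 else 0] * n

def decode(received):
    # Bob's majority rule: with fewer than n/2 lies, the majority is truthful.
    return 1 if sum(received) > len(received) / 2 else 2

answers = encode(1, 11)
for i in range(5):      # Charlie lies 5 times: 5 < 11/2, so the majority survives
    answers[i] ^= 1
print(decode(answers))  # 1
```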

  8. Capacity Behavior • Claim: C(p) = 0 for p ≥ ⅓ • Proof: No reliable three-message scheme exists → rate > 0 is not achievable • Assume p = ⅓, n = 3E+1 seconds • Message is S ∈ {1,2,3} • General strategy: Ask if S = 1, 2 or 3 • Bob counts “negative votes” against possible messages • The true message S collects votes only when Charlie lies • Optimal decision: Bob chooses the message with the least votes (why?) • Success requires that only S has E (~ ⅓n) votes or less (why?)

  9. Capacity Behavior – Cont. • Charlie’s strategy: Cause two messages to have E votes or less • First – vote against a single message • When a message accumulates E+1 votes it is “out of the race” • If not – all messages have E votes or less… • Then – always vote against the message with the least votes • Result: Charlie always votes against only one competitive message

  10. Capacity Behavior – Cont. • Total # of votes against competitive messages: • Before the 3rd message was “out”, both competitive messages had no more than E votes • After that, they are “balanced” and their sum cannot exceed 2E • Conclusion: Both messages have no more than E votes each → cannot separate them! QED

  11. Capacity Bounds [Berlekamp’64] • The entropy function: H(p) = -p·log2(p) - (1-p)·log2(1-p)
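A quick numerical sketch of the binary entropy function used in these bounds:

```python
import math

def binary_entropy(p):
    # H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0 by convention.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))  # 1.0 -> a fully unpredictable lie pattern
```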

  12. Our Result

  13. When the fraction of lies is unknown in advance, the capacity is classically zero. But we can get a positive rate!

  14. Result’s Properties • No need to know the fraction of lies (errors) in advance • Constructive – a specific transmission scheme is introduced • Variable rate – better channel, higher rate • Attains the optimal rate (not elaborated here) • Penalty – a negligible error probability, which goes to zero with increasing n • Key idea – randomization to mislead Charlie

  15. Taking a Hard Turn…

  16. Message Point Representation • A message is a bit-stream b1,b2,b3,… • It can also be represented by a point • Start with the unit interval [0,1) • If b1=0 take [0,½), otherwise take [½,1) • Assume b1=0: • If b2=0 take [0,¼) • Otherwise take [¼,½) • The finite bit-stream b1,b2,b3,…,bk is represented by a binary interval of length 2^-k • The infinite bit-stream is represented by a message point ω = 0.b1b2b3…
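The bit-stream-to-interval mapping above can be sketched as (illustrative Python, names mine):

```python
def bits_to_interval(bits):
    # Map a finite bit-stream b1..bk to its binary subinterval of [0,1):
    # each bit halves the current interval, keeping the left or right half.
    lo, size = 0.0, 1.0
    for b in bits:
        size /= 2
        if b == 1:
            lo += size
    return lo, lo + size   # an interval of length 2**-k

print(bits_to_interval([0]))     # (0.0, 0.5)
print(bits_to_interval([0, 1]))  # (0.25, 0.5)
```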

  17. Transmission of a Message Point • First assume no lies (errors) • The message point can be any point in [0,1) • If ω < ½, Alice transmits a zero • Otherwise, she transmits a one • Now Bob knows ω resides in [0,½) • If ω is in [0,¼), transmit another zero • If ω is in [¼,½), transmit a one • In fact, Alice transmits the message bits…

  18. Now with Lies… • Let p be the precise fraction of lies • Assumption I: we know p (and also p < ½) • If ω < ½, Alice transmits a zero • Otherwise, she transmits a one • Bob thinks ω is “more likely” to be in [0,½), but [½,1) is also possible… • How can that notion be quantified? • What should Alice transmit next?

  19. Message Point Density • We define a density function fk over the unit interval • The density function describes our level of confidence (at time k) in the various possible message point positions • We require ∫[0,1) fk(x) dx = 1 for all k • Alice steers Bob in the direction of ω • Bob gradually zooms in on ω • Based on a scheme for a different setting by [Horstein’63]

  20. Start with a uniform density f0 • a0 is the median point of f0

  21. f1 – the density given the received bit • a1 is the median point of f1

  22. f2 – the density given the two received bits • a2 is the median point of f2

  23. f3 – the density given the three received bits • a3 is the median point of f3

  24. Hopefully after a long time…

  25. Things to be noted… • After k iterations → k+1 intervals, within each of which fk is constant • ω lies in one of them, the message interval • fk over the message interval is multiplied by 2p if an error occurred at time k • Multiplied by 2(1-p) otherwise • There are exactly np errors, therefore fn over the message interval equals (2p)^(np) · (2(1-p))^(n(1-p)) = 2^(n(1-H(p)))
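The posterior update of slides 19–25 can be sketched as follows. This is a hedged reconstruction of a Horstein-style update with a piecewise-constant density; the function and variable names are mine, not the slides':

```python
def median_of(intervals):
    # Point splitting the total density mass in half.
    # Each interval is a tuple (left, right, density_value).
    mass = 0.0
    for (a, b, v) in intervals:
        m = (b - a) * v
        if mass + m >= 0.5:
            return a + (0.5 - mass) / v
        mass += m
    return intervals[-1][1]

def update(intervals, median, received_bit, p):
    # Bayes update after Bob receives a bit; bit 0 means "omega left of median".
    # The half that agrees with the bit is scaled by 2*(1-p), the other by 2*p,
    # so the density still integrates to one.
    new = []
    for (a, b, v) in intervals:
        for lo, hi in [(a, min(b, median)), (max(a, median), b)]:
            if hi <= lo:
                continue
            agrees = (received_bit == 0) == (hi <= median)
            new.append((lo, hi, v * (2 * (1 - p) if agrees else 2 * p)))
    return new

f = [(0.0, 1.0, 1.0)]        # uniform density f0 over [0,1)
a0 = median_of(f)            # 0.5
f = update(f, a0, 0, 0.1)    # Bob received a 0, with p = 0.1
print(f)                     # [(0.0, 0.5, 1.8), (0.5, 1.0, 0.2)]
```

Bob's next question point is `median_of(f)`, which has now moved left, toward where ω is believed to be.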

  26. Another Assumption • We assumed we know p (Assumption I) • Assumption II – Bob knows the message interval when transmission ends… • These assumptions will later be removed • If the message interval size is 2^-L, then since fn integrates to one: 2^-L · 2^(n(1-H(p))) ≤ 1 → L ≥ n(1-H(p))

  27. Transmission Rate • Message interval size 2^-L → L bits can be decoded • The bit rate is at least L/n ≥ 1-H(p), as required

  28. Assumption I – Removed • p is unknown • But Alice knows p at the end, thanks to the feedback! • Idea – use an estimate of p, based on what Alice has observed so far • Define a noise sequence z1,z2,…, where zk = 1 if Charlie lied at time k • A reasonable estimate is the noise sequence’s empirical probability: p̂k = (# of ones in z1,…,zk)/k • A bias is needed for uniform convergence

  29. This probability estimation is the KT estimate [Krichevsky-Trofimov’81] • Using the KT estimate: p̂k = (# of ones in z1,…,zk + ½)/(k + 1) • By the KT estimate’s properties, the resulting rate still tends to 1-H(p) • So asymptotically, we lose nothing!
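The add-half bias is the whole content of the KT estimate; a minimal sketch:

```python
def kt_estimate(ones, total):
    # Krichevsky-Trofimov estimate: add-half smoothing keeps the estimate
    # strictly away from 0 and 1, which is what uniform convergence needs.
    return (ones + 0.5) / (total + 1)

print(kt_estimate(0, 0))   # 0.5 -> with no observations, stay agnostic
print(kt_estimate(3, 10))  # ~0.318, vs. the raw empirical estimate 0.3
```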

  30. Assumption I* Added… • We made an absurd assumption here – did you notice? • Bob (the receiver) must know the estimate p̂ as well! • That is equivalent to knowing the noise sequence… • Assumption I*: p̂ can be updated once per B seconds (still needs explaining…) • B = B(n) is called the block size, and may depend on n • It can be shown that the resulting rate loss vanishes • So we require B(n) → ∞ while B(n)/n → 0

  31. Update Information (UI) • Assume a block size of B seconds • UI elements: • # of ones in the noise sequence in the last block → B+1 options → ~log2(B+1) bits • Current message interval → O(B) options → O(log B) bits • Bob must be provided with the UI once per block • The UI is about O(log B) bits per B seconds • Therefore, the UI rate is O(log B / B) → 0 (key point!!)
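The key point of the slide is that the UI costs only O(log B) bits per B channel uses. The arithmetic, using just the noise-count part of the UI as an illustration:

```python
import math

def ui_rate(block_size):
    # Counting the ones in a block of B noisy bits takes about log2(B+1)
    # bits, so this part of the UI costs log2(B+1)/B of the total rate.
    return math.log2(block_size + 1) / block_size

for B in (10, 100, 1000):
    print(B, ui_rate(B))   # the UI rate vanishes as the block grows
```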

  32. IF Alice can reliably convey the UI to Bob, then we are done!

  33. Reliable UI – Is That Possible? • Old problem: Charlie may corrupt the UI… • Different from the original problem? • Yes – the UI rate approaches zero! • Remember, rate zero can be attained for p < ½! • Solution’s outline: • Random positions per block are agreed via feedback • Bob estimates whether p < ½ or p > ½ in each block: • Alice transmits “all zeros” over random positions • Bob finds the fraction of ones received • Alice transmits the UI over random positions per block • Alice repeats each UI bit several times • Bob decodes each bit by a majority/minority rule • “Bad blocks” (p ~ ½) are thrown away
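Bob's majority/minority decoding of the repeated UI bits can be sketched as (illustrative code, names mine):

```python
def decode_ui_bit(received, p_estimate):
    # Each UI bit is repeated several times over agreed random positions.
    # If Bob's estimate says p < 1/2, he trusts the majority; if p > 1/2
    # (Charlie lies more often than not), the minority value is the true one.
    majority = 1 if sum(received) > len(received) / 2 else 0
    return majority if p_estimate < 0.5 else 1 - majority

print(decode_ui_bit([1, 1, 0, 1, 1], p_estimate=0.1))  # 1 (majority)
print(decode_ui_bit([0, 0, 1, 0, 0], p_estimate=0.9))  # 1 (minority)
```

Blocks where the estimate is near ½ are unreliable either way, which is why the scheme throws them away.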

  34. Reliable UI – Cont. • Penalty: A bad estimate → an error! • It can be shown that the error probability tends to zero • Throwing away “bad blocks” → a random rate • The probability of throwing away a good block is small • A rate approaching 1-H(p) is attained with probability approaching one

  35. Summary • Ulam’s game introduced • Analogy to communications with an adversary and feedback • Classical results presented • Can do much better with randomization! • Higher rate • Rate adaptive to the channel’s (Charlie’s) behavior • Penalty – vanishing error probability

  36. Further Results • Much higher rates are possible using structure in the noise sequence (Charlie’s strategy) • Example: Assume Charlie lies and tells the truth alternately • Then the empirical fraction of lies is ½, so our scheme attains rate zero • But Alice can notice this “stupid” strategy! • Alice can lie on purpose to “cancel” Charlie’s lies • Related to universal prediction and universal compression (Lempel-Ziv) of individual sequences • Generalizations to multiple-choice questions

  37. Thank You!
