Using Feedback in MANETs: a Control Perspective
Presentation Transcript



Using Feedback in MANETs: a Control Perspective


Todd P. Coleman

[email protected]

University of Illinois

DARPA ITMANET




Current Uses of Feedback

  • Theory
  • Feedback modeled as noiseless
  • Point-to-point: capacity unchanged
    • Significantly improved error exponents
    • Reduction in complexity
  • MANETs: enlargement of the capacity region



Current Uses of Feedback

  • Practice
  • Feedback is noisy, used primarily for
    • Robustness to channel uncertainty
    • Estimation of channel parameters
    • ARQ-style communication w/ erasures
    • But: Burnashev-style “forward error correction + ARQ” schemes are extremely fragile w/ noisy feedback (Kim, Lapidoth, Weissman ’07)



Applicability of Feedback in MANETs


  • Instantiate network feedback control algorithms for MANETs
  • Develop iterative practical schemes for noisy feedback?
  • Coding w/ feedback over statistically unknown channels?
  • Develop fundamental limits of error exponents with feedback w/ fixed block length



Communication w/ Noiseless Feedback

[Figure: the unit interval, marked at 0, 0.25, 0.50, 0.75, 1.00, is partitioned into subintervals labeled by the messages 00, 01, 10, 11; the channel input/output labels are 0 and 1]

Given an encoder’s Tx strategy, decoding is almost trivial (Bayes’ rule).

How do we select a (recursive) encoder strategy for an arbitrary memoryless channel?
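
Below is a minimal Python sketch (ours, not from the talk) of that point: once the encoder’s Tx strategy is fixed, the decoder simply runs Bayes’ rule over the message set at every channel use. The `encode` function and the toy two-bit BSC example are hypothetical stand-ins for whatever recursive strategy is actually chosen.

```python
import numpy as np

# Sketch: Bayes'-rule decoding for a memoryless channel with noiseless feedback.
# The decoder needs only the channel law P(y|x) and the encoder's (known) strategy.

def update_posterior(posterior, y, past_outputs, encode, channel_pyx):
    """One Bayes'-rule step: P(w|y^k) is proportional to
    P(w|y^{k-1}) * P(y_k | x_k = encode(w, y^{k-1}))."""
    likelihood = np.array([channel_pyx[encode(w, past_outputs), y]
                           for w in range(len(posterior))])
    posterior = posterior * likelihood
    return posterior / posterior.sum()

# Toy example: four 2-bit messages over a BSC with crossover 0.1, and a
# deliberately simple (non-adaptive) strategy that just sends the message bits.
eps = 0.1
channel_pyx = np.array([[1 - eps, eps],    # row x=0: P(y=0|x=0), P(y=1|x=0)
                        [eps, 1 - eps]])   # row x=1
encode = lambda w, past: (w >> (1 - len(past))) & 1   # send MSB, then LSB

posterior = np.full(4, 0.25)               # uniform prior over the 4 messages
past = []
for y in [1, 0]:                           # two observed channel outputs
    posterior = update_posterior(posterior, y, past, encode, channel_pyx)
    past.append(y)
print(posterior)                           # peaks at message 0b10 = 2
```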



A Control Interpretation of the Dynamics of the Posterior

Coleman ’09: “A Stochastic Control Viewpoint on ‘Posterior Matching’-style Feedback Communication Schemes”

[Block diagram: a Controller, driven by the reference signal F_{w*} and by the previous state F_{k-1} fed back through a unit delay Z^{-1}, issues the control u_k; the plant P(F_k | F_{k-1}, u_k) produces the next posterior F_k]
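
In the notation of this diagram, the “plant” P(F_k | F_{k-1}, u_k) is just the Bayes’-rule recursion from the previous slide. A hedged sketch of those dynamics, in our own notation (which may differ from the paper’s):

```latex
% Controlled-Markov-chain view of the posterior (our reconstruction).
% The control u_k is the encoding rule; with channel input x_k = e_k(w, y^{k-1})
% and memoryless channel law P(y|x), the posterior evolves by Bayes' rule:
F_k(w) \;=\; \frac{F_{k-1}(w)\, P\bigl(y_k \,\big|\, e_k(w, y^{k-1})\bigr)}
                  {\sum_{w'} F_{k-1}(w')\, P\bigl(y_k \,\big|\, e_k(w', y^{k-1})\bigr)}
```

The controller’s task is then to steer F_k toward the reference F_{w*}, the point mass representing complete certainty about the transmitted message.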



Stochastic Control: Reward

Coleman ’09

Reward at any stage k is the reduction in “distance” to the target.

[Figure: taking channel input X_k moves the posterior from F_k to F_{k+1}; the distances to the target F_{w*} are D(F_{w*} || F_k) and D(F_{w*} || F_{k+1})]
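
Reading off the figure labels, the per-stage reward appears to be the following drop in KL divergence (our reconstruction):

```latex
% Reward at stage k: reduction in KL divergence to the target posterior.
r_k \;=\; D\bigl(F_{w^*} \,\big\|\, F_k\bigr) \;-\; D\bigl(F_{w^*} \,\big\|\, F_{k+1}\bigr)
```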



Maximum Long-Term Average Reward

Coleman ’09

  • (1), (2) hold w/ equality if:
    • a) the Y’s are all independent
    • b) each Xi is drawn according to P*(x)
  • Horstein ’63 (BSC)
  • Schalkwijk-Kailath ’66 (AWGN)
  • Shayevitz-Feder ’07, ’08 (DMC)
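
The equations (1), (2) themselves did not survive the transcript; the block below is only a hedged guess at the standard bound they likely refer to, stated in terms of the reward r_k sketched above:

```latex
% Reconstructed sketch: the expected per-stage reward is a conditional mutual
% information, which is at most the channel capacity C = max_{P(x)} I(X;Y).
\mathbb{E}[r_k]
  \;=\; I\bigl(W ;\, Y_k \,\big|\, Y^{k-1}\bigr)
  \;\le\; I\bigl(X_k ;\, Y_k \,\big|\, Y^{k-1}\bigr)
  \;\le\; \max_{P(x)} I(X ; Y) \;=\; C
```

Under conditions a) and b) on the slide both inequalities are tight, so the long-term average reward equals C; the schemes listed above are the classical constructions that achieve this for their respective channels.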



The Posterior Matching Scheme: an Optimal Solution

Coleman ’09

  • Next input independent of everything the decoder has seen so far, with capacity-achieving marginal distribution
  • No forward error correction. Adapt on the fly.

Posterior matching scheme
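
As a concrete instance, here is a small Python sketch (ours, with illustrative parameter choices) of the Horstein ’63 median rule for the BSC, which is the posterior matching scheme for that channel: the encoder always signals which side of the current posterior median the message point lies on, so each channel input is an independent Bernoulli(1/2) variable, the capacity-achieving input for the BSC.

```python
import numpy as np

# Sketch of posterior matching on a BSC (Horstein '63 median rule).
# The message is a point w* in [0,1); the grid discretization below is our own
# illustration, not the paper's implementation.

eps = 0.1                                        # BSC crossover probability
rng = np.random.default_rng(0)

grid = np.linspace(0, 1, 4096, endpoint=False)   # discretized message interval
posterior = np.full(grid.size, 1.0 / grid.size)  # uniform prior over w
w_star = rng.random()                            # true message point

for k in range(200):
    # "Matched" threshold: the current posterior median.
    median = grid[np.searchsorted(np.cumsum(posterior), 0.5)]
    x = int(w_star >= median)                    # encoder: which side of the median
    y = x ^ int(rng.random() < eps)              # BSC output (fed back noiselessly)

    # Decoder: Bayes'-rule update of the posterior over the message point.
    side = (grid >= median).astype(int)          # input each candidate w would have produced
    likelihood = np.where(side == y, 1 - eps, eps)
    posterior = posterior * likelihood
    posterior /= posterior.sum()

# The posterior should now be concentrated near w_star.
estimate = grid[np.argmax(posterior)]
print(f"true message point {w_star:.4f}, MAP estimate {estimate:.4f}")
```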



Implications for Demonstrating Achievable Rates

Coleman ’09




Lyapunov Function

Coleman ’09

Posterior matching scheme:

[Figure: illustration on the binary channel (labels 0 and 1); details not recoverable from the transcript]
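
A hedged sketch of what the Lyapunov function is, reconstructed from the “Research Results” slide later in the deck (which says it is “in essence a KL divergence”); the exact form in Coleman ’09 may differ:

```latex
% Candidate Lyapunov function: KL divergence from the target (certainty)
% posterior.  Under the posterior matching policy its expected one-step drift
% is -C, the negative drift that underlies achievability of every rate R < C.
V(F_k) \;=\; D\bigl(F_{w^*} \,\big\|\, F_k\bigr),
\qquad
\mathbb{E}\bigl[\, V(F_{k+1}) - V(F_k) \,\big|\, F_k \,\bigr] \;=\; -\,C
```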



Lyapunov Function (cont’d)

Coleman ’09




Symbiotic Relationship

Coleman ’09: “A Stochastic Control Viewpoint on ‘Posterior Matching’-style Feedback Communication Schemes”

[Diagram: Information Theory ↔ Control Theory]

  • Information theory → control theory: converse theorems give upper bounds on the average long-term reward of the stochastic control problem
  • Control theory → information theory: KL divergence Lyapunov functions guarantee that all rates are achievable



Research Results with This Methodology

  • Interpret feedback communication encoder design as stochastic control of the posterior towards certainty
  • Converse theorems specify fundamental performance bounds on a stochastic control problem related to controlling the posterior
  • An optimal policy implies the existence of a Lyapunov function, which is in essence a KL divergence
  • The Lyapunov function directly implies achievability for all R < C

Coleman ’09

Gorantla and Coleman ’09: encoders that achieve the capacity region of El Gamal ’78 (“Physically degraded broadcast channels w/ feedback”) in an iterative fashion w/ low complexity



New Important Directions this Approach Enables

[Diagram: Information Theory ↔ Control Theory]

  • Develop iterative low-complexity encoders/decoders for noisy feedback? Partially Observed Markov Decision Processes
  • Optimal coding w/ feedback over statistically unknown channels? Reinforcement learning from the control literature
  • Develop fundamental limits of error exponents with feedback at fixed block length: the Lyapunov function enables a fundamental martingale condition
  • Also: the stochastic control approach provides a rubric to check tightness of converses via the structure of the optimal solution

