DARWIN: Distributed and Adaptive Reputation Mechanism for Wireless Ad-hoc Networks

CHEN Xiao Wei, Cheung Siu Ming

CSE, CUHK

May 15, 2008

This talk is based on the paper: Juan José Jaramillo and R. Srikant. DARWIN: Distributed and Adaptive Reputation Mechanism for Wireless Ad-hoc Networks. In Proc. of the 13th Annual ACM International Conference on Mobile Computing and Networking (MobiCom ’07), Montreal, Canada, Sept. 2007.

Outline
  • Introduction
  • Basic Game Theory Concepts
  • Network Model
  • Analysis of Prior Proposals
    • Trigger Strategies
    • Tit For Tat
    • Generous Tit For Tat
  • DARWIN
    • Contrite Tit For Tat
    • Definition
    • Performance Guarantees
    • Collusion Resistance
    • Algorithm Implementation
  • Simulations
    • Settings
    • Results
  • Conclusion & Comments
Introduction
  • Source communicates with distant destinations using intermediate nodes as relays
  • Cooperation: Nodes help relay packets for each other
  • In wireless networks, nodes can be selfish users that want to maximize their own welfare.
  • Incentive mechanisms are needed to enforce cooperation.
Introduction (Cont.)
  • Two types of incentive mechanisms:
    • Credit exchange systems: by payment
    • Reputation based systems: by neighbor's observation
Introduction (Cont.)
  • Main issue
    • Due to packet collisions and interference, cooperative nodes are sometimes perceived as selfish, which can trigger endless retaliation
  • Contributions
    • Analyze prior reputation strategies’ robustness
    • Propose a new reputation strategy (DARWIN) and prove its robustness, collusion resistance and cooperation.
The Prisoners’ Dilemma Game
  • A Nash equilibrium is a strategy profile having the property that no player can benefit by unilaterally deviating from its strategy
  • Repeated Prisoner’s Dilemma Game
  • Total payoff function is the discounted sum of the stage payoffs:
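The formula itself was lost from the slide; in the standard repeated-game form (a reconstruction, assuming the usual discount factor δ ∈ (0,1) and stage payoff u_i(k)):

```latex
U_i \;=\; (1-\delta)\sum_{k=0}^{\infty} \delta^{k}\, u_i(k)
```

The paper's exact normalization may differ, but this is the textbook discounted-sum form the slide names.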
Network Model
  • Assumptions
    • Nodes are selfish and rational, not malicious
    • Nodes operate in promiscuous mode
    • The value of a packet should be at least equal to the cost of the resources used to send it. (α≥1)
  • Assume any two neighbors have uniform traffic demands toward each other, so each pair of neighbors can be modeled as a two-player game.
  • Other Assumptions
    • Two nodes simultaneously decide whether to drop or forward their respective packets, and repeat game iteratively
    • Game time is divided into slots
Payoff Matrix

Affine Transformation

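The payoff matrix image did not survive the transcript. As a hedged reconstruction, assuming the common normalization in which a delivered packet is worth α and forwarding costs 1 (consistent with the α ≥ 1 assumption above), the stage game has the shape:

```latex
\begin{array}{c|cc}
 & \text{Forward} & \text{Drop} \\ \hline
\text{Forward} & (\alpha-1,\ \alpha-1) & (-1,\ \alpha) \\
\text{Drop}    & (\alpha,\ -1)         & (0,\ 0)
\end{array}
```

For α > 1 this satisfies α > α−1 > 0 > −1, so Drop dominates each stage while mutual Forward Pareto-dominates mutual Drop, i.e. a Prisoners' Dilemma; the "Affine Transformation" note refers to rescaling such a matrix into a normalized form. This is an assumption about the lost image, not a verbatim reconstruction.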
Payoff Matrix (Cont.)
  • Define pe∈(0,1) to be the probability that a packet that has been forwarded is not overheard by the originating node.
  • Define to be the perceived dropping probability of node i’s neighbor at time slot k≥0 estimated by node i.
Payoff Function
  • is the average payoff at time slot k
  • The average discounted payoff of player i starting from time slot n is then given by
Trigger Strategies

Define to be the dropping probability node i should use at time slot k according to strategy S.

  • N-step Trigger Strategy
  • If node i’s neighbor cooperates, its perceived dropping probability is pe, so the optimal threshold is T = pe
  • In practice, pe is hard to estimate perfectly, so there are two cases:
    • If T < pe, cooperation will never emerge.
    • If T > pe, player –i will still be perceived as cooperative as long as it drops packets with probability at most (T − pe)/(1 − pe)
  • Full Cooperation is never the NE point with trigger strategies
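The threshold in the T > pe case can be reconstructed from the definition of pe: if player −i drops with probability p, node i fails to overhear a forward either because the packet was dropped or because a forwarded packet was not overheard, so the perceived dropping probability is

```latex
\hat{p} \;=\; p + (1-p)\,p_e \;=\; p_e + p\,(1-p_e)
\;\le\; T
\quad\Longleftrightarrow\quad
p \;\le\; \frac{T-p_e}{1-p_e}.
```

This is a reconstruction of the lost equation image, derived from the surrounding definitions.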
Tit For Tat
  • Tit For Tat Strategy
  • Milan et al. proved that TFT does not provide the right incentive for cooperation in wireless networks either.
Generous Tit For Tat
  • Use a generosity factor g that allows cooperation to be restored.
  • GTFT strategy
  • GTFT is a robust strategy where no node can gain by deviating from the expected behavior, even if it cannot achieve full cooperation.
  • But according to the Corollary:

If both nodes use GTFT the cooperation is achieved on the equilibrium path if and only if g=pe

  • So GTFT also needs a perfect estimate of pe
DARWIN
  • GOAL: propose a reputation strategy that does not depend on a perfect estimation of pe to achieve full cooperation
  • FOUNDATION: “Contrite Tit For Tat” strategy in iterated Prisoners’ Dilemma
Contrite Tit For Tat
  • Based on the idea of contriteness
  • Every player is in good standing at the first stage
  • A player should cooperate if it is in bad standing, or if its opponent is in good standing
  • Otherwise, the player should defect
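The rules above can be sketched in a few lines. The action rule is taken directly from the slide; the standing-update convention (defection preserves good standing only when it punishes a bad-standing opponent) is the standard CTFT convention, assumed here since the slide does not spell it out:

```python
# Sketch of Contrite Tit For Tat (CTFT) under noisy observation.

def ctft_action(my_good: bool, opp_good: bool) -> bool:
    """Cooperate (True) if I am in bad standing or my opponent is in
    good standing; otherwise defect (False)."""
    return (not my_good) or opp_good

def update_standing(my_good: bool, cooperated: bool, opp_good: bool) -> bool:
    """Cooperating (re)establishes good standing; defecting keeps it
    only when justified: I was in good standing and the opponent was not."""
    if cooperated:
        return True
    return my_good and not opp_good

# A one-off "noise" defection (e.g. a forwarded packet not overheard)
# triggers a single justified punishment, then cooperation resumes.
a_good, b_good = True, True          # everyone starts in good standing
history = []
for t in range(4):
    a_coop = ctft_action(a_good, b_good)
    b_coop = ctft_action(b_good, a_good)
    if t == 0:
        a_coop = False               # noise: A is perceived as defecting
    a_good, b_good = (update_standing(a_good, a_coop, b_good),
                      update_standing(b_good, b_coop, a_good))
    history.append((a_coop, b_coop))
# history: [(False, True), (True, False), (True, True), (True, True)]
```

Note how contrition breaks the endless retaliation that plagues plain TFT under noise: B's round-1 defection is justified, A falls into bad standing and so accepts it without retaliating, and mutual cooperation resumes from round 2.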
DARWIN

Note:

  • Uses historic information, e.g. qi(k−1)
  • qi(k) acts as a measurement of bad standing
  • Can you find the “Contrite Tit For Tat” idea?
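The DARWIN update rule itself was an image and did not survive the transcript. Purely as a hypothetical sketch of the shape the notes describe (not the paper's exact formula): punish the neighbor's perceived dropping in excess of what channel losses pe explain, while discounting one's own recent punishment qi(k−1), clamped to [0, 1]:

```python
def darwin_drop_prob(perceived_drop_prev: float, q_prev: float,
                     gamma: float, pe_est: float) -> float:
    """Hypothetical sketch ONLY -- the paper's exact formula was lost
    from the transcript. Shape assumed from the slide's notes: punish
    the neighbour's perceived dropping beyond what channel losses
    (pe_est) explain, scaled by gamma, minus credit for one's own
    recent punishment q_prev (the contrite part); clamp to [0, 1]."""
    excess = max(0.0, perceived_drop_prev - pe_est) / (1.0 - pe_est)
    p = gamma * excess - (gamma - 1.0) * q_prev
    return min(1.0, max(0.0, p))
```

With this shape, a neighbor whose perceived dropping is fully explained by pe is never punished, which is exactly the robustness-to-imperfect-measurement property the following slides claim for DARWIN.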
Performance Guarantees
  • Theorem: Assume 1 < γ < 1/pe; DARWIN is subgame perfect if and only if
  • The problem: The exact value of pe is not known, so how do we decide γ?
  • Based on estimated pe:
Performance Guarantees
  • Estimated error probability: p̂e = pe + Δ, where −pe < Δ < 1 − pe
  • Substitute into the previous equation:
  • For the assumption 1 < γ < 1/pe to remain true, all that is needed is Δ < 1 − pe
  • Hence a precise estimate of pe is not required
Performance Guarantees
  • LEMMA: If both nodes use DARWIN then cooperation is achieved on the equilibrium path. That is,
      • pi(k) = p−i(k) = 0 for all k ≥ 0
Collusion Resistance
  • Define to be the discounted average payoff of player i using strategy Si when it plays against player –i using strategy S-i
  • Define ps∈(0,1) to be the probability that a node that implements DARWIN interacts with a colluding node.
Collusion Resistance (Cont.)
  • Then the average payoff to a cooperative node is
  • Similarly, the average payoff to a colluding node that interacts with a node implementing DARWIN is
Collusion Resistance (Cont.)
  • The average payoff is bounded by
  • A group of colluding nodes cannot gain from unilaterally deviating if and only if

U(S) < U(D), that is

Collusion Resistance (Cont.)
  • Define strategy S to be a sucker strategy if
Algorithm Implementation
  • The number of messages sent to j for forwarding
  • The number of messages j actually forwarded
  • denotes connectivity, which is the forwarding ratio
  • Then j’s average connectivity ratio is
Algorithm Implementation (Cont.)
  • Define
  • Use equations (6) and (7) to find the dropping probability
  • To meet

We estimate pe as , which is the fraction of time at least one node different from j transmits
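The bookkeeping this slide lists can be sketched as follows. All names and the smoothing weight `beta` are assumptions, since the slide's own symbols and equations (6)–(7) were lost images:

```python
class NeighborMonitor:
    """Hedged sketch of the per-neighbor bookkeeping on the slide:
    count messages handed to neighbor j, count those overheard being
    forwarded (promiscuous mode), and keep a smoothed average of the
    connectivity (forwarding) ratio."""

    def __init__(self, beta: float = 0.5):
        self.beta = beta     # smoothing weight (an assumption)
        self.avg = 1.0       # optimistic prior: full connectivity
        self.sent = 0        # messages handed to j in the current slot
        self.forwarded = 0   # of those, overheard being forwarded

    def on_send(self) -> None:
        self.sent += 1

    def on_overhear_forward(self) -> None:
        self.forwarded += 1

    def end_of_slot(self) -> float:
        """Fold this slot's forwarding ratio into the running average
        and reset the per-slot counters; return the new average."""
        if self.sent > 0:
            ratio = self.forwarded / self.sent
            self.avg = self.beta * ratio + (1 - self.beta) * self.avg
        self.sent = 0
        self.forwarded = 0
        return self.avg
```

For instance, if 10 packets are handed to j in a slot and 8 forwards are overheard, the slot ratio is 0.8 and the smoothed average moves from 1.0 to 0.9; the perceived dropping probability is then 1 minus the average connectivity ratio, before correcting for imperfect overhearing (pe).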

Simulations
  • Settings
    • ns-2 simulator
    • Dynamic Source Routing (DSR) protocol
    • Area: 670 m × 670 m
    • 50 nodes randomly placed, some of them selfish
    • 14 source–destination pairs
    • Packet size: 512 bytes
    • Simulation time: 800 s; time slot: 60 s
    • γ = 2
Simulations
  • Normalized forwarding ratio
    • Fraction of forwarded packets in the network under consideration divided by fraction of forwarded packets in a network with no selfish nodes
  • Objective
    • Find how normalized forwarding ratios for both cooperative and selfish nodes vary with:
    • Dropping probability of selfish nodes
    • Source rate
    • Percentage of selfish nodes
Simulations
  • Normalized forwarding ratio for different dropping ratios of selfish nodes
    • (Figure; setting: 5 selfish nodes, source rate 2 packets/s)
Simulations
  • Normalized forwarding ratio for different source rates
    • (Figure; setting: 5 selfish nodes, 100% dropping ratio for selfish nodes)
Simulations
  • Normalized forwarding ratio for different numbers of selfish nodes
    • (Figure; setting: source rate 2 packets/s, 100% dropping ratio for selfish nodes)
  • Key point: selfishness does not improve performance, which is consistent with the assumption that nodes are rational

Conclusion
  • Studied how reputation-based mechanisms help cooperation emerge among selfish users
    • Showed properties of previously proposed schemes
    • Proposed new mechanism called DARWIN
  • DARWIN is
    • Robust to imperfect measurements (pe)
    • Collusion-resistant
    • Able to achieve full cooperation (LEMMA)
    • Insensitive to parameter choices
Comments
  • Contribution: Apply CTFT to Wireless Ad-hoc Networks
  • Reliable as long as assumptions hold
  • Assumed nodes do not lie about perceived dropping probability
    • Liars can get better payoffs!
  • Assumed nodes are rational
  • Only the previous stage is considered
  • Normalized forwarding ratios, but not payoffs, are shown in the simulation results.