
Lecture 5: Topology Control

Anish Arora

CIS788.11J

Introduction to Wireless Sensor Networks

Material uses slides from Paolo Santi and Alberto Cerpa

Problems affected by link quality
  • Topology Control
  • Neighborhood Management
  • Routing
  • Time Synchronization
  • Aggregation
  • Application Management
References
  • Topology Control tutorial, Mobihoc’04, Paolo Santi
  • SPAN, Benjie Chen, Kyle Jamieson, Robert Morris, Hari Balakrishnan, MIT
  • GAF/CEC, Y. Xu, S. Bien, Y. Mori, J. Heidemann & D. Estrin, USC/ISI – UCLA
  • ASCENT, Alberto Cerpa and Deborah Estrin, UCLA
  • GS3: Scalable Self-configuration and Self-healing in Wireless Networks, PODC 2002, Hongwei Zhang, Anish Arora
  • M. Demirbas, A. Arora, V. Mittal, FLOC: A Fast Local Clustering Service for Wireless Sensor Networks, DIWANS 2004

Why Control Communications Topology

  • High density deployment is common
      • Even with minimal sensor coverage, we get a high density communication network (radio range > typical sensor range)
  • Energy constraints
      • When not easily replenished
  • Power usage
      • Observation: radios consume about the same power in the idle state as in the Tx and Rx states
      • Chicken & egg problem: to save energy, radios must be turned off (not simply reduce packet transmissions); but if radios are turned off, nodes cannot receive messages
Problem Statement(s)
  • Find an MCDS (minimum connected dominating set), i.e. a minimum subset of nodes that is both:
    • a dominating set (every node is in it or adjacent to a node in it)
    • connected
  • Choose a transmit-power level whereby network is connected
    • per node or the same for all nodes
    • with per-node power levels there is the issue of avoiding asymmetric links
    • cone-based algorithm:
      • node u transmits with the minimum power ρu s.t. there is at least one neighbor in every cone of angle x centered at u
    • k-neighbors algorithm (see the sketch below):
      • each node chooses its nearest k neighbors for its subgraph
      • k is chosen s.t. the generated graph is connected w.h.p.
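To make the k-neighbors rule concrete, here is a minimal sketch assuming nodes are given as 2-D coordinates; the function names and the brute-force nearest-neighbor search are illustrative, not part of any cited protocol, and the second helper addresses the asymmetric-link sub-problem mentioned later.

```python
import math

def k_neighbors_subgraph(positions, k):
    """Each node keeps directed edges to its k nearest neighbors.
    positions: dict node_id -> (x, y).  Returns node_id -> set of chosen neighbors.
    Connectivity w.h.p. requires k large enough (roughly on the order of log n
    for uniformly random deployments)."""
    edges = {}
    for u, (ux, uy) in positions.items():
        by_distance = sorted((math.hypot(ux - vx, uy - vy), v)
                             for v, (vx, vy) in positions.items() if v != u)
        edges[u] = {v for _, v in by_distance[:k]}
    return edges

def prune_asymmetric(edges):
    """Keep only symmetric links: an edge survives only if both endpoints chose it."""
    return {u: {v for v in nbrs if u in edges.get(v, set())}
            for u, nbrs in edges.items()}
```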
Problem Statement(s)
  • Find a minimum subset of nodes that provides some density
    • in each geographic region ⇒ connectivity
    • we’ll look at the examples of GAF, SPAN, GS3, ASCENT
  • Given a connected graph G, find a subgraph G’ which can route messages between nodes in energy-efficient way
    • both unicast and broadcast spanners
    • reduces interference as well

Sub-problems:

    • Prune asymmetric links
    • Tolerate node perturbations
    • Load balance
Where should TC be positioned in the protocol stack?

No clear answer in the literature

One view (top to bottom):

  • Routing Layer
  • TC Layer
  • MAC Layer

Routing protocol may trigger TC execution (to get better routes)

  • Routing (structure) involves only active nodes

MAC protocol may trigger TC execution (if neighborhood changes)

  • TC controls coarse-grain duty-cycling, MAC controls fine-grain
  • Mode changes need to be coordinated to avoid conflicts
Assumptions: Radio/MAC
  • Circular or Isotropic Models: GS3
  • Grid-based connectivity: GAF, GS3
  • Radio/MAC dependencies:
    • 802.11 Power Saving mode: Span
    • Promiscuous mode: ASCENT, CEC
Assumptions: Neighbor Information
  • Locality:
    • 1-hop neighbor: GS3, ASCENT
    • n-hop neighbor (with various n > 1): GAF, CEC, Span …
    • Dependency on routing: GS3, Span
  • Measurement-based: ASCENT, CEC
Properties: Reactivity to dynamics & load balancing
  • Local re-calculation of state: GS3
  • Global re-calculation of state: Span
  • Local recovery: GS3, GAF, CEC, ASCENT
  • Explicit load balancing mechanisms: GS3, Span, GAF, CEC
SPAN
  • Goal: preserve fairness and capacity & still provide energy savings
  • SPAN elects “coordinators” from all nodes to create backbone topology
  • Other nodes remain in power-saving mode and periodically check if they should become coordinators
  • Tries to minimize # of coordinators while preserving network capacity
  • Depends on an ad-hoc routing protocol to get list of neighbors & the connectivity matrix between them
  • Runs above the MAC layer and “alongside” routing
Coordinator Election & Announcement
  • Rule: if 2 neighbors of a non-coordinator node cannot reach each other (either directly or via 1 or 2 coordinators), node becomes coordinator
  • Announcement contention is resolved by delaying coordinator announcements with a randomized backoff delay
  • delay = ((1 − Er/Em) + (1 − Ci/C(Ni, 2)) + R) · Ni · T, where C(Ni, 2) is the number of neighbor pairs (see the sketch below)

Er/Em: energy remaining/max energy

Ni: number of neighbors for node i

Ci: number of pairs of neighbors connected through node i

R: Random[0,1]

T: RTT for small packet over wireless link
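A hedged sketch of the eligibility rule and the announcement backoff above; the helper names are hypothetical, and "(Ni pairs)" is read as the number of neighbor pairs C(Ni, 2).

```python
import random

def should_volunteer(neighbors, reachable):
    """SPAN eligibility rule: a non-coordinator volunteers if some pair of its
    neighbors cannot reach each other directly or via one or two coordinators.
    reachable(a, b) is an assumed helper that encapsulates that check."""
    nbrs = list(neighbors)
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            if not reachable(nbrs[i], nbrs[j]):
                return True
    return False

def announcement_delay(E_r, E_m, C_i, N_i, T):
    """Randomized backoff before announcing coordinatorship:
    delay = ((1 - Er/Em) + (1 - Ci / C(Ni, 2)) + R) * Ni * T."""
    pairs = max(N_i * (N_i - 1) // 2, 1)     # C(Ni, 2): number of neighbor pairs
    utility = min(C_i / pairs, 1.0)          # fraction of pairs i would connect
    R = random.random()                      # Random[0, 1]
    return ((1 - E_r / E_m) + (1 - utility) + R) * N_i * T
```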

Coordinator Withdrawal
  • Each coordinator periodically checks if it should withdraw as a coordinator
  • A node withdraws as coordinator if each pair of its neighbors can reach each other directly or via some other coordinators
  • To ensure fairness, after a node has been a coordinator for some period of time, it withdraws if every pair of nodes can reach each other through other neighbors (even if they are not coordinators)
  • After sending a withdraw message, the old coordinator remains active for a “grace period” to avoid loss of routes until new coordinators are elected
GAF/CEC: Geographical Adaptive Fidelity
  • Each node uses location information (provided by some orthogonal mechanism) to associate itself to a virtual grid
  • All nodes in a virtual grid must be able to communicate with all nodes in an adjacent grid
  • Assumes a deterministic radio range, a global coordinate system and global starting point for grid layout
  • GAF is independent of the underlying ad-hoc routing protocol
Virtual Grid Size Determination
  • r: grid size, R: deterministic radio range
  • r² + (2r)² ≤ R²
  • r ≤ R/√5
Parameter settings
  • enat: estimated node active time
  • enlt: estimated node lifetime
  • Td, Ta, Ts: discovery, active, and sleep timers
  • Ta = enlt/2
  • Ts ∈ [enat/2, enat]
  • Node ranking:
    • Active > discovery (only one node active per grid)
    • Same state, higher enlt --> higher rank (longer expected time first)
    • Node ids to break ties
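The ranking rule reads naturally as a sort key; a sketch under the assumption that state and enlt are available as node fields (field names are hypothetical, and the direction of the id tie-break is not specified in the slides).

```python
STATE_RANK = {"active": 2, "discovery": 1, "sleeping": 0}

def gaf_rank(node):
    """Sort key for deciding which node stays awake in a grid cell: active beats
    discovery; within the same state, longer expected lifetime (enlt) wins; node
    id breaks remaining ties (tie-break direction assumed)."""
    return (STATE_RANK[node["state"]], node["enlt"], node["id"])

# The highest-ranked node in a cell stays up; the rest can sleep:
# leader = max(nodes_in_cell, key=gaf_rank)
```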
CEC
  • Cluster-based Energy Conservation
  • Nodes are organized into overlapping clusters
  • A cluster is defined as a subset of nodes that are mutually reachable in at most 2 hops
Cluster Formation
  • Cluster-head Selection: the node with the longest estimated lifetime among its neighbors (ties broken by node id)
  • Gateway Node Selection:
    • primary gateways have higher priority
    • gateways with more cluster-head neighbors have higher priority
    • gateways with longer lifetime have higher priority
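Likewise, CEC's selection rules can be expressed as simple keys; the field names are hypothetical and the tie-break direction for node ids is assumed.

```python
def elect_clusterhead(neighbors):
    """CEC cluster-head: the node with the longest estimated lifetime,
    ties broken by node id (tie-break direction assumed)."""
    return max(neighbors, key=lambda n: (n["lifetime"], n["id"]))

def gateway_priority(node):
    """Lexicographic gateway priority: primary gateways first, then more
    cluster-head neighbors, then longer lifetime."""
    return (node["is_primary"], node["clusterhead_neighbors"], node["lifetime"])

# gateways.sort(key=gateway_priority, reverse=True)
```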
Challenges for local healing of solid-disc clustering

  • Equi-radius solid-disc clustering with bounded overlaps is not achievable in a distributed and local manner

[Figure: a new node arriving between existing clusters A and B triggers cascading re-clustering of the solid discs]
FLOC protocol
  • Solid-disc clustering with bounded overlaps is achievable in a distributed and local manner for approximately equal radii
    • A stretch factor m ≥ 2 produces a partitioning that respects the solid-disc property
      • Each clusterhead has all the nodes within unit radius of itself as members, and is allowed to have nodes up to distance m from itself
  • FLOC is locally self-healing, for m≥2
    • Faults and changes are contained within the respective cluster or within the immediate neighboring clusters
FLOC program …
  • By taking the unit distance to be the reliable communication radius and m to be the maximum communication radius, FLOC
    • exploits the double-band nature of the wireless radio model
    • achieves communication- and energy-efficient clustering
  • FLOC achieves clustering in O(1) time regardless of the size of the network
    • Time, T, depends only on the density of nodes & is constant
    • Through simulations and implementations, we suggest a suitable value for T for achieving fast clustering without compromising the quality of resulting clusters
Model
  • Geometric network, e.g., 2-D coordinate plane
  • Radio model is double-band *
    • Reliable communication within unit distance = in-band
    • Unreliable communication within 1 < d < m = out-band
  • Nodes have i-band/ o-band estimation capability
    • RSSI-based using signal-strength as indicator of distance
    • Statistics-based using average link quality as an indicator
  • Fault model
    • Fail-stop and crash
    • New nodes can join the network
Problem statement
  • A distributed, local, scalable, and self-stabilizing clustering program, FLOC, to construct network partitions such that
    • a unique node is designated as a leader of each cluster
    • all nodes in the i-band of each leader belong to that cluster
    • maximum distance of a node from its leader is m
    • each node belongs to a cluster
    • no node belongs to multiple clusters
Justification for stretch factor ≥ 2

  • For m ≥ 2, local healing is achieved: a new node is
    • either subsumed by one of the existing clusters,
    • or allowed to form its own cluster without disturbing neighboring clusters

[Figure: the two cases — the new node is subsumed into an existing cluster, or it forms a new cluster between existing clusters]
Basic FLOC program
  • Status variable at each node j:
    • idle : j is not part of any cluster and j is not a candidate
    • cand : j wants to be a clusterhead, j is a candidate
    • c_head : j is a clusterhead, j.cluster_id==j
    • i_band : j is an inner-band member of a clusterhead j.cluster_id; a clusterhead itself is an i_band member
    • o_band : j is an outer-band member of j.cluster_id
  • The effects of the 6 actions on the status variable:
FLOC actions
  • idle ∧ random wait time from [0…T] expired → become a cand and bcast cand msg
  • receiver of cand msg is within in-band ∧ its status is i_band → receiver sends a conflict msg to the cand
  • candidate hears a conflict msg → candidate becomes o_band for the respective cluster
  • candidacy period Δ expires → cand becomes c_head, and bcasts c_head message
  • idle ∧ c_head message is heard → become i_band or o_band resp.
  • receiver of c_head msg is within in-band ∧ is o_band → receiver joins cluster as i_band
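A compact sketch of the six guarded actions as an event-driven state machine; message delivery, timers, and the in-band test are abstracted behind assumed callbacks (send, in_band_of), so this is an illustration of the action structure rather than the FLOC implementation.

```python
import random

IDLE, CAND, C_HEAD, I_BAND, O_BAND = "idle", "cand", "c_head", "i_band", "o_band"

class FlocNode:
    """One node running the six basic FLOC actions (messaging, timers and
    the in-band test are assumed to be provided by the runtime)."""

    def __init__(self, node_id, send, in_band_of, T, Delta):
        self.id = node_id
        self.status = IDLE
        self.cluster_id = None
        self.send = send                    # assumed: send(msg_type, payload) broadcasts
        self.in_band_of = in_band_of        # assumed: in_band_of(other_id) -> bool
        self.wait = random.uniform(0, T)    # action 1: random wait drawn from [0, T]
        self.Delta = Delta                  # candidacy period

    def on_wait_expired(self):              # action 1: idle node volunteers
        if self.status == IDLE:
            self.status = CAND
            self.send("cand", self.id)

    def on_cand_msg(self, cand_id):         # action 2: in-band i_band member objects
        if self.status == I_BAND and self.in_band_of(cand_id):
            self.send("conflict", (cand_id, self.cluster_id))

    def on_conflict_msg(self, cluster_id):  # action 3: candidate backs off to o_band
        if self.status == CAND:
            self.status, self.cluster_id = O_BAND, cluster_id

    def on_candidacy_expired(self):         # action 4: unopposed candidate becomes head
        if self.status == CAND:
            self.status, self.cluster_id = C_HEAD, self.id
            self.send("c_head", self.id)

    def on_c_head_msg(self, head_id):       # actions 5 and 6: join the new cluster
        inside = self.in_band_of(head_id)
        if self.status == IDLE:
            self.status = I_BAND if inside else O_BAND       # action 5
            self.cluster_id = head_id
        elif self.status == O_BAND and inside:
            self.status, self.cluster_id = I_BAND, head_id   # action 6
```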
FLOC is fast
  • Assumption: atomicity condition of candidacy is observed by T
  • Theorem: Regardless of the network size FLOC produces the partitioning in T+Δ time
  • Proof:
    • An action is enabled at every node within at most T time
    • Once an action is enabled at a node, the node is assigned a clusterhead within Δ time
    • Once a node is assigned to a clusterhead, this property cannot be violated
      • action 6 makes a node change its clusterhead to become an i-band member, action 2 does not cause clusterhead to change
FLOC is locally-healing
  • Node failures
    • inherently robust to failure of non-clusterhead members
    • clusterhead failure is detected via a “lease” mechanism; the orphaned nodes re-execute clustering (see node additions)
  • Node additions
    • either join existing cluster, or
    • form a new cluster without disturbing immediate neighboring clusters
Extensions to basic FLOC algorithm
  • Extended FLOC algorithm ensures that solid-disc property is satisfied even when atomicity of candidacy is violated occasionally
  • Insight: Bcast is an atomic operation
    • Candidate that bcasts first locks the nodes in the vicinity for Δ time
    • Later candidates become idle again by dropping their candidacy when they find some of the nodes are locked
  • 4 additional actions to implement this idea
Simulation for determining T
  • Prowler, a realistic wireless sensor network simulator
    • MAC delay 25ms
  • Tradeoffs in selection of T
    • Short T leads to network contention, and hence, message losses
    • Tradeoff between faster completion time and quality of clustering
  • Scalability wrt network size
    • T depends only on the node density
      • In our experiments, the degree of each node is between 4 and 12
    • a constant T is applicable for arbitrarily large-scale networks
Implementation
  • Mica2 mote platform, 5-by-5 grid
  • Confirms simulation
GS3: Scalability via locality
  • Locality is hard for some graph problems
    • e.g., self-configuration and self-healing of routing tree
  • An ideal goal for locality: self-healing should be a function of the size of perturbation (in time, space, and energy)
  • Locality depends on model
System model
  • System
    • multiple “small” nodes and one “big” node, on a plane
    • node distribution
      • density: there exists Rt such that, with high probability, there are multiple nodes in any circular area of radius Rt
      • localization: relative location between nodes can be estimated
  • Perturbations
    • dynamic nodes
      • joins, leaves (deaths), state corruptions
    • mobile nodes
Problem: Geography-aware self-configuration
  • Geographic radius of clusters is crucial
    • for communication quality, energy dissipation, data aggregations & applications
  • Problem statement
    • Given R: ideal cell radius (R > Rt)
    • Construct a set of cells, connected via a “head” node in each cell, s.t.
      • radius of each cell is in [R − c, R + c], where c = f(Rt)
      • each node belongs to only one cell
      • cells and the connectivity graph over head nodes self-heal locally
Static networks
  • An ideal case:
  • In reality: no node may exist at some geometric centers (ILs), but with high probability there are nodes no more than Rt away from any IL (IL = Ideal Location)

How to find the set of cell heads
  • Bottom-up ?
    • hard to guarantee the placement and size of clusters
  • Top-down w.r.t. big node
    • use diffusing computation
    • but, accumulation in deviation of head location from IL is a problem


Organizing neighboring clusters & heads

Deviation problem is handled locally

  • instead of using real locations, node i uses its and its parent’s ILs
  • i calculates the ILs of next-band cells in its search region <LD, RD>
    • big node: <0°, 360°>
    • other nodes: <−60° − a, 60° + a>, where a = sin⁻¹(Rt / R)
  • for each IL, i ranks the nodes within radius Rt of the IL (by <D, A>), and selects the highest-ranked node as the corresponding cluster head
Summary: static networks
  • Cell structure is hexagonal
    • cell radius:
  • Time taken to form the structure is on the order of Db, where Db = the maximum distance between the big node and the small nodes
  • Scalability in self-configuration:
    • local coordination: only with nodes within range
    • local knowledge: each node maintains info about a constant number of nearby nodes
Dynamic networks
  • Dynamics include:
    • node join, leave (death), state corruption
  • Common vs. rare
    • common perturbations: node density is preserved
    • rare perturbations: node density is destroyed
  • Scalable self-healing is achieved via locality in:
    • intra-cell healing
    • inter-cell healing
    • sanity checking of state (invariants)
Local intra-cell healing
  • Head shift
    • upon head leaving (death)
    • local in a radius of Rt
  • Cell shift
    • upon the death of all the nodes in an area of radius Rt
    • local in a radius of R
    • independent but consistent shifts at individual cells → sliding of the global head-level structure
Local inter-cell healing & sanity checking
  • Local inter-cell healing: upon failure of intra-cell healing at head j,
    • first, the parent of j tries to find a new head j’
    • if that fails, the children of j find new parents
  • Local sanity checking of state invariants: upon detecting violation of the hexagonality property,
    • a node corrects itself after checking with its neighbors
    • when the state perturbation includes several nodes, the perturbed region corrects itself from the outside going in, and all nodes are corrected within time proportional to the size of the perturbed region
Summary: dynamic networks
  • Cell radius
    • for cells not adjoining any gap:
    • for cells adjoining a gap:
  • Head tree is now minimum distance tree rooted at the big node
  • Stabilization time from a perturbed state is on the order of Dp, where Dp = diameter of the continuously perturbed area
Summary: dynamic networks (contd.)
  • Scalability in self-healing:
    • local fault-containment and healing
    • local knowledge
  • Local healing and fault-containment enables
    • stable cell structure
    • lengthened lifetime: on the order of nc, where nc = the number of nodes in a cell
Related work
  • Cellular hexagon structure (Mac Donald ’79)
    • Preconfigured & not considering self-healing
  • LEACH (Heinzelman et al. ’00)
    • No guarantee about the placement and size of clusters
    • Perturbations dealt with by globally repeating the whole clustering process
  • Logical-radius based clustering (in Banerjee ’01)
    • non-local cluster maintenance, and no consideration of state corruption
    • only logical radius → long links and link asymmetry are possible
    • multiple rounds of diffusion
ASCENT
  • Adaptive Self-Configuring sEnsor Networks Topologies
  • Observation: different applications may require the underlying topology to have different characteristics. For example:
    • Minimal
    • Homogeneous with a certain degree of connectivity
    • Heterogeneous with different degrees of connectivity in different regions
  • Examples of these different regions may be:
    • Along a data flow path
    • Avoiding a data flow path
    • On the border of an event of interest
  • Input: application tolerance specified in terms of acceptable loss rate at any node
Model
  • Adapt to empirical measurements of link quality: each node assesses its connectivity & adapts its participation in the multi-hop topology based on the measured operating region
  • Assumptions: ASCENT needs to
    • turn off the radio (sleep state)
    • turn the NIC/MAC in promiscuous mode (passive state)
  • ASCENT runs on top of MAC and below routing; it does not use any information gathered by routing
ASCENT Basics

[Figure: (a) communication hole, (b) self-configuration transition, (c) final state — source and sink, data messages, help messages, neighbor announcements, passive and active neighbors]

  • Node state: active or passive
    • Active nodes are in topology & forward data packets (using orthogonal routing mechanism that runs on topology)
    • Passive nodes can sleep or collect network measurements
  • Each node measures # of neighbors and packet loss locally
  • Each node then decides to join the network topology or to adapt (e.g. reducing its duty cycle to save energy)
State Transitions

  • Sleep → Passive: after Ts
  • Passive → Sleep: after Tp
  • Passive → Test: neighbors < NT and loss > LT, or loss < LT & help received
  • Test → Active: after Tt
  • Test → Passive: neighbors > NT (high ID for ties) or loss > loss T0

NT: neighbor threshold

LT: loss threshold

Tx: state timer values (x = p: passive, s: sleep, t: test)

Details
  • Each node adds a sequence number per packet (for loss detection)
  • Neighbor estimator: based on a neighbor loss threshold (NLT) = 1 – 1/N (N: number of neighbors in the previous cycle)
  • Neighbor threshold value (NT) determines the average degree of connectivity in the network
  • Loss threshold determines the maximum data loss application can tolerate
  • The ratio of Tp to Ts (passive and sleep timers) determines the amount of energy savings and the convergence time
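A small sketch of the loss-based neighbor estimate, assuming per-link loss rates are already computed from the sequence numbers above; names are illustrative.

```python
def estimate_neighbors(link_loss, prev_count):
    """Count as neighbors the links whose measured loss rate is below
    NLT = 1 - 1/N, where N is the neighbor count from the previous cycle.
    link_loss: dict node_id -> measured loss rate in [0, 1]."""
    if prev_count <= 0:
        return len(link_loss)          # no history yet: count every node heard
    nlt = 1.0 - 1.0 / prev_count
    return sum(1 for loss in link_loss.values() if loss < nlt)
```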
Performance Results

Energy savings (normalized to the Active case, where all nodes stay on) as a function of density. ASCENT provides energy savings of up to 5.5x in high-density scenarios.

ASCENT Energy Savings Analysis

NT: neighbor threshold

Tp: passive state timer

Ts: sleep state timer

Sleep: power with radio off

Idle: power with radio on

timer ratio: Tp/Ts

power ratio: Sleep/Idle = 0.004

Topology Control from a Sensing Perspective

So far we have considered only the communications perspective

Sensing coverage model:

  • typically unit disk sensing
  • note: depends on object being sensed

Node deployment model:

  • deterministic (with no failures or with isolated failures)
  • approximated by a pdf or is random (as a result of rampant errors)

Coverage requirements:

  • Point coverage (deterministic or probabilistic guarantee)
  • Barrier coverage (deterministic or probabilistic guarantee)
  • Worst-case coverage: least exposed path
  • Tracking coverage: any uncovered path has length at most l
Sensing Coverage References
  • Survey:

“Coverage in Wireless Sensor Networks”, Mihaela Cardei, Jie Wu

  • For 1-coverage:

Peter Hall, "An Introduction to the Theory of Coverage Processes”, 1988

  • For k-coverage:

Santosh Kumar and Balogh, Mobisys 2004

  • For k-coverage under Poisson deployment:

Honghai Zhang and Jennifer Hou, Mobihoc 2004

Coverage Results
  • 1-point coverage with deterministic placement:
    • hexagonal layout is optimal
  • k-point coverage with deterministic placement :
    • question of optimal placement is open
  • k-point probabilistic coverage:
    • almost always k-coverage for Poisson deployment when

n r² ≥ ln(n) + k ln(ln(n)) + … (error term)

where n is the number of sensors and r is the sensing radius

    • random uniform deployment yields essentially the same result
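Taking the quoted inequality literally (ignoring the error term, and noting that the cited results may normalize r differently, e.g. with a factor of π on the left side), a quick numerical check could look like this:

```python
import math

def k_coverage_condition(n, r, k):
    """Leading terms of the almost-always k-coverage condition:
    n * r^2 >= ln(n) + k * ln(ln(n)).  Meaningful only for large n (n > e)."""
    return n * r * r >= math.log(n) + k * math.log(math.log(n))

def min_sensing_radius(n, k):
    """Smallest r satisfying the condition above for n sensors."""
    return math.sqrt((math.log(n) + k * math.log(math.log(n))) / n)
```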

Coverage Algorithms
  • Checking whether network is not suitably covered
    • point coverage violation check is possible locally
  • Maintaining coverage via sleep-wakeup
    • the optimal scheme is NP-complete if the deployment is unknown (so heuristics are used)
    • random independent scheduling, if deployment uniformly random
    • sentry rotation between redundant nodes in each cluster/region
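Two minimal sketches of the sleep-wakeup heuristics named above, random independent scheduling and sentry rotation; the wake probability and round structure are assumptions, not prescribed by the slides.

```python
import random

def random_independent_schedule(p_awake, num_rounds, rng=random.random):
    """Random independent scheduling: in each round the node stays awake with
    probability p_awake, with no coordination among neighbors."""
    return [rng() < p_awake for _ in range(num_rounds)]

def sentry_rotation(node_ids, num_rounds):
    """Sentry rotation: redundant nodes in a cluster/region take turns,
    one sentry awake per round (round-robin by node id)."""
    order = sorted(node_ids)
    return {r: order[r % len(order)] for r in range(num_rounds)}
```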
Both Communication and Sensing Topology Control
  • Relation between sensing radius and communication radius
  • If Comm radius ≥ 2 × Sensing radius, then k-coverage ⇒ k-connectivity
