Clean Slate Ubicomp Device System Architecture

Jon Crowcroft,

http://www.cl.cam.ac.uk/~jac22


Thank you, but you are going in the opposite direction!

I can also carry it for you!

I have 100 MB of data, who can carry it for me?

Give it to me, I have 1 GB of phone flash.

Don’t give it to me! I am running out of storage.

Reach an access point.

There is one in my pocket…

Internet

Search La Bohème.mp3 for me

Finally, it arrives…

Search La Bohème.mp3 for me

Search La Bohème.mp3 for me


System Architecture

Radical Device Stuff +

Cloud Stuff

Research: split/migrate/cache/proxy/dynamics

Motivation #1
  • Mobile users currently have a very bad experience with networking
    • Applications do not work without networking infrastructure such as 802.11 access points or cell phone data coverage
    • Local connectivity is plentiful (WiFi, Bluetooth, etc), but very hard for end users to configure and use
  • Example: Train/plane on the way to London
    • How to send a colleague sitting opposite some slides to review?
    • How to get information on restaurants in London? (Clue: someone else is bound to have it cached on their device)
Underlying Problem
  • Applications tied to network details and operations via use of IP-based sockets interface
    • What interface to use
    • How to route to destination
    • When to connect
  • Apps survive by using directory services
    • Address book maps names to email addresses
    • Google maps search keywords to URLs
    • DNS maps domain names to IP addresses
  • Directory services mean infrastructure
Phase transitions and networks
  • Solid networks: wired, or fixed wireless mesh
    • Long lived end-to-end routes
  • Liquid networks: Mobile Ad-Hoc Networking (MANET)
    • Short lived end-to-gateway routes
  • Gaseous networks: Delay Tolerant Networking (DTN), Pocket Switched Networking (PSN)
    • No routes at all!
    • Opportunistic, store and forward networking
    • One way paths, asymmetry, node mobility carries data
  • Haggle targets all three, so must work in most general case, i.e. “gaseous”
Haggle Overview

Clean-slate redesign of mobile node

Spans MAC to application layers (inclusive), but is itself layerless – uses six “managers”

Key Features:

  • Store-and-forward architecture with data persisting inside Haggle, not a separate file system
  • App-layer protocols (SMTP, HTTP, etc) moved into Haggle rather than the apps themselves
  • Forwarding decisions made on “name graphs”, allowing just-in-time binding
  • Resource management integrated
  • API is asynchronous and data-centric


Overview of D3N Architecture

Each node is responsible for storing, indexing, searching, and delivering data

Primitive functions associated with core D3N calculus syntax are part of the runtime system

Prototype on MS Mobile .NET + Haggle Runtime


Data-Driven Declarative Networking (D3N)

How to program distributed computation?

Declarative is a new idea in networking

  • Ex. P2: Building overlay: Network properties specified declaratively
  • Use of Functional Programming: Functions are first class values and can be both the input and the result of other functions
  • FP: Simple/clean semantics, expressive, inherent parallelism
  • Queries/Filters etc. can be expressed as higher-order functions that are applied in a distributed setting

Runtime system provides the necessary native library functions that are specific to each device

  • Prototype: F# + .NET for mobile devices

Similar approach to the LINQ project

  • Extends .NET Framework with language-integrated operations for querying, storing and transforming data (targets .NET)

Example: Query to Networks

D3N:

select name from poll() where institute = “Computer Laboratory”

F#:

poll()

|> filter (fun r -> r.institute = “Computer Laboratory”)

|> map (fun r -> r.name)

[Figure: nodes A–E exchanging query messages of the form (code, nodeid, TTL, data)]

Queries are part of source level syntax

  • Distributed execution (single node programmer model)
  • Familiar syntax
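The two forms above are equivalent; a quick sketch in Python (used here only for illustration, the D3N prototype itself is F#), where poll() is a hypothetical stand-in for the runtime primitive that gathers records from reachable nodes:

```python
# Sketch in Python (the D3N prototype itself is F#); poll() is a
# hypothetical stand-in for the runtime primitive that gathers
# records from reachable nodes.

def poll():
    return [
        {"name": "alice", "institute": "Computer Laboratory"},
        {"name": "bob", "institute": "Engineering"},
        {"name": "carol", "institute": "Computer Laboratory"},
    ]

# select name from poll() where institute = "Computer Laboratory",
# expressed as filter followed by map, as in the F# pipeline:
names = [r["name"]
         for r in poll()
         if r["institute"] == "Computer Laboratory"]

print(names)  # -> ['alice', 'carol']
```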
Trust, the Cloud Society: Cloud Atlas…

Anil Madhavapeddy (Cambridge/Imperial) and

Daniele Quercia (UCL/MIT)…

Who do you trust?
  • Your phone
    • lost/stolen/broken/hacked
  • The cloud
    • Unreachable, goes broke, spies on you
  • Solution spaces
    • Migrate cloud state on/off the phone
    • Need encapsulation of this (social VM)?
    • P2P solutions too (nearby devices)
    • Resource (sensor) pooling
  • Does social sign-on scale? Is it usable?
Why measure human mobility?
  • Mobility increases capacity of dense mobile network [tse/grossglauser]
  • But mobility also creates disconnections
  • Human mobility patterns determine communication opportunities
  • And discover social groupings - see later for resource allocation (e.g. spectrum)
Experimental setup
  • iMotes
    • ARM processor
    • Bluetooth radio
    • 64k flash memory
  • Bluetooth Inquiries
    • 5 seconds every 2 minutes
    • Log {MAC address, start time, end time} tuple of each contact
Contact and Inter-contact time
  • Inter-contact is important
    • Affects the feasibility of opportunistic networks
    • Nature of distribution affects choice of forwarding algorithm
    • Rarely studied
Infocom 2005 experiment
  • 54 iMotes distributed
  • Experiment duration: 3 days
  • 41 yielded useful data
  • 11 with battery or packaging problems
  • 2 not returned
  • [data on crawdad, have several other datasets from Hong Kong, Barcelona, Cambridge, Lisbon, and of course other people have done this too:-]
Brief summary of data
  • 41 iMotes
  • 182 external devices
  • 22459 contacts between iMotes
  • 5791 contacts between iMote/external device
  • External devices are non-iMote devices in the environment, e.g. BT mobile phone, Laptop.
Contacts seen by an iMote

[Figure: number of contacts seen by each iMote, split into iMote and external-device contacts]

What do we see?
  • Power law distribution for contact and Inter-contact time
  • Both iMotes and external nodes
  • Does not agree with currently used mobility models, e.g. random waypoint
  • Power law coefficient < 1
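How such a coefficient can be estimated from inter-contact samples, sketched in Python for illustration: the standard continuous maximum-likelihood estimator for the tail exponent of the CCDF. The synthetic data and cutoff below are made up, not the iMote traces:

```python
import math
import random

def ccdf_exponent(samples, xmin):
    """MLE for the tail exponent a in P(X > x) ~ x^(-a), for x >= xmin
    (continuous approximation)."""
    tail = [x for x in samples if x >= xmin]
    return len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic inter-contact times with P(X > x) = x^(-0.7), i.e. a
# coefficient below 1 as observed, via inverse-transform sampling.
random.seed(0)
a = 0.7
samples = [(1.0 - random.random()) ** (-1.0 / a) for _ in range(20000)]

print(ccdf_exponent(samples, 1.0))  # close to 0.7
```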
K-clique Communities in Infocom06 Dataset

[Figure: K=4 clique communities detected: Barcelona Group, Paris Group A, Paris Group B, Lausanne Group]
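A minimal sketch of the standard clique-percolation notion of k-clique communities: two k-cliques belong to the same community if they share k-1 nodes. The graph below is a toy example, not the Infocom06 trace, and the brute-force search is only suitable for small graphs:

```python
from itertools import combinations

def k_clique_communities(edges, k):
    """Clique percolation: k-cliques sharing k-1 nodes are merged
    into one community. Brute force, fine for small graphs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = sorted(adj)
    cliques = [frozenset(c) for c in combinations(nodes, k)
               if all(b in adj[a] for a, b in combinations(c, 2))]
    # Union-find over cliques that overlap in k-1 nodes.
    parent = list(range(len(cliques)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(cliques)), 2):
        if len(cliques[i] & cliques[j]) >= k - 1:
            parent[find(i)] = find(j)
    comms = {}
    for i, c in enumerate(cliques):
        comms.setdefault(find(i), set()).update(c)
    return sorted(sorted(c) for c in comms.values())

# Two triangles sharing an edge form one 3-clique community;
# an isolated triangle forms another.
edges = [(1,2),(1,3),(2,3),(2,4),(3,4),(5,6),(5,7),(6,7)]
print(k_clique_communities(edges, 3))  # -> [[1, 2, 3, 4], [5, 6, 7]]
```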

Data Objects (DOs)

  • DO = set of attributes = {type, value} pairs
    • Exposing metadata facilitates search
  • Can link to other DOs
    • To structure data that should be kept together
    • To allow apps to categorise/organise
  • Apps/Haggle managers can “claim” DOs to assert ownership

[Figure: a message DO linked to an attachment DO]
DO Filters
  • Queries on fields of data objects
  • E.g. “content-type” EQUALS “text/html” AND “keywords” INCLUDES “news” AND “timestamp” >= (now() – 1 hour)
  • DO filters are also a special case of DOs
  • Haggle itself can match DOFilters to DOs – apps don’t have to be involved
  • Can be persistent or be sent remotely…
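A sketch of filter matching against a DO’s attribute set, in Python for illustration. The matches() helper and the attribute layout mirror the example filter above but are not the actual Haggle API:

```python
import time

# Illustrative only (not the Haggle API): a DO is a dict of
# attributes; a filter is a list of (attribute, predicate) pairs.

def matches(do_attrs, do_filter):
    return all(attr in do_attrs and pred(do_attrs[attr])
               for attr, pred in do_filter)

now = time.time()
news_page = {"content-type": "text/html",
             "keywords": ["news", "sport"],
             "timestamp": now - 600}          # 10 minutes old

# "content-type" EQUALS "text/html" AND "keywords" INCLUDES "news"
# AND "timestamp" >= now() - 1 hour
recent_news = [("content-type", lambda v: v == "text/html"),
               ("keywords",     lambda v: "news" in v),
               ("timestamp",    lambda v, t=now: v >= t - 3600)]

print(matches(news_page, recent_news))  # -> True
```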
Layerless Naming
  • Haggle needs just-in-time binding of user level names to destinations
  • Q: when messaging a user, should you send to their email server or look in the neighbourhood for their laptop’s MAC address?
    • A: Both, even if you already reached one. E.g. you can send email to a server and later pass them in the corridor, or you could see their laptop directly, but they aren’t carrying it today so you’d better email it too…
  • Current layered model requires ahead-of-time resolution by the user themselves in the choice of application (e.g. email vs SMS)
Name Graphs comprised of Name Objects
  • Name Graph represents full variety of ways to reach a user-level name
  • NO = special class of DO
  • Used as destinations for data in transit
  • Names and links between names obtained from
    • Applications
    • Network interfaces
    • Neighbours
    • Data passing through
    • Directories
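A sketch of just-in-time resolution over a name graph (illustrative data structures, not Haggle’s): resolving a user-level name collects every deliverable leaf, so delivery can be attempted via all of them:

```python
# Illustrative sketch: a name graph maps a user-level name to every
# known way of reaching it; all names below are hypothetical.

name_graph = {
    "Alice": ["alice@example.org", "alice-laptop"],
    "alice@example.org": ["smtp:mail.example.org"],
    "alice-laptop": ["mac:00:16:cb:xx", "ip:10.0.0.5"],
}

def resolve(name, graph):
    """Depth-first walk collecting leaf name objects (deliverable names)."""
    children = graph.get(name)
    if not children:
        return [name]
    leaves = []
    for c in children:
        leaves.extend(resolve(c, graph))
    return leaves

print(resolve("Alice", name_graph))
# -> ['smtp:mail.example.org', 'mac:00:16:cb:xx', 'ip:10.0.0.5']
```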
Forwarding Objects
  • Special class of DO used for storing metadata about forwarding
    • TTL, expiry, etc
  • Since full structure of naming and data is sent, “intermediate” nodes are empowered to:
    • Use data as they see fit
    • Use up-to-date state and whole name graph to make best forwarding decision

[Figure: a forwarding object (FO) linking data objects (DOs) and the name objects (NOs) of its destinations]

Connectivities and Protocols
  • Connectivities (network interfaces) say which “neighbours” are available (including “Internet”)
  • Protocols use this to determine which NOs they can deliver to, on a per-FO basis
    • P2P protocol says it can deliver any FO to neighbour-derived NOs if corresponding neighbour is visible
    • HTTP protocol can deliver FOs which contain a DOFilter asking for a URL, if “Internet” neighbour is present
  • Protocols can also perform tasks directly
    • POP protocol creates EmailReceiveTask when Internet neighbour is visible
Forwarding Algorithms

  • Forwarding algorithms create Forwarding Tasks to send data to suitable next-hops
  • Can also create Tasks to perform signalling
  • Many forwarding algorithms can run simultaneously

[Figure: each algorithm scores {Protocol, Name, Neighbour} combinations per FO; each x is a scalar “benefit” of a forwarding task]
Resource Management – Tasks and Cost/Benefit
  • Task( Benefit getBenefit(), Cost getCost() )
  • Cost = {Energy, Time on network X, Money}
  • Benefit = {App, User, Forwarding}
  • Resource manager does cost/benefit comparison using some utility function
  • Owner’s preferences must also be applied
    • E.g. don’t spend my money on others’ traffic
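The cost/benefit comparison can be sketched with an illustrative linear utility function and owner weights (all names and numbers below are made up, not the actual resource manager):

```python
# Hedged sketch: a task carries a cost vector (energy, network time,
# money) and a benefit score; the resource manager runs whichever
# tasks have positive utility under the owner's weights.

def utility(task, weights):
    cost = sum(weights[k] * task["cost"].get(k, 0) for k in weights)
    return task["benefit"] - cost

weights = {"energy": 1.0, "network": 0.5, "money": 10.0}  # owner prefs

tasks = [
    {"name": "send-own-email", "benefit": 8.0,
     "cost": {"energy": 1.0, "network": 2.0}},
    {"name": "forward-strangers-video", "benefit": 1.0,
     "cost": {"energy": 2.0, "money": 0.5}},   # spends owner's money
]

runnable = [t["name"] for t in tasks if utility(t, weights) > 0]
print(runnable)  # -> ['send-own-email']
```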
Implications of using Tasks
  • Tasks can come from all managers – http fetch, email receive, neighbour discovery, etc
    • Key illustration of “layerless” architecture
  • Tasks are executed at dynamic times/intervals (or not done at all) based on
    • Current resource costs
    • Other tasks
    • User priorities/preferences
  • “Too many” tasks is fine, even encouraged – unlike with IP where network operations are queued
    • E.g. it is hard for a current web client to express “predictively download these pages, if energy and bandwidth are plentiful and free”
Neighbour Discovery

  • Connectivities have responsibility for neighbour discovery
  • Protocols use neighbours to “mark” NOs “nearby”
  • Resource management controls frequency of neighbour discovery

[Figure: applications (messaging, web, etc) sit above the Haggle Application Interface and the Protocol, Resource, Data, Name, Forwarding and Connectivity managers, above connectivities (WiFi, BT, GPRS, etc); steps: 1. set task, 2. execute, 3. discovery, 4. neighbour list, 5. insert new names]

Sending Data

  • API for send split into three (sets of) calls
  • FO can be sent to many nodes using many protocols
  • Asynchronous
    • Benefits of send change with time and context

[Figure: 1. insert data, 2. insert names, 3. call “send”, 4. decide next hop X, 5. set task “send via X”, 6. execute (when worth it), 7. send!, 8. get & encode data, 9. raw data, 10. connect & transmit via connectivities (WiFi, BT, GPRS, etc)]

Receiving Data

  • Incoming data is still processed using tasks
  • Eventually inserted into Data Manager
  • Apps “listen” by registering interests (DOFilters)

[Figure: 1. bind to networks, 2. incoming connection, 3. connection, 4. query resource use, 5. incoming data, 6. insert data objects, 7. notify interested apps, 8. mine names]

Aside on security etc
  • Security was “left out” for version 1 in this 4-year EU project, but threats were considered
  • Data security can reuse existing solutions of authentication/encryption
    • With proviso that it is not possible to rely on a synchronously available trusted third party
  • Some new threats to privacy
    • Neighbourhood visibility means trackability
    • Name graphs could include quite private information
  • Incentives to cooperate are an issue
    • Why should I spend any bandwidth/energy on your stuff?
Implementation Status
  • Implemented in Java for XP and Linux
  • Ported to Windows Mobile using C# (Java also runs)
  • Code at http://haggle.cvs.sourceforge.net/
  • Connectivity: WiFi, GPRS, (Bluetooth, SMS)
  • Protocol: P2P, SMTP, POP, HTTP, (Search)
  • Data: SQL, In-Memory, (SQLite)
  • Name, Resource, Forwarding: “Good defaults”
  • ForwardingAlgs: Direct, Epidemic, (…)
  • Apps: Email, Web (both via proxies)
Eiko Yoneki, Ioannis Baltopoulos and Jon Crowcroft

University of Cambridge Computer Laboratory

Systems Research Group

D3N*: Programming Distributed Computation in Pocket Switched Networks (extra slides…)

*Data Driven Declarative Networking


Declarative Networking

Declarative is a new idea in networking

  • e.g. Search: ‘what to look for’ rather than ‘how to look for’
  • Abstract complexity in networking/data processing

P2: Building overlay using Overlog

  • Network properties specified declaratively

LINQ: extends .NET with language-integrated operations to query/store/transform data

DryadLINQ: extends LINQ, similar to Google’s MapReduce

  • Automatic parallelization from sequential declarative code

Opis: Functional-reactive approach in OCaml

D3N: Data-Driven Declarative Networking

How to program distributed computation?

Use Declarative Networking

  • Use of Functional Programming
    • Simple/clean semantics, expressive, inherent parallelism
  • Queries/Filters etc. can be expressed as higher-order functions that are applied in a distributed setting

Runtime system provides the necessary native library functions that are specific to each device

  • Prototype: F# + .NET for mobile devices

D3N and Functional Programming I

Functions are first-class values

  • They can be both input and output of other functions
  • They can be shared between different nodes (code mobility)
  • Not only data but also functions flow

Language syntax does not have state

  • Variables are only ever assigned once; hence reasoning about programs becomes easier

(of course message passing and threads encode state)

Strongly typed

  • Static assurance that the program does not ‘go wrong’ at runtime, unlike scripting languages

Type inference

  • Types are not declared explicitly, hence programs are less verbose

D3N and Functional Programming II

Integrated features from query language

  • Assurance as in logic programming

Appropriate level of abstraction

  • Imperative languages closely specify the implementation details (how); declarative languages abstract too much (what)
  • Imperative – predictable result about performance
  • Declarative language – abstract away many implementation issues
D3N Syntax and Semantics I

Very few primitives

  • Integers, strings, lists, floating-point numbers and other primitives are recovered through constructor application

Standard FP features

  • Declaring and naming functions through let-bindings
  • Calling primitive and user-defined functions (function application)
  • Pattern matching (similar to switch statement)
  • Standard features as in ordinary programming languages (e.g. ML or Haskell)


D3N Syntax and Semantics II

Advanced features

  • Concurrency (fork)
  • Communication (send/receive primitives)
  • Query expressions (local and distributed select)


D3N Language (Core Calculus Syntax)

// Registering a proximity event listener
Event.register( Event.OnEncounter,
  fun d:device ->
    if d.nID = “B” && distance(self,d) < 3 then
      dispatch NodeEncountered(d);
)

Runtime System

Language relies on a small runtime system

  • Operations implemented in the runtime system written in F#

Each node is responsible for data:

  • Storing
  • Indexing
  • Searching
  • Delivering
  • Data has Time-To-Live (TTL)
  • Each node propagates data to the other nodes.
  • A search query w/TTL travels within the network until it expires
  • When the node has the matching data, it forwards the data
  • Each node gossips its own metadata when it meets other nodes
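The TTL-limited search above can be sketched as hop-by-hop flooding over a toy in-memory network (Python for illustration; the topology, stores and filename are made up, echoing the intro scenario):

```python
# Toy network: who can hear whom, and what each node stores.
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
store = {"A": set(), "B": set(), "C": {"La Boheme.mp3"}}

def search(start, item, ttl):
    """Flood a query hop by hop; returns the node holding `item`,
    or None if the TTL expires first."""
    frontier, seen = [start], {start}
    while frontier and ttl >= 0:
        nxt = []
        for node in frontier:
            if item in store[node]:
                return node       # matching node answers the query
            for n in neighbours[node]:
                if n not in seen:
                    seen.add(n)
                    nxt.append(n)
        frontier, ttl = nxt, ttl - 1
    return None

print(search("A", "La Boheme.mp3", ttl=2))  # -> C
print(search("A", "La Boheme.mp3", ttl=0))  # -> None (expired)
```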


Kernel Event Handler

Kernel maintains

  • An event queue (queue)
  • A list of functions for each event (fenc, fdep)

Kernel processes

  • It removes an event from the front of the queue (e)
  • Pattern matches against the event type
  • Calls all the registered functions for the particular event
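The kernel loop can be sketched in Python (the runtime itself is F#): an event queue plus a per-event list of registered functions; the event name is taken from the listener example, everything else is illustrative:

```python
from collections import deque

handlers = {}          # event type -> list of registered functions
events = deque()       # the kernel's event queue

def register(event_type, fn):
    handlers.setdefault(event_type, []).append(fn)

def dispatch(event_type, payload):
    events.append((event_type, payload))

def run():
    while events:
        event, payload = events.popleft()   # remove from the front
        for fn in handlers.get(event, []):  # call all registered fns
            fn(payload)

seen = []
register("NodeEncountered", lambda d: seen.append(d))
dispatch("NodeEncountered", "B")
run()
print(seen)  # -> ['B']
```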



Example: Vote among Nodes

Voting application: implements a distributed voting protocol for choosing a location for dinner

Rules

  • Each node votes once
  • A single node initiates the application
  • Ballots should not be counted twice
  • No infrastructure-based communication is available, or it is too expensive

Top-level expression

  • Node A sends the code to all nodes
  • Nodes map in parallel (pmap) the function voteOfNode to their local data, and send back the result to A
  • Node A aggregates (reduce) the results from all nodes and produces a final tally
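The round (pmap then reduce) can be sketched with one thread per node, so an unavailable node cannot block the round; Python for illustration, and the node names, votes and simulated failure are all made up:

```python
import threading
import queue

local_vote = {"B": "curry", "C": "pizza", "D": "curry"}

def vote_of_node(node):
    """Pretend RPC: D is out of radio range and never answers."""
    if node == "D":
        raise TimeoutError("node out of range")
    return local_vote[node]

def pmap(fn, nodes, timeout=1.0):
    results = queue.Queue()
    def worker(n):
        try:
            results.put(fn(n))
        except Exception:
            pass  # unavailable node: drop its reply, don't block
    threads = [threading.Thread(target=worker, args=(n,)) for n in nodes]
    for t in threads:
        t.start()
    for t in threads:
        t.join(timeout)
    out = []
    while not results.empty():
        out.append(results.get())
    return out

def reduce_tally(votes):
    tally = {}
    for v in votes:
        tally[v] = tally.get(v, 0) + 1
    return tally

votes = pmap(vote_of_node, ["B", "C", "D"])
print(reduce_tally(votes))  # {'curry': 1, 'pizza': 1}
```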



Sequential Map function (smap)

Inner working

  • It sends the code to execute on the remote node
  • It blocks waiting for a response from the node
  • Continues mapping the function to the rest of the nodes in a sequential fashion
  • An unavailable node blocks the entire computation


Parallel Map Function (pmap)

[Figure: node A pmaps over nodes B–G in parallel]

Inner working

  • Similar to the sequential case
  • The send/receive for each node happen in a separate thread
  • An unavailable node does not block the entire computation



Reduce Function

Inner working

  • The reduce function aggregates the results from a map
  • The reduce gets executed on the initiator node
  • All results must have been received before the reduce can proceed



Cascaded Map Function

[Figure: (a) social graph over nodes A–G; (b) nodes for map at A; (c) nodes for map at B]

Social Graph can be exploited for map function

  • Logical topology extracted from social networks
  • Construct a minimum spanning tree rooted at node A
  • Use the tree to guide task propagation



Outlook and Future Work

Current reference implementation:

  • F# targeting .NET platform taking advantage of a vast collection of .NET libraries for implementing D3N primitives

Future work:

  • Security issues (e.g. executable code migrating from node to node) are currently out of the scope of this paper
  • Validate and verify the correctness of the design by implementing a compiler targeting various mobile devices
  • Release the code into the public domain

http://www.cl.cam.ac.uk/~ey204

Email: eiko.yoneki@cl.cam.ac.uk

Prototype Application: Email

  • Standard email client
  • Haggle provides localhost SMTP and POP servers
  • Emails become FO/DO/NOs
  • Protocols
    • SMTP marks email addrs “nearby” when Internet is available
    • P2P marks MAC addrs “nearby” when visible
    • POP creates “task” to receive email when Internet visible (allows dynamic email check interval…)

[Figure: a standard email client talks to an email client on Haggle; delivery goes either over the Internet (Haggle Data Object encapsulated in email) or directly to a neighbouring Haggle node via P2P (as Haggle DOs)]
Prototype Application: Web

  • HTTP request becomes DO filter for URL
  • Standard web clients used; Haggle acts as web proxy
  • Handled via:
    • Local DOs (including data being routed for others)
    • HTTP protocol to Internet
    • P2P protocol to neighbours
  • Extension: search
    • Search page translated into DO filter on content, not URL
    • Search protocol indirects via search engine and gets top N links

[Figure: the web client sends object requests over http to Haggle’s internal object cache, which fetches over http from a web server on the Internet or from a neighbouring Haggle node’s internal object cache]

Email Performance
  • Haggle/infrastructure and without-Haggle are comparable (i.e. proxying etc adds acceptably low overhead)
  • Haggle/ad hoc wins – not just in elapsed time but also because the email account would not accept >5 MB attachments
  • Haggle/both is in between, with more variability
    • Due to neighbour sensing, email checking, etc causing 802.11 to switch between AP and ad hoc mode often
    • Being addressed with MultiNet approach [Chandra,Bahl2]
    • Also exploring >1 network type (e.g. add GPRS)
Future work idea: Resource-aware media sharing
  • Devices could do a lot of media sharing proactively
    • Downloading public content predictively (e.g. webpages)
    • Backing up/keeping multiple caches of personal content (e.g. photos, media)
    • Sharing media with friends/family/public
  • But resource management is crucial
    • Contrast laptop in docking station to laptop on plane
    • Proactive tasks can exhaust battery life easily
  • Current architecture does not suffice
    • IP stack is difficult to configure with user priorities (e.g. when should smart phone use GPRS and when WiFi?)
    • Current file system makes it hard for apps to fill disk with “predictive” content
Using Haggle for resource-aware media sharing
  • Haggle can store application data predictively and evict it if “more important” data comes along
    • Keep your photo albums in empty laptop space
    • Keep “public” media in TiVo-like fashion
  • Haggle makes forwarding decisions with knowledge of resource consumption and application priorities
    • Doesn’t ship holiday snaps over GPRS from Australia
  • Haggle can share data with neighbours, facilitating use of free fast local connectivity wherever possible
    • Syncing without ActiveSync
Implication of Power Law Coefficient
  • Large coefficient => smaller delay
  • Consider the 2-hop relaying [tse/grossglauser] analysis [TechRep]
  • Denote the power law coefficient by a
  • For a > 2:

Any stateless algorithm achieves a finite expected delay.

  • For a > (m+1)/m and #{nodes} ≥ 2m:

There exists a forwarding algorithm with m copies and a finite expected delay.

  • For a < 1:

No stateless algorithm (even flooding) achieves a bounded delay (Orey’s theorem).

What do we see?
  • Nodes are not equal: some active, some not
    • Does not agree with current mobility models, which assume nodes are equally distributed
  • iMotes are seen more often than external devices
  • More iMote-pair contacts
    • Identify sharing communities to improve forwarding algorithms
What do we see?
  • Still a power law distribution for any three-hour period of the day
  • Different power law coefficients at different times of the day
    • Maybe different forwarding algorithms for different times of the day
Consequences for mobile networking
  • Mobility models need to be redesigned
    • Exponential decay of inter-contact time is wrong
    • Mechanisms tested with that model need to be re-analyzed with new mobility assumptions
  • Stateless forwarding does not work
    • Can we benefit from heterogeneity to forward by communities?
    • Should we consider different algorithms for different times of the day?
Human Hubs: Popularity

[Figure: node popularity across the Reality, Cambridge, Infocom06 and HK datasets]

Forwarding Scheme Design Space

[Figure: design space spanning the human dimension (explicit social structure: Label; structure in cohesive group: Clique Label) and the network plane (structure in degree: Rank, Degree); Bubble combines both dimensions]


Social Structure Based Forwarding in DTNs & MANETs

BUBBLE RAP: Use of social hubs (e.g. celebrities and postmen) with high betweenness centrality, combined with community structure, for improved forwarding efficiency.

Implemented as a forwarding component of the Haggle framework (http://www.haggleproject.org). Haggle provides data-centric opportunistic networking with persistence of search, delay tolerance and opportunism.

Metrics and Forwarding:

Preferred neighbors can be

  • clique based on distributed detection, or explicit label (e.g. same squad).
  • based on centrality in detected graph.

Bubble combines both.

Majority of DTN routing is based on epidemic, optimized flooding algorithms:

  • Limited number of copies
  • Context-based [PROPHET]
  • Use of mobility [FERRY]

Social Structure Based Forwarding:

BUBBLE

RANK: Centrality based forwarding. Each node has global and local ranking of popularity

LABEL: Community based forwarding

BUBBLE RAP: Combination of centrality and community
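The BUBBLE RAP decision rule can be sketched as: bubble the message up the global popularity ranking until it reaches a member of the destination’s community, then bubble up the local (within-community) ranking. The communities and rankings below are illustrative, not measured values:

```python
# Illustrative communities and popularity rankings (higher = more popular).
community = {"a": "paris", "b": "paris", "c": "barcelona", "d": "barcelona"}
global_rank = {"a": 3, "b": 1, "c": 4, "d": 2}
local_rank = {"a": 2, "b": 1, "c": 2, "d": 1}

def should_forward(carrier, candidate, dest):
    dest_comm = community[dest]
    if community[candidate] == dest_comm:
        if community[carrier] == dest_comm:
            # both inside destination community: compare local popularity
            return local_rank[candidate] > local_rank[carrier]
        return True   # first member of the destination community accepts
    # outside the destination community: climb the global ranking
    return global_rank[candidate] > global_rank[carrier]

print(should_forward("b", "c", "d"))  # -> True  (c is in d's community)
print(should_forward("c", "a", "d"))  # -> False (a: wrong community, lower rank)
```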



Social Structure Based Forwarding in DTNs & MANETs (Cont)

BUBBLE performance versus flooding, waiting, or pure LABEL: the cost reduction with BUBBLE is significant.

BUBBLE will be effective for hierarchical community structure. To be evaluated.

Scaled Haggle experiments (up to 200 devices) on the university campus are planned. This will demonstrate BUBBLE/LABEL performance in the real world.

Social-based forwarding (e.g. use of community structure and centrality) shows significant improvement in forwarding efficiency


The End!
  • Some NOs for you:
    • jamesscott@acm.org
    • http://www.haggleproject.org/
  • Thanks for listening!
  • Any questions?