
Multicast EECS 122: Lecture 16

Department of Electrical Engineering and Computer Sciences

University of California, Berkeley



Broadcasting to Groups

  • Many applications are not one-to-one

    • Broadcast

    • Group collaboration

    • Proxy/Cache updates

    • Resource Discovery

  • Packets must reach a Group rather than a single destination

    • Group membership may be dynamic

    • More than one group member might be a source

  • Idea: After a group is established

    • Interested receivers join the group

    • The network takes care of group management

    • Recall RSVP

Examples: Webcasts, Radio/TV, Push/IE Channels, Chats, Video Conferencing, Audio Conferencing, Caches and Proxies



The Multicast Service Model

  • Membership access control

    • open group: anyone can join

    • closed group: restrictions on joining

  • Sender access control

    • anyone can send to group

    • anyone in group can send to group

    • only one host can send to group

  • Packet delivery is best effort

[Figure: source S sends [G, data] into the network; receivers R0, R1, …, Rn-1 join group G and the network delivers a copy to each member]
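This join/send service model maps directly onto the standard Berkeley sockets API. Below is a minimal sketch in Python; the group address 224.1.2.3 and port 5007 are made-up example values, and loopback of locally sent multicast traffic is assumed.

    import socket
    import struct

    GROUP, PORT = "224.1.2.3", 5007   # made-up example group and port

    # Receiver Ri: join group G, then wait for [G, data].
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)   # "R joins G"

    # Sender S: no join is needed just to send; delivery to the group is best effort.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)       # keep it on the local net
    tx.sendto(b"data", (GROUP, PORT))                                  # send [G, data]

    print(rx.recvfrom(1500))   # each member that joined G receives its own copy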


Multicast and Layering

  • Multicast can be implemented at different layers

    • data link layer

      • e.g. Ethernet multicast

    • network layer

      • e.g. IP multicast

    • application layer

      • e.g. as an overlay network like Kazaa

  • Which layer is best?


Multicast Implementation Issues

  • How are multicast packets addressed?

  • How is join implemented?

  • How is send implemented?

    • How does multicast traffic get routed?

      • This is easy at the link layer and hardest at the network layer

  • How much state is kept and who keeps it?


Ethernet Multicast

  • Reserve some Ethernet MAC addresses for multicast

  • To join group G

    • network interface card (NIC) normally only listens for packets sent to unicast address A and broadcast address B

    • to join group G, NIC also listens for packets sent to multicast address G (NIC limits number of groups joined)

    • implemented in hardware, so efficient

  • To send to group G

    • packet is flooded on all LAN segments, like broadcast

    • can waste bandwidth, but LANs should not be very large

  • Only host NICs keep state about who has joined → scalable to a large number of receivers and groups


Limitations of Data Link Layer Multicast

  • Single LAN

    • limited to small number of hosts

    • limited to a small diameter (low latency)

    • essentially all the limitations of LANs compared to internetworks

  • Broadcast doesn’t cut it in larger networks



IP Multicast: Interconnecting LANs

Interconnected LANs

  • LANs support link-level multicast

  • Map the globally unique multicast address to a LAN-based multicast address (LAN-specific algorithm)

  • IP group addresses are class D addresses

    • high-order bits 1110 (224.0.0.0/4), i.e., 224.0.0.0 to 239.255.255.255
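On Ethernet, the LAN-specific mapping is the standard one from RFC 1112: the low-order 23 bits of the class D address are placed behind the fixed 01:00:5e prefix. A minimal sketch (the helper name is mine, not from the lecture):

    import ipaddress

    def ip_to_ethernet_multicast_mac(group: str) -> str:
        # Standard mapping: 01:00:5e prefix + low-order 23 bits of the group address.
        addr = ipaddress.IPv4Address(group)
        if not addr.is_multicast:
            raise ValueError(f"{group} is not a class D (multicast) address")
        low23 = int(addr) & 0x7FFFFF            # drop the 5 high-order variable bits
        mac = (0x01005E << 24) | low23
        return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -1, -8))

    # Because 5 bits of the group address are dropped, 32 IP groups share each MAC:
    print(ip_to_ethernet_multicast_mac("224.0.0.1"))    # 01:00:5e:00:00:01
    print(ip_to_ethernet_multicast_mac("225.128.0.1"))  # 01:00:5e:00:00:01 (same MAC)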



Internet Group Management Protocol (IGMP)

  • Operates between Router and local Hosts, typically attached via a LAN (e.g., Ethernet)

    • Query response architecture

  • Router periodically queries the local Hosts for group membership information

    • Can be specific or general

  • Hosts receiving query set a random timer before responding

  • The first host whose timer expires sends a membership report

  • All the other hosts observe the report and suppress their own reports

  • To join a group, send an unsolicited Join (membership report)

    • Start a group by joining it

  • To leave, a host doesn't have to do anything

    • Soft state

[Figure: router sends an IGMP Query to 224.0.0.1; one host answers with a Report, which suppresses the Reports of the other hosts]
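A minimal sketch of the host-side query/report suppression logic described above (not a real IGMP implementation; the maximum response time and the send_report callback are illustrative assumptions):

    import random

    class GroupMembership:
        MAX_RESPONSE_TIME = 10.0          # seconds; illustrative, advertised in the Query

        def __init__(self, group):
            self.group = group
            self.report_at = None         # when our own report is scheduled to fire

        def on_query(self, now):
            # Set a random timer so all members don't answer at once.
            self.report_at = now + random.uniform(0, self.MAX_RESPONSE_TIME)

        def on_report_heard(self, group):
            # Another member already reported this group: suppress our own report.
            if group == self.group:
                self.report_at = None

        def on_tick(self, now, send_report):
            # Our timer expired first: we are the member that reports.
            if self.report_at is not None and now >= self.report_at:
                send_report(self.group)
                self.report_at = None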



Naïve Routing Option: Don't Change Anything

Point-to-point routing: S sends a separate unicast copy [R0, data], [R1, data], …, [Rn-1, data], one per receiver


Group abstraction not implemented in the network


This approach does not scale…

[Figure: a broadcast center sending separate unicast streams across the ISP and backbone]


Instead Build Trees

  • Copy data at routers

  • At most one copy of a data packet per link

[Figure: the broadcast center at the root of a distribution tree spanning the ISP and backbone]

  • Routers keep track of groups in real-time

  • “Path” computation is Tree computation

  • LANs implement layer 2 multicast by broadcasting


Routing: Approaches

  • Kinds of Trees

    • Shared Tree

    • Source Specific Trees

  • Tree Computation Methods

  • Intradomain Update methods

    • Build on unicast Link State: MOSPF

    • Build on unicast Distance Vector: DVMRP

    • Protocol Independent: PIM

  • Interdomain routing: BGMP

    • This is still evolving…



Source Specific Trees

  • Each source is the root of its own tree

  • One tree for each source

  • Can pick “good” trees, but lots of state at the routers!

[Figure: example network (nodes 1–13) with a separate tree rooted at each source]



Shared Tree

One tree used by all

[Figure: the same example network (nodes 1–13) with a single shared tree]

Can’t pick “good” trees but minimal state at the routers



Tree Computation

  • A tree which connects all the group nodes is a Steiner Tree

  • Finding the min cost Steiner Tree is NP hard

  • The tree does not need to span the whole network (it only has to reach the group members)

  • Heuristics are known

[Figure: weighted example network (nodes 1–13) with a Steiner tree connecting the group members]


Tree Computation

  • A tree that connects all of the nodes in the graph is a spanning tree

  • Finding a minimum spanning tree is much easier

  • Prune back to get a multicast tree

[Figure: a minimum spanning tree of the weighted example network, pruned back to the branches that reach group members]
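A minimal sketch of the spanning-tree-then-prune idea: build a minimum spanning tree (Prim's algorithm here) and keep only the branches that lead to group members. The toy graph and weights are made up, not the ones from the slide's figure.

    import heapq

    def minimum_spanning_tree(graph, root):
        # graph: {node: {neighbor: weight}}.  Returns {child: parent} for Prim's MST.
        parent, seen = {}, {root}
        frontier = [(w, root, v) for v, w in graph[root].items()]
        heapq.heapify(frontier)
        while frontier:
            w, u, v = heapq.heappop(frontier)
            if v in seen:
                continue
            seen.add(v)
            parent[v] = u
            for nxt, w2 in graph[v].items():
                if nxt not in seen:
                    heapq.heappush(frontier, (w2, v, nxt))
        return parent

    def prune(parent, members):
        # Keep only the tree edges on a path from some group member up to the root.
        keep = set()
        for m in members:
            while m in parent and m not in keep:
                keep.add(m)
                m = parent[m]
        return {v: parent[v] for v in keep}

    graph = {"s": {"a": 1, "b": 4}, "a": {"s": 1, "b": 2, "c": 5},
             "b": {"s": 4, "a": 2, "d": 1}, "c": {"a": 5}, "d": {"b": 1}}
    print(prune(minimum_spanning_tree(graph, "s"), {"d"}))   # only the s-a-b-d branch survives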



Link State Protocols, e.g., MOSPF

  • Use in conjunction with a link state protocol for unicast

  • Enhance the LSP updates with group membership

  • Compute best tree from source

  • Flood Membership in link state advertisements

  • Dynamics are a problem


Distance Vector Multicast Routing

  • An elegant extension to DV routing

  • Use shortest path DV routes to determine if link is on the source-rooted spanning tree


Distance Vector Multicast

  • Extension to DV unicast routing

  • Packet forwarding

    • iff incoming link is shortest path to source

    • out all links except incoming

    • Reverse Path Flooding (RPF)

    • packets always take shortest path

      • assuming delay is symmetric

  • Issues

    • Every link receives each multicast packet, even if no interested hosts: Pruning

    • Some links (LANs) may receive multiple copies: Reverse Path Broadcasting

[Figure: source s, receiver r, and routers labeled with their distance-vector distance to s (s:1, s:2, s:3)]
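A minimal sketch of the RPF forwarding check, assuming a hypothetical per-router table dv_next_hop that records, for each source, the link the unicast DV routes would use to reach it:

    def rpf_forward(source, incoming_link, dv_next_hop, links):
        # Forward only if the packet arrived on the link we would ourselves use to
        # reach the source (the reverse path); otherwise drop it as a likely duplicate.
        if incoming_link != dv_next_hop[source]:
            return []
        # Passed the check: flood out every link except the one it arrived on.
        return [l for l in links if l != incoming_link]

    # Router with links a, b, c whose shortest path to s is via link a:
    print(rpf_forward("s", "a", {"s": "a"}, ["a", "b", "c"]))   # ['b', 'c']
    print(rpf_forward("s", "b", {"s": "a"}, ["a", "b", "c"]))   # []  (dropped)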


Example

  • Flooding can cause a given packet to be sent multiple times over the same link

  • Solution: Reverse Path Broadcasting

[Figure: flooding from S causes routers x and y to forward onto the same link, so a duplicate packet reaches a, z, and b]


Reverse Path Broadcasting (RPB)

  • Extend DV to eliminate duplicate packets

  • Choose parent router for each link

    • router with shortest path to source

    • lowest address breaks ties

    • each router can compute independently from already known information

    • each router keeps a bitmap with one bit for each of its links

  • Only parent forwards onto link

[Figure: routers labeled with their distance to s; for each link, the router closer to s is the parent (P) and forwards onto it, the others are children (C)]


Identify Child Links

  • Routing updates identify parent

  • Since distances are known, each router can easily figure out if it's the parent for a given link

  • In case of tie, lower address wins
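A minimal sketch of that election for a single link, assuming each router already knows every neighbor's distance to the source from the DV updates (the function name and data layout are mine, not the protocol's):

    def is_parent(my_addr, my_dist, neighbors_on_link):
        # neighbors_on_link: (address, distance_to_source) for every other router
        # attached to this link.  The router with the shortest distance to the
        # source forwards onto the link; a tie goes to the lowest address.
        best = (my_dist, my_addr)
        for addr, dist in neighbors_on_link:
            if (dist, addr) < best:
                best = (dist, addr)
        return best == (my_dist, my_addr)

    # Router 5 at distance 2 shares a link with router 9 (distance 2) and router 12 (distance 3):
    print(is_parent(5, 2, [(9, 2), (12, 3)]))   # True  (tie with 9 broken by lower address)
    print(is_parent(9, 2, [(5, 2), (12, 3)]))   # False (5 wins the tie)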


Reverse Path Broadcasting (RPB)

[Figure: source S with routers x and y; a router forwards only onto its child links (e.g., the child link of x for S, toward a, z, and b)]


Don’t really want to flood!

  • This is still a broadcast algorithm – the traffic goes everywhere

  • Need to “Prune” the tree when there are subtrees with no group members

  • Strategy

    • Identify leaf networks with no members

      • IGMP

    • Propagate this information up the subtree


How much to Prune?

  • Truncated Reverse Path Broadcasting: Prunes to prevent flooding of all packets

  • Reverse Path Multicasting: More aggressive. Scale router state with the number of active groups

    • Use on-demand pruning so that router group state scales with number of active groups (not all groups)


Pruning Details

  • Prune (Source,Group) at leaf if no members

    • Send Non-Membership Report (NMR) up tree

  • If all children of router R prune (S,G)

    • Propagate the prune for (S,G) to R's parent

  • On timeout:

    • Prune dropped

    • Flow is reinstated

    • Downstream routers re-prune

  • Note: again a soft-state approach
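A minimal sketch of this soft-state bookkeeping at one router; the prune lifetime and the helper callbacks are illustrative assumptions, not values from the lecture:

    import time

    PRUNE_LIFETIME = 120.0    # seconds; illustrative value

    prunes = {}               # (source, group) -> {child_link: expiry_time}

    def on_prune(source, group, child_link, all_child_links, send_prune_to_parent):
        # Record a prune from one child; if every child has now pruned (S,G),
        # propagate the prune up to our own parent.
        entry = prunes.setdefault((source, group), {})
        entry[child_link] = time.time() + PRUNE_LIFETIME
        if set(entry) == set(all_child_links):
            send_prune_to_parent(source, group)

    def expire_prunes():
        # Soft state: when a prune times out it is simply dropped, traffic flows
        # again, and downstream routers with no members must re-prune.
        now = time.time()
        for key in list(prunes):
            entry = prunes[key]
            for link, expiry in list(entry.items()):
                if expiry <= now:
                    del entry[link]
            if not entry:
                del prunes[key]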


Pruning Details

  • How to pick prune timers?

    • Too long → large join time

    • Too short → high control overhead

  • What do you do when a member of a group (re)joins?

    • Issue prune-cancellation message (grafts)


MBONE

  • What to do if most of the routers in the internet are not multicast enabled?

  • Tunnel between multicast enabled routers

  • Creates an “overlay” network, but both operate at layer 3…

  • This is how multicast was first deployed

[Figure: IP multicast (IPM) islands connected by tunnels across the unicast IP Internet]


RPM Scaling

  • State requirements:

    • O(Sources × Groups) active state

  • How to get better scaling?

    • Hierarchical Multicast

    • Core-based Trees


Core Based Trees (CBT)

  • Pick a “rendezvous point” for the group, called the core

    • Shared tree

  • Unicast packet to core and bounce it back to multicast group

  • Tree construction is receiver-based

    • Joins can be tunneled if required

    • Only nodes on the tree are involved (one tree per group)

  • Reduce routing table state from O(S x G) to O(G)


Example

  • Group members: M1, M2, M3

  • M1 sends data

[Figure: M1, M2, and M3 send control (join) messages toward the root (core); data from M1 flows to the core and down the shared tree to M2 and M3]


Disadvantages

  • Sub-optimal delay

  • Single point of failure

    • If the core goes down, everything is lost until error recovery elects a new core

  • Small, local groups with non-local core

    • Need good core selection

    • Optimal choice (computing topological center) is NP complete


PIM

  • Popular intradomain method

    • UUNET streaming uses this

  • Recognizes that most groups are very sparse

    • Why have all of the routers participate in keeping state?

  • Two modes

    • Dense mode: flood and prune

    • Sparse mode: Core-based shared tree approach with a twist


PIM Sparse Mode

  • Routers explicitly issue JOIN and Prune messages to the Core

  • Receivers typically send a Join message of the form (*,G)

    • As it propagates towards the core it establishes a new branch of the shared tree

  • To send on the tree, tunnel to the core and then traverse the shared tree

    • This can lead to bad performance

  • To optimize sending from S, the core can send a Join message of the form (S,G) toward S

    • Creates a specific path from S to the core

  • Receivers can send (S,G) Joins toward S as well, gradually replacing the shared tree with a source-specific tree
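A minimal sketch of how a (*,G) Join builds a branch of the shared tree hop by hop toward the core/RP (only the idea, not PIM's real message format or state machine; next_hop and the oif table are made-up abstractions):

    def propagate_join(group, rp, receiver_router, next_hop, oif):
        # oif[router] maps ('*', group) -> set of downstream neighbors to copy onto.
        prev, router = "(local LAN)", receiver_router
        while True:
            key = ("*", group)
            already_on_tree = key in oif[router]
            oif[router].setdefault(key, set()).add(prev)    # grow the branch
            if router == rp or already_on_tree:
                break                                       # reached the RP or an existing branch
            prev, router = router, next_hop(router, rp)     # keep walking toward the RP

    # Toy topology A - B - C - R, with the RP at R and a receiver behind A:
    oif = {r: {} for r in "ABCR"}
    next_hop = lambda router, dest: {"A": "B", "B": "C", "C": "R"}[router]
    propagate_join("G", "R", "A", next_hop, oif)
    print(oif["B"])   # {('*', 'G'): {'A'}} : B now copies G traffic toward A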


Problems with Network Layer Multicast

  • Scales poorly with number of groups

    • A router must maintain state for every group that traverses it

    • many groups traverse core routers

  • Supporting higher level functionality is difficult

    • NLM: best-effort multi-point delivery service

    • Reliability and congestion control for NLM are complicated

  • Deployment is difficult and slow

    • Difficult to debug problems given the service model


NLM Reliability

  • Assume reliability through retransmission

  • Sender can not keep state about each receiver

    • e.g., what receivers have received

    • number of receivers unknown and possibly very large

  • Sender can not retransmit every lost packet

    • even if only one receiver misses packet, sender must retransmit, lowering throughput

  • N(ACK) implosion

    • described next


(N)ACK Implosion

  • (Positive) acknowledgements

    • ack every n received packets

    • what happens for multicast?

  • Negative acknowledgements

    • only ack when data is lost

    • assume packet 2 is lost

[Figure: S multicasts packets 1, 2, 3 to receivers R1, R2, R3]


NACK Implosion

  • When a packet is lost all receivers in the sub-tree originated at the link where the packet is lost send NACKs

[Figure: packet 2 is lost; R1, R2, and R3 each send a NACK (“2?”) back toward S while S is already sending packet 3]


Scalable Reliable Multicast (SRM)

  • Randomize NACKs (repair requests)

  • All traffic, including repair requests and repairs, is multicast

  • A repair can be sent by any node that heard the request

  • A node suppresses its repair request if another node has just sent a request for the same data item

  • A node suppresses a repair if another node has just sent the repair


Avoiding NACK Implosions

  • Every node estimates distance (in time) from every other node

    • Information is carried in session reports (< 5% of bandwidth)

  • Nodes use randomized function of distance to decide when to

    • Send a request repair

    • Reply to a request repair
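A minimal sketch of such a randomized timer in the style of SRM, where the delay grows with the estimated distance d to the data source so that nearer nodes tend to fire first; the constants C1 and C2 are illustrative, not from the lecture:

    import random

    C1, C2 = 2.0, 2.0      # illustrative tuning constants

    def request_delay(d_to_source):
        # Wait a random time proportional to our distance from the source before
        # multicasting a repair request; hearing someone else's matching request
        # during the wait cancels our own timer (suppression).
        return random.uniform(C1 * d_to_source, (C1 + C2) * d_to_source)

    # A node 10 ms from the source usually fires before one 50 ms away:
    print(request_delay(0.010), request_delay(0.050))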


ISPs charge by bandwidth

[Figure: the broadcast center, ISP, and backbone from the earlier example]

Remember what interdomain protocols optimize for… They make more money without multicast.


Application Layer Multicast

  • Provide multicast functionality above IP unicast

  • The forwarding nodes could be the hosts themselves or multicast gateways in the network

  • Advantages

    • No multicast dial-tone needed

    • Performance can be optimized to application

      • Loss, priorities etc.

    • More control over the topology of the tree

    • Easier to monitor and control groups

  • Disadvantages

    • Scale

    • Performance if just implemented on the hosts (not gateways)


Summary

  • Large amount of work on multicast routing

  • Major problems

    • preventing flooding

    • minimizing state in routers

    • denial-of-service attacks

    • deployment

  • Multicast can be implemented at different layers

    • lower layers optimize performance

    • higher layers provide more functionality

  • IP Multicast still not widely deployed

    • Ethernet multicast is deployed

    • application layer multicast systems are promising

A. Parekh, EE122 S2003. Revised and enhanced F'02 Lectures

