
Three Topics in Parallel Communications

Public PhD Thesis presentation by Emin Gabrielyan

Emin Gabrielyan, Three Topics in Parallel Communications

Parallel communications: bandwidth enhancement or fault-tolerance?
  • In 1854, Cyrus Field started the project of the first transatlantic cable
  • After four years and four failed expeditions the project was abandoned

Parallel communications: bandwidth enhancement or fault-tolerance?
  • 12 years later
  • Cyrus Field made a new cable (2,730 nautical miles)
  • Jul 13, 1866: laying started
  • Jul 27, 1866: the first transatlantic cable between the two continents was operating

Parallel communications: bandwidth enhancement or fault-tolerance?
  • The dream of Cyrus Field was realized
  • But he immediately sent the Great Eastern back to sea to lay a second cable

Parallel communications: bandwidth enhancement or fault-tolerance?
  • September 17, 1866 – two parallel circuits were sending messages across the Atlantic
  • The transatlantic telegraph circuits operated for nearly 100 years

Parallel communications: bandwidth enhancement or fault-tolerance?
  • The transatlantic telegraph circuits were still in operation when:
  • In March 1964 (in the middle of the Cold War), Paul Baran presented to the US Air Force a project for a survivable communications network

Paul Baran

Parallel communications: bandwidth enhancement or fault-tolerance?
  • According to Baran’s theory
  • Even a moderate number of parallel circuits permits withstanding extremely heavy nuclear attacks

Parallel communications: bandwidth enhancement or fault-tolerance?
  • Four years later, on October 1, 1969
  • ARPANET of the US DoD, the forerunner of today’s Internet

Bandwidth enhancement by parallelizing the sources and sinks
  • Bandwidth enhancement can be achieved by adding parallel paths
  • But a greater capacity enhancement is achieved if we can replace the senders and destinations with parallel sources and sinks
  • This is possible in parallel I/O (first topic of the thesis)

Parallel transmissions in low latency networks
  • In coarse-grained HPC networks uncoordinated parallel transmissions cause congestion
  • The overall throughput degrades due to conflicts between large indivisible messages
  • Coordination of parallel transmissions is presented in the second part of my thesis

Classical backup parallel circuits for fault-tolerance
  • Typically the redundant resource remains idle
  • As soon as there is a failure with the primary resource
  • The backup resource replaces the primary one


[Figure: kidney diagram with the renal vein, renal artery, and ureter labeled on each of two kidneys]

Parallelism in living organisms
  • A bio-inspired solution is:
  • To use the parallel resources simultaneously

Simultaneous parallelism for fault-tolerance in fine-grained networks
  • All available paths are used simultaneously for achieving fault-tolerance
  • We use coding techniques
  • In the third part of my presentation (capillary routing)


Fine Granularity Parallel I/O for Cluster Computers

SFIO, a Striped File parallel I/O

Why is parallel I/O required?
  • A single I/O gateway for a cluster computer saturates
  • It does not scale with the size of the cluster

What is Parallel I/O for Cluster Computers
  • Some or all of the cluster computers can be used for parallel I/O

Objectives of parallel I/O
  • Resistance to multiple access
  • Scalability
  • High level of parallelism and load balance



Concurrent Access by Multiple Compute Nodes
  • No concurrent access overheads
  • No performance degradation
  • When the number of compute nodes increases

Scalable throughput of the parallel I/O subsystem
  • The overall parallel I/O throughput should increase linearly as the number of I/O nodes increases

[Figure: overall throughput of the parallel I/O subsystem rising linearly with the number of I/O nodes]

Concurrency and Scalability = Scalable All-to-All Communication


  • Concurrency and scalability can be represented by a scalable overall throughput as the number of compute and I/O nodes increases

[Figure: all-to-all throughput between compute nodes and I/O nodes growing with the number of nodes]

How is parallelism achieved?
  • Split the logical file into stripes
  • Distribute the stripes cyclically across the subfiles

[Figure: the logical file split into stripes distributed cyclically across six subfiles, file1–file6]
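The cyclic distribution above can be sketched as a simple offset mapping. This is an illustrative sketch, not the SFIO API; `stripe_location` and `subfiles_touched` are hypothetical helper names:

```python
def stripe_location(offset, stripe_size, n_subfiles):
    """Map a byte offset in the logical file to (subfile index,
    offset inside that subfile) under cyclic striping."""
    stripe = offset // stripe_size
    local_stripe = stripe // n_subfiles      # rank of the stripe within its subfile
    return stripe % n_subfiles, local_stripe * stripe_size + offset % stripe_size

def subfiles_touched(offset, length, stripe_size, n_subfiles):
    """Set of subfiles an I/O request (offset, length) spans."""
    first = offset // stripe_size
    last = (offset + length - 1) // stripe_size
    return {s % n_subfiles for s in range(first, last + 1)}

# A 1 MB request with a 4 MB stripe unit lands on a single subfile,
# while a 1 KB stripe unit spreads it over all 6 subfiles:
print(len(subfiles_touched(0, 1_000_000, 4_000_000, 6)))  # 1
print(len(subfiles_touched(0, 1_000_000, 1_024, 6)))      # 6
```

The usage lines preview the next two slides: the stripe unit size decides how well a single request is parallelized.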

Impact of the stripe unit size on the load balance

  • When the stripe unit size is large there is no guarantee that an I/O request will be well parallelized

[Figure: an I/O request on the logical file reaching only a few of the subfiles when the stripe unit is large]

Fine granularity striping with good load balance

  • Low granularity ensures good load balance and high level of parallelism
  • But results in high network communication and disk access cost

[Figure: an I/O request on the logical file spread evenly over all subfiles when the stripe unit is small]

Fine granularity striping is to be maintained
  • Most HPC parallel I/O solutions are optimized only for large I/O blocks (on the order of megabytes)
  • But we focus on maintaining fine granularity
  • The problems of network communication and disk access cost are addressed by dedicated optimizations

Overview of the implemented optimizations
  • Disk access request aggregation (sorting, cleaning overlaps and merging)
  • Network communication aggregation
  • Zero-copy streaming between the network and fragmented memory patterns (MPI derived datatypes)
  • Support of the multi-block interface, efficiently optimizing application-related file and memory fragmentation (MPI-I/O)
  • Overlapping of network communication with disk access in time (at the moment for write operations only)

Disk access optimizations
  • Sorting
  • Cleaning the overlaps
  • Merging
  • Input: striped user I/O requests
  • Output: optimized set of I/O requests
  • No data copy

[Figure: a multi-block I/O request (block 1, block 2, block 3) striped onto a local subfile; 6 I/O access requests are merged into 2]
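The three steps (sorting, cleaning overlaps, merging) can be sketched in a few lines. `optimize_requests` is a hypothetical name and the byte ranges below are invented for illustration:

```python
def optimize_requests(requests):
    """Sort (offset, length) disk access requests, then absorb
    overlapping or adjacent ones into single accesses. No data is
    copied: only the request list is rewritten."""
    merged = []
    for off, length in sorted(requests):
        if merged and off <= merged[-1][1]:          # overlaps or touches previous
            merged[-1][1] = max(merged[-1][1], off + length)
        else:
            merged.append([off, off + length])
    return [(off, end - off) for off, end in merged]

# Six striped requests collapse into two contiguous disk accesses:
requests = [(0, 100), (50, 100), (150, 50), (300, 100), (380, 40), (350, 100)]
print(optimize_requests(requests))  # [(0, 200), (300, 150)]
```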

Network Communication Aggregation without Copying

From: application memory

  • Striping across 2 subfiles
  • Derived datatypes on the fly
  • Contiguous streaming

Logical file

To: remote I/O nodes

Remote I/O node 1

Remote I/O node 2

Optimized throughput as a function of the stripe unit size
  • 3 I/O nodes
  • 1 compute node
  • Global file size: 660 Mbytes
  • TNet
  • About 10 MB/s per disk

All-to-all stress test on Swiss-Tx cluster supercomputer
  • Stress test is carried out on Swiss-Tx machine
  • 8 full crossbar 12-port TNet switches
  • 64 processors
  • Link throughput is about 86 MB/s

Swiss-Tx supercomputer in June 2001

SFIO on the Swiss-Tx cluster supercomputer
  • MPI-FCI
  • Global file size: up to 32 GB
  • Mean of 53 measurements for each number of nodes
  • Nearly linear scaling with a 200-byte stripe unit!
  • The network becomes a bottleneck above 19 nodes


Liquid scheduling for low-latency circuit-switched networks

Reaching the liquid throughput in HPC wormhole switching and in optical lightpath routing networks

Upper limit of the network capacity
  • Given a set of parallel transmissions
  • and a routing scheme
  • The upper limit of the network’s aggregate capacity is its liquid throughput

Distinction: Packet Switching versus Circuit Switching
  • Packet switching has been replacing circuit switching since the 1970s (more flexible, manageable, scalable)

Distinction: Packet Switching versus Circuit Switching
  • New circuit-switching networks are emerging
  • In HPC, wormhole routing aims at extremely low latency
  • In optical networks, packet switching is not possible due to the lack of technology



Coarse-Grained Networks
  • In circuit switching the large messages are transmitted entirely (coarse-grained switching)
  • Low latency
  • The sink starts receiving the message as soon as the sender starts transmission

[Figure: fine-grained packet switching versus coarse-grained circuit switching between a message source and a message sink]

Parallel transmissions in coarse-grained networks
  • When the nodes transmit in parallel across a coarse-grained network in an uncoordinated fashion, congestion may occur
  • The resulting throughput can be far below the expected liquid throughput

Congestions and blocked paths in wormhole routing

  • When the message encounters a busy outgoing port it waits
  • The previous portion of the path remains occupied

[Figure: Source1–Source3 transmitting toward Sink1–Sink3; a waiting message keeps the upstream links of its path occupied]

Hardware solution in Virtual Cut-Through routing

  • In VCT when the port is busy
  • The switch buffers the entire message
  • Much more expensive hardware than in wormhole switching

[Figure: Source1–Source3 and Sink1–Sink3; the switch buffers the entire blocked message instead of holding the path]

Application level coordinated liquid scheduling
  • Hardware solutions are expensive
  • Liquid scheduling is a software solution
  • Implemented at the application level
  • No investments in network hardware
  • Coordination between the edge nodes and knowledge of the network topology is required
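As a rough illustration of application-level coordination, a greedy pass can already partition transfers into congestion-free timeframes. This is a sketch only: it is not the thesis's skeleton-based search (which finds truly liquid schedules), and the link names are invented:

```python
def greedy_schedule(transfers):
    """Partition transfers (each given as the set of links it occupies)
    into timeframes of mutually link-disjoint, non-congesting transfers."""
    remaining = list(transfers)
    timeframes = []
    while remaining:
        frame, used, deferred = [], set(), []
        for links in remaining:
            if used.isdisjoint(links):   # shares no link: fits this timeframe
                frame.append(links)
                used |= links
            else:
                deferred.append(links)   # conflicts: wait for a later frame
        timeframes.append(frame)
        remaining = deferred
    return timeframes

# Three transfers, two of which share link "l2":
pattern = [{"l1", "l2"}, {"l2", "l3"}, {"l3", "l4"}]
print(len(greedy_schedule(pattern)))  # 2
```

A greedy schedule is always congestion-free but not necessarily minimal; reaching the liquid throughput requires the optimized search described in this part of the thesis.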

Example of a simple traffic pattern
  • 5 sending nodes (above)
  • 5 receiving nodes (below)
  • 2 switches
  • 12 links of equal capacity
  • The traffic consists of 25 transfers

Round robin schedule of all-to-all traffic pattern
  • First, all nodes simultaneously send the message to the node in front
  • Then, simultaneously, to the next node
  • etc

Throughput of round-robin schedule
  • The 3rd and 4th phases each require two timeframes
  • 7 timeframes are needed in total
  • Link throughput = 1 Gbps
  • Overall throughput = 25/7 × 1 Gbps ≈ 3.57 Gbps

A liquid schedule and its throughput
  • 6 timeframes of non-congesting transfers
  • Overall throughput = 25/6 × 1 Gbps ≈ 4.17 Gbps
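The two throughput figures follow directly from the timeframe counts:

```python
transfers = 25
link_gbps = 1.0

round_robin = transfers / 7 * link_gbps   # round-robin: 7 timeframes
liquid = transfers / 6 * link_gbps        # liquid schedule: 6 timeframes

print(round(round_robin, 2), round(liquid, 2))  # 3.57 4.17
```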

Optimization by first retrieving the teams of the skeleton
  • Speedup from skeleton optimization
  • The search space is reduced 9.5 times

Liquid schedule construction speed with our algorithm
  • 360 traffic patterns across Swiss-Tx network
  • Up to 32 nodes
  • Up to 1024 transfers
  • Comparison of our optimized construction algorithm with a MILP method (designed for discrete optimization problems)

Carrying real traffic patterns according to liquid schedules
  • The Swiss-Tx supercomputer cluster network is used for testing aggregate throughputs
  • Traffic patterns are carried out according to liquid schedules
  • Compared with topology-unaware round-robin or random schedules

Theoretical liquid and round-robin throughputs of 362 traffic samples
  • 362 traffic samples across the Swiss-Tx network
  • Up to 32 nodes
  • Traffic carried out according to a round-robin schedule reaches only 1/2 of the potential network capacity

Throughput of traffic carried out according to liquid schedules
  • Traffic carried out according to a liquid schedule practically reaches the theoretical throughput

Liquid scheduling conclusions: application, optimization, speedup
  • Liquid scheduling: relies on network topology and reaches the theoretical liquid throughput of the HPC network
  • Liquid schedules can be constructed in less than 0.1 sec for traffic patterns with 1000 transmissions (about 100 nodes)
  • Future work: dynamic traffic patterns and application in OBS


Fault-tolerant streaming with capillary routing

Path diversity and Forward Error Correction codes at the packet level

Structure of my talk
  • The advantages of packet level FEC in Off-line streaming
  • Solving the difficulties of Real-time streaming by multi-path routing
  • Generating multi-path routing patterns of various path diversity
  • Level of the path diversity and the efficiency of the routing pattern for real-time streaming


Decoding a file with Digital Fountain Codes
  • A file is divided into packets
  • Digital fountain code generates numerous checksum packets
  • Sufficient quantity of any checksum packets recovers the file
  • As when filling a cup: only collecting a sufficient amount of drops matters, not which particular drops arrive
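A toy rateless code in this spirit can be sketched with random XOR combinations decoded by Gaussian elimination over GF(2). This is an illustrative stand-in, not an actual digital fountain (LT/Raptor) code, and all names are hypothetical:

```python
import random

def make_checksum(source, rng):
    """One checksum packet: the XOR of a random nonempty subset of the
    k source packets, tagged with the subset as a bitmask."""
    k = len(source)
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(k)
    payload = bytearray(len(source[0]))
    for i in range(k):
        if mask >> i & 1:
            for j, b in enumerate(source[i]):
                payload[j] ^= b
    return mask, bytes(payload)

def try_decode(received, k):
    """Gaussian elimination over GF(2): returns the k source packets
    once the received checksums span the full space, else None."""
    pivots = {}
    for mask, payload in received:
        row = [mask, bytearray(payload)]
        for col, prow in pivots.items():     # reduce against known pivots
            if row[0] >> col & 1:
                row[0] ^= prow[0]
                for j in range(len(row[1])):
                    row[1][j] ^= prow[1][j]
        if row[0] == 0:
            continue                         # linearly dependent packet
        col = (row[0] & -row[0]).bit_length() - 1
        for prow in pivots.values():         # clear the new column elsewhere
            if prow[0] >> col & 1:
                prow[0] ^= row[0]
                for j in range(len(prow[1])):
                    prow[1][j] ^= row[1][j]
        pivots[col] = row
        if len(pivots) == k:
            return [bytes(pivots[c][1]) for c in range(k)]
    return None

# Split a message into k source packets and stream checksum packets
# until some sufficient subset of them recovers the file:
message = b"Parallel transmission of checksum packets tolerates loss"
k = 8
size = -(-len(message) // k)
padded = message.ljust(k * size, b"\0")
source = [padded[i * size:(i + 1) * size] for i in range(k)]

rng = random.Random(42)
received, decoded = [], None
while decoded is None:
    received.append(make_checksum(source, rng))
    decoded = try_decode(received, k)
```

Note that the decoder never cares *which* checksum packets arrive, only that enough linearly independent ones do, matching the cup-filling analogy.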

Transmitting large files without feedback across lossy networks using digital fountain codes
  • Sender transmits the checksum packets instead of the source packets
  • Interruptions cause no problems
  • The file is recovered once a sufficient number of packets is delivered
  • FEC in off-line streaming relies on time stretching

In Real-time streaming the receiver play-back buffering time is limited
  • While in off-line streaming the data can be held in the receiver buffer …
  • In real-time streaming the receiver is not permitted to keep data too long in the playback buffer

Long failures on a single path route
  • If the failures are short, then by transmitting a larger number of FEC packets the receiver can still obtain a sufficient number of checksum packets in time
  • If a failure lasts longer than the playback buffering limit, no FEC can protect the real-time communication



Applicability of FEC in Real-Time streaming by using path diversity
  • Losses can be recovered by extra packets:
  • received later (in off-line streaming)
  • received via another path (in real-time streaming)
  • Path diversity replaces time-stretching

[Figure: axes of time stretching and path diversity; the playback buffer limit separates reliable off-line streaming (via time stretching) from reliable real-time streaming (via path diversity)]


[Figure: an axis of routing patterns from single-path routing toward increasingly multi-path routing]

Creating an axis of multi-path patterns
  • Intuitively we imagine the path diversity axis as shown
  • High diversity decreases the impact of individual link failures, but uses many more links, increasing the overall failure probability
  • We must study many multi-path routing patterns of different diversity in order to see which effect dominates




Capillary routing creates solutions with different level of path diversity
  • As a method for obtaining multi-path routing patterns of various path diversity we rely on the capillary routing algorithm
  • For any given network and pair of nodes, capillary routing produces, layer by layer, routing patterns of increasing path diversity

[Figure: successive layers of capillary routing, each adding path diversity]


Reduce the maximal load of all links

Capillary routing – first layer
  • First take the shortest path flow and minimize the maximal load of all links
  • This will split the flow over a few parallel routes


Reduce the load of the remaining links

Capillary routing – second layer
  • Then identify the bottleneck links of the first layer
  • And minimize the maximal load of the remaining links
  • Continue similarly, until the full routing pattern is discovered layer by layer
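The first capillary layer (minimizing the maximal link load) can be illustrated by brute force on a tiny invented network; real capillary routing solves this as a sequence of linear programs rather than by enumeration:

```python
from itertools import product

# Hypothetical network: three candidate paths from s to t, as link tuples
paths = [("s-a", "a-t"), ("s-b", "b-t"), ("s-a", "a-b", "b-t")]

def max_link_load(split):
    """Maximal load over all links when path i carries split[i] flow."""
    load = {}
    for flow, path in zip(split, paths):
        for link in path:
            load[link] = load.get(link, 0.0) + flow
    return max(load.values())

# Enumerate unit-flow splits in steps of 0.1 and keep the minimax one
splits = (tuple(s / 10 for s in combo)
          for combo in product(range(11), repeat=len(paths))
          if sum(combo) == 10)
best = min(splits, key=max_link_load)
print(best, max_link_load(best))  # (0.5, 0.5, 0.0) 0.5
```

The result splits the flow evenly over the two disjoint routes and avoids the third path, whose links would only raise the bottleneck load.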

Capillary Routing Layers
  • Single network [1]
  • 4 routing patterns
  • Increasing path diversity


[Figure: an FEC block composed of source packets followed by redundant packets]

Application model: evaluating the efficiency of path diversity
  • To evaluate the efficiency of patterns with different path diversities we rely on an application model where:
  • The sender uses a constant amount of FEC checksum packets to combat weak losses, and
  • The sender dynamically increases the number of FEC packets in case of serious failures

Strong FEC codes are used in case of serious failures
  • When the packet loss rate observed at the receiver is below the tolerable limit, the sender transmits at its usual rate
  • But when the packet loss rate exceeds the tolerable limit, the sender adaptively increases the FEC block size by adding more redundant packets

[Figure: FEC blocks at packet loss rates of 3% and 30%]
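A hedged sketch of such an adaptive policy; the threshold, block size, and sizing formula are illustrative assumptions, not the thesis's exact scheme:

```python
import math

def redundant_packets(k, loss_rate, tolerable=0.05):
    """Number of redundant packets per FEC block of k source packets.
    At or below the tolerable loss rate a constant redundancy is used;
    above it, the block grows so that the expected number of delivered
    packets still covers the k source packets."""
    rate = max(loss_rate, tolerable)
    n = math.ceil(k / (1.0 - rate))      # total packets to transmit
    return n - k

print(redundant_packets(20, 0.03))   # 2  (usual transmission rate)
print(redundant_packets(20, 0.30))   # 9  (serious failure)
```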

Redundancy Overall Requirement
  • The overall amount of dynamically transmitted redundant packets during the whole communication time is proportional:
  • to the duration of communication and the usual transmission rate
  • to the single-link failure frequency and its average duration
  • and to a coefficient characterizing the given multi-path routing pattern (an analytical equation)
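Read as a formula, the stated proportionality amounts to a product of these factors. The symbols, units, and sample values below are my assumptions for illustration; the thesis gives the exact analytical equation:

```python
def overall_redundancy(duration, rate, failure_freq, failure_len, ror):
    """Total redundant packets over the whole communication:
    proportional to duration x usual rate, to the single-link failure
    frequency x its average duration, and to the routing pattern's
    ROR coefficient (all sample values hypothetical)."""
    return ror * duration * rate * failure_freq * failure_len

# Doubling the communication time doubles the redundancy requirement:
base = overall_redundancy(3600, 50, 0.01, 2.0, 25)
print(overall_redundancy(7200, 50, 0.01, 2.0, 25) / base)  # 2.0
```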


[Chart: average ROR rating (0–60) as a function of capillarization (layer 1 through layer 10), one curve per static streaming tolerance from 3.3% to 7.5%]

ROR as a function of diversity
  • Here is ROR as a function of the capillarization level
  • It is averaged over 25 different network samples (obtained from a MANET model)
  • The static tolerance of the streaming is 5.1%
  • Here is the ROR function for a stream with a static tolerance of 4.5%
  • And here are the ROR functions for static tolerances from 3.3% to 7.5%

ROR rating over 200 network samples
  • ROR coefficients for 200 network samples
  • Each section is the average over 25 network samples
  • Network samples are obtained from a random-walk MANET model
  • Path diversity obtained by capillary routing reduces the overall amount of FEC packets

Conclusions
  • Although strong path diversity increases the overall failure rate,
  • combined with erasure resilient codes,
  • high diversity of the main paths
  • and sub-paths is beneficial for real-time streaming (except in a few pathological cases)
  • With multi-path routing patterns, real-time applications can benefit greatly from the application of FEC
  • Future work: using an overlay network to achieve multi-path communication flows for VoIP over the public Internet
  • Considering coding also inside the network, not only at the edges, for energy saving in MANETs

Thank you!
  • Publications related to parallel I/O
  • [Gennart99] Benoit A. Gennart, Emin Gabrielyan, Roger D. Hersch, “Parallel File Striping on the Swiss-Tx Architecture”, EPFL Supercomputing Review 11, November 1999, pp. 15-22
  • [Gabrielyan00G] Emin Gabrielyan, “SFIO, Parallel File Striping for MPI-I/O”, EPFL Supercomputing Review 12, November 2000, pp. 17-21
  • [Gabrielyan01B] Emin Gabrielyan, Roger D. Hersch, “SFIO a striped file I/O library for MPI”, Large Scale Storage in the Web, 18th IEEE Symposium on Mass Storage Systems and Technologies, 17-20 April 2001, pp. 135-144
  • [Gabrielyan01C] Emin Gabrielyan, “Isolated MPI-I/O for any MPI-1”, 5th Workshop on Distributed Supercomputing: Scalable Cluster Software, Sheraton Hyannis, Cape Cod, Hyannis Massachusetts, USA, 23-24 May 2001
  • Conference papers on liquid scheduling problem
  • [Gabrielyan03] Emin Gabrielyan, Roger D. Hersch, “Network Topology Aware Scheduling of Collective Communications”, ICT’03 - 10th International Conference on Telecommunications, Tahiti, French Polynesia, 23 February - 1 March 2003, pp. 1051-1058
  • [Gabrielyan04A] Emin Gabrielyan, Roger D. Hersch, “Liquid Schedule Searching Strategies for the Optimization of Collective Network Communications”, 18th International Multi-Conference in Computer Science & Computer Engineering, Las Vegas, USA, 21-24 June 2004, CSREA Press, vol. 2, pp. 834-848
  • [Gabrielyan04B] Emin Gabrielyan, Roger D. Hersch, “Efficient Liquid Schedule Search Strategies for Collective Communications”, ICON’04 - 12th IEEE International Conference on Networks, Hilton, Singapore, 16-19 November 2004, vol. 2, pp 760-766
  • Papers related to capillary routing
  • [Gabrielyan06A] Emin Gabrielyan, “Fault-tolerant multi-path routing for real-time streaming with erasure resilient codes”, ICWN’06 - International Conference on Wireless Networks, Monte Carlo Resort, Las Vegas, Nevada, USA, 26-29 June 2006, pp. 341-346
  • [Gabrielyan06B] Emin Gabrielyan, Roger D. Hersch, “Rating of Routing by Redundancy Overall Need”, ITST’06 - 6th International Conference on Telecommunications, June 21-23, 2006, Chengdu, China, pp. 786-789
  • [Gabrielyan06C] Emin Gabrielyan, “Fault-Tolerant Streaming with FEC through Capillary Multi-Path Routing”, ICCCAS’06 - International Conference on Communications, Circuits and Systems, Guilin, China, 25-28 June 2006, vol. 3, pp. 1497-1501
  • [Gabrielyan06D] Emin Gabrielyan, Roger D. Hersch, “Reducing the Requirement in FEC Codes via Capillary Routing”, ICIS-COMSAR’06 - 5th IEEE/ACIS International Conference on Computer and Information Science, 10-12 July 2006, pp. 75-82
  • [Gabrielyan06E] Emin Gabrielyan, “Reliable Multi-Path Routing Schemes for Real-Time Streaming”, ICDT06, International Conference on Digital Telecommunications, August 29 - 31, 2006, Cap Esterel, Côte d’Azur, France
