Presentation Transcript

AlaskaView Proof-of-Concept Test

Tom Heinrichs, UAF/GINA/ION
Grant Mah, USGS/EDC
Mike Rechtenbaugh, USGS/EDC
Jeff Harrison, UA/ARSC

Internet2 AmericaView

TAH, Oct 28, 2002

Proof-of-Concept Test Goals
  • Receive Landsat 7 data using the Alaska SAR Facility's (ASF) existing 11-meter antenna, receiver, and bitsync
  • Capture raw signal data to disk using the EDC-supplied data capture system
  • Transfer the raw data over Internet2 to the USGS EROS Data Center (EDC) for processing
  • Pick up finished image products for public distribution by Geographic Information Network of Alaska (GINA) servers
  • Turn an L7 scene around in less than 36 hours
    • Testing done in August 2002


Overview

[Diagram: end-to-end data flow. The NASA/USGS Landsat 7 ETM+ downlink is received by the Alaska SAR Facility antenna and receiver; the EDC/GINA data capture system records raw I & Q data, which is transferred by bbftp over Internet2 to the EDC ingest server; the EDC Landsat 7 processor generates Level 1G products and posts them to the EDC FTP server at the EROS Data Center, Sioux Falls, South Dakota; GINA data servers pick the products up by ftp and deliver them to Alaska users over http.]


Example of L7 Scene Acquired During the Test

[Image: Landsat 7 scene near Provideniya, Russian Far East; acquired August 15, 2002, Path 85, Row 15]

  • Time from reception to completed pickup: 26 hours
  • Success: turnaround well within the 36-hour standard


bbftp Transfers UAF to EDC
  • Average transfer rate (a quick consistency check follows):
    • 60 Mbit/s
    • 26.3 GB/hour
  • Average file size transferred:
    • 4.2 GB
    • range 0.4 to 5.5 GB
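
As a sanity check, the two reported rate figures and the average file size are mutually consistent. A short Python sketch (decimal GB and Mbit are assumed, since the slide does not specify):

    # Sanity check of the reported bbftp figures (decimal units assumed).
    rate_bps = 60e6  # reported average rate: 60 Mbit/s

    gb_per_hour = rate_bps / 8 * 3600 / 1e9
    print(f"{gb_per_hour:.1f} GB/hour")  # ~27.0, close to the reported 26.3

    # Time to move the average 4.2 GB raw capture at that rate:
    seconds = 4.2e9 * 8 / rate_bps
    print(f"{seconds / 60:.1f} minutes per average file")  # ~9.3 minutes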


Comparisons
  • bbftp: 6 Mbps per stream
  • ftp pickup from EDC: 1.7 Mbps (Sun ftp client)
  • ftp on the LAN: 82 Mbps (100 Mbps interface)
  • Host stacks (window sizes) are the limiting factor on the WAN (see the sketch below)
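
The window-size explanation can be made concrete: a single TCP stream can never exceed the window size divided by the round-trip time. A Python sketch using the buffer and RTT values from the host-stack slides (the 0.5 ms LAN round-trip time is an assumed typical value, not a measurement from the test):

    def tcp_ceiling_mbps(window_bytes, rtt_s):
        # Upper bound on single-stream TCP throughput: window / RTT.
        return window_bytes * 8 / rtt_s / 1e6

    # WAN: 64 kB window at the measured 110 ms RTT
    print(f"WAN: {tcp_ceiling_mbps(64 * 1024, 0.110):.1f} Mbit/s")   # ~4.8

    # LAN: same window at an assumed 0.5 ms RTT; the window no longer
    # binds, so the 100 Mbps interface becomes the bottleneck instead.
    print(f"LAN: {tcp_ceiling_mbps(64 * 1024, 0.0005):.0f} Mbit/s")  # ~1049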


iperf Testing
  • 85 Mbit/s maximum rate
  • The 100 Mbps LAN interface was the bottleneck
  • 4 to 6 Mbit/s per stream (a single-stream measurement sketch follows)
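
The deck does not show the iperf invocation, but the kind of measurement involved is straightforward. Below is a minimal loopback stand-in in Python for a single-stream memory-to-memory throughput test; the port and the 64 MiB volume are arbitrary illustrative choices, not values from the test:

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5201   # loopback; port is an arbitrary choice
    TOTAL = 64 * 1024 * 1024         # 64 MiB test volume
    CHUNK = 64 * 1024

    def sink():
        # Accept one connection and discard everything it sends.
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while conn.recv(CHUNK):
                    pass

    threading.Thread(target=sink, daemon=True).start()
    time.sleep(0.2)                  # give the listener time to start

    sent, buf = 0, b"\x00" * CHUNK
    start = time.monotonic()
    with socket.create_connection((HOST, PORT)) as sock:
        while sent < TOTAL:
            sock.sendall(buf)
            sent += CHUNK
    elapsed = time.monotonic() - start
    print(f"{sent * 8 / elapsed / 1e6:.1f} Mbit/s over one stream")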


Routes
  • RTT = 110 ms
  • Traverses multiple high-speed networks: VBNS+, Abilene, and the Pacific Northwest Gigapop

[uaftest@edclxw50 uaftest]$ /usr/sbin/traceroute ctsdev1.gina.alaska.edu
traceroute to ctsdev1.gina.alaska.edu (137.229.79.81), 30 hops max, 38 byte packets
 1  fe00-72a-edc.cr.usgs.gov (152.61.128.254)  0.700 ms  0.565 ms  0.514 ms
 2  edcnsbp1.cr.usgs.gov (152.61.213.1)  0.443 ms  0.386 ms  0.348 ms
 3  edcnsbp1.cr.usgs.gov (152.61.213.1)  0.362 ms  0.369 ms  0.354 ms
 4  vbp1-24a-edc.cr.usgs.gov (152.61.213.10)  0.495 ms  0.403 ms  0.388 ms
 5  ext-edc-100-1.cr.usgs.gov (152.61.100.1)  0.674 ms  0.629 ms  0.593 ms
 6  jn1-at1-1-0-2025.dng.vbns.net (166.61.9.13)  13.061 ms  13.105 ms  12.788 ms
 7  Abilene.dng.vbns.net (166.61.8.58)  17.452 ms  17.169 ms  17.287 ms
 8  kscy-ipls.abilene.ucaid.edu (198.32.8.5)  26.624 ms  26.651 ms  26.698 ms
 9  dnvr-kscy.abilene.ucaid.edu (198.32.8.13)  37.343 ms  37.333 ms  54.059 ms
10  sttl-dnvr.abilene.ucaid.edu (198.32.8.49)  65.565 ms  66.058 ms  65.581 ms
11  hnsp1-wes-so-5-0-0-0.pnw-gigapop.net (198.48.91.77)  66.115 ms  65.942 ms  65.653 ms
12  core1-wes-ge-0-0-0-0.pnw-gigapop.net (198.107.150.119)  66.066 ms  65.937 ms  66.084 ms
13  core1-ua-so-1-1-0-0.pnw-gigapop.net (198.107.144.86)  110.430 ms  110.587 ms  110.061 ms
14  198.32.40.132 (198.32.40.132)  121.736 ms  116.928 ms  110.173 ms
15  uafrr (137.229.2.3)  110.350 ms  110.559 ms  110.397 ms
16  ctsdev1.gina.alaska.edu (137.229.79.81)  110.062 ms  110.235 ms  110.298 ms


Host Stacks – General
  • TCP send and receive window (buffer) sizes are critical on high-latency connections
  • Send and receive buffers have both a default value and a maximum value; the maximum caps the size an application such as bbftp or iperf can request (see the sketch below)
  • The default size affects memory usage: too large a default consumes excessive memory
    • Total default usage = number of connections * default buffer size
  • Optimum: window = bandwidth * round-trip time
  • Currently these must be set by hand
  • Eagerly awaiting self-tuning stacks on all OSes, including Sun Solaris and SGI IRIX
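
A minimal sketch of the request-versus-maximum behavior, using Python's socket API purely for illustration (the test itself used bbftp and iperf): an application asks for a buffer size with setsockopt, and the kernel grants no more than the system maximum, which getsockopt reads back.

    import socket

    # Request a 1 MiB receive buffer, sized from window = bandwidth * RTT.
    requested = 1024 * 1024

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
    # The value read back is capped by the OS maximum (and on Linux is
    # doubled to account for bookkeeping overhead).
    granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print(f"requested {requested} bytes, kernel granted {granted} bytes")
    sock.close()

    # Optimum window for this test's path: 100 Mbit/s * 110 ms RTT
    bdp = int(100e6 / 8 * 0.110)
    print(f"bandwidth-delay product: {bdp} bytes (~{bdp // 1000} kB)")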


Host Stacks – For Test
  • On the SGI data capture system: send and receive buffers set to a default of 60 kB (max 512 kB)
  • On the Linux ingest server: send and receive buffers set to a default of 64 kB (max 64 kB)
  • On the Sun ftp client: default receive buffer set to 24 kB (max 1024 kB)
  • Round-trip time (RTT) measured by traceroute: 110 ms
  • Bandwidth = window / RTT (verified in the sketch below)
    • 64 kB / 110 ms = 4.5 Mbit/s
    • 24 kB / 110 ms = 1.7 Mbit/s (as observed during product pickup)
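
Plugging each configured default into bandwidth = window / RTT reproduces the observed rates. A quick Python check (kB taken as 1000 bytes; the slide's 4.5 Mbit/s figure sits between the ceilings implied by the 60 kB SGI and 64 kB Linux defaults):

    def ceiling_mbps(window_bytes, rtt_s):
        # Single-stream TCP throughput ceiling: window / RTT.
        return window_bytes * 8 / rtt_s / 1e6

    rtt = 0.110  # 110 ms RTT measured by traceroute

    print(f"60 kB (SGI default):   {ceiling_mbps(60_000, rtt):.1f} Mbit/s")  # ~4.4
    print(f"64 kB (Linux default): {ceiling_mbps(64_000, rtt):.1f} Mbit/s")  # ~4.7
    print(f"24 kB (Sun ftp):       {ceiling_mbps(24_000, rtt):.1f} Mbit/s")  # ~1.7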


Lessons Learned
  • Brute force (multiple parallel streams) can overcome stack tuning issues (see the estimate below)
  • Pay more attention to stack tuning at the outset
  • Examine actual window sizes with tcpdump to get the real story
  • The AlaskaView application is enabled by high-speed research networks including Internet2, the Pacific Northwest Gigapop, and VBNS+
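
A back-of-the-envelope estimate of the brute-force point, using the deck's own per-stream and interface figures (the stream counts are illustrative assumptions, not values reported in the test):

    def aggregate_mbps(streams, per_stream_mbps=6.0, link_mbps=100.0):
        # Each stream is window-limited; the interface caps the total.
        return min(streams * per_stream_mbps, link_mbps)

    for n in (1, 5, 10, 20):
        print(f"{n:2d} streams -> {aggregate_mbps(n):5.1f} Mbit/s")
    # About ten ~6 Mbit/s streams match the observed 60 Mbit/s average.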


Credits
  • USGS EROS Data Center
    • Grant Mah and Jason Williams, on-site
    • Tremendous support from EDC staff across the entire effort: engineering, IS support, network support, user services, and management
  • UAF Alaska SAR Facility
    • Mike Stringer, primary on-site
    • Brett Delana, Dayne Broderson, Carel Lane, and the ASF Operations staff
  • Alaska Ground Station
    • Equipment support coordinated by Richard Franchek
