
Optimal Sampling Strategies for Multiscale Stochastic Processes



Presentation Transcript


  1. Optimal Sampling Strategies for Multiscale Stochastic Processes Vinay Ribeiro, Rolf Riedi, Rich Baraniuk (Rice University)

  2. Motivation
  [Figure: probe packets over an interval [0, T]; a limited number of local samples used to estimate a global (space/time) average]
  • Probing for RTT (ping, TCP) and available bandwidth (pathload, pathChirp)
  • Packet trace collection
  • Traffic matrix estimation, overall traffic composition
  • Routing/connectivity analysis: sample a few routing tables
  • Sensor networks: deploy a limited number of sensors
  How do we optimally place N samples to estimate the global quantity?

  3. Multiscale Stochastic Processes
  [Figure: a quad-tree; the root is at the coarsest scale, the leaves at scale j]
  • Nodes at higher scales are averages over larger regions
  • Powerful structure: models LRD traffic, image data, natural phenomena
  • Root = global average; leaves = local samples
  • Choose N leaf nodes giving the best linear estimate (in terms of mean squared error) of the root node
  • Bunched, uniform, or exponential spacing?
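As a point of reference for the later slides, the "best linear estimate" criterion is the standard LMMSE error. Writing Σ_X for the covariance matrix of the chosen leaf vector X and r for the vector of covariances between the root and each chosen leaf, the quantity minimized over leaf choices is (a standard identity, not spelled out on the slides):

```latex
E(X) = \operatorname{Var}(\mathrm{root}) - r^{T} \Sigma_X^{-1} r
```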

  4. Independent Innovations Trees
  [Figure: a node splits its sample budget N into n and N − n between its two subtrees]
  • Each node is a linear combination of its parent and an independent random innovation
  • Recursive top-to-bottom algorithm
  • Concave optimization for the split at each node
  • Polynomial-time algorithm: O(N × depth + number of tree nodes)
  • Uniformly spaced leaves are optimal if innovations are i.i.d. within each scale
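As a minimal sketch of this model (the function name, binary branching, and all parameter values are my own illustrative choices, not from the talk), each level of the tree can be generated top-to-bottom as a scaled copy of the parent plus an independent Gaussian innovation:

```python
import numpy as np

def innovations_tree_leaves(depth, rho=1.0, innov_std=1.0, root_std=1.0, seed=0):
    """Generate the 2**depth leaf values of a binary independent
    innovations tree: each node is rho times its parent plus an
    independent Gaussian innovation."""
    rng = np.random.default_rng(seed)
    level = np.array([rng.normal(0.0, root_std)])  # scale 0: the root
    for _ in range(depth):
        # each node spawns two children with independent innovations
        parents = np.repeat(level, 2)
        level = rho * parents + rng.normal(0.0, innov_std, size=parents.size)
    return level

leaves = innovations_tree_leaves(depth=4)
print(leaves.shape)  # (16,)
```

With rho = 1 each leaf is the root plus the innovations along its path, so the covariance between two leaves depends only on how deep their lowest common ancestor sits.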

  5. Covariance Trees
  • Distance: two leaf nodes have distance j if their lowest common ancestor is at scale j
  • Covariance tree: the covariance between leaf nodes at distance j is c_j (a function of distance only); the covariance between the root and any leaf node is a constant, ρ
  • Positive correlation progression: c_j > c_{j+1}
  • Negative correlation progression: c_j < c_{j+1}
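A small sketch of this covariance structure (the indexing convention with the root at scale 0, the binary rather than quad tree, and the particular c values are my assumptions for illustration):

```python
import numpy as np

def lca_scale(i, j, depth):
    """Scale of the lowest common ancestor of leaves i and j in a
    binary tree (root at scale 0, leaves at scale `depth`)."""
    s = depth
    while i != j:          # climb until the two paths merge
        i //= 2
        j //= 2
        s -= 1
    return s

def covariance_tree_matrix(c, depth):
    """Leaf covariance matrix with cov(i, j) = c[scale of LCA(i, j)]
    and c[depth] on the diagonal (the leaf variance)."""
    n = 2 ** depth
    return np.array([[c[lca_scale(i, j, depth)] for j in range(n)]
                     for i in range(n)])

# Covariances indexed by LCA scale 0..depth; here leaves that share a
# deeper ancestor (i.e. are closer together) get a larger covariance.
sigma = covariance_tree_matrix(c=[1.0, 1.5, 2.0], depth=2)
print(sigma[0])  # [2.  1.5 1.  1. ]
```

These particular c values are the ones an additive innovations tree produces (root variance 1, innovation variance 0.5 per scale), so the matrix is a valid, positive semi-definite covariance.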

  6. Covariance Tree Result
  • Optimality proof: construct an independent innovations tree with a matching correlation structure
  • Worst-case proof: based on eigenanalysis

  7. Numerical Results
  • Covariance trees with fractional Gaussian noise correlation structure
  • Plots of normalized MSE vs. number of leaves for different leaf patterns
  [Figure: two panels of MSE curves, one for negative and one for positive correlation progression]

  8. Future Directions
  • Sampling: more general tree structures; non-linear estimates; non-tree stochastic processes; leverage related work in statistics (Bellhouse et al.)
  • Internet inference: how to determine correlation between traffic traces, routing tables, etc.
  • Sensor networks: jointly optimize with other constraints such as transmission power

  9. Water-Filling
  • X: an arbitrary set of leaf nodes; N: the size of X
  • L: the leaves of X on the left, R: the leaves of X on the right
  • E(X): linear minimum mean squared error of estimating the root using X
  • Split the budget as l leaves on the left and N − l on the right, and choose the optimal split l*_N
  • Repeat at the next lower scale with N replaced by l*_N (left) and N − l*_N (right)
  • Result: if innovations are identically distributed within each scale, then distribute leaves uniformly: l*_N = ⌊N/2⌋
  [Figure: the error contributions f_L(l) and f_R(l) plotted over l = 0, 1, 2, 3, 4]
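For the identically distributed case the recursion collapses to even splitting, which can be sketched as follows (the leaf indexing and the tie-breaking of odd budgets toward the right subtree are my own choices):

```python
def uniform_split(lo, hi, n):
    """Place n leaves in the index range [lo, hi) by the water-filling
    recursion for identically distributed innovations: give the left
    subtree floor(n/2) leaves (as on the slide), the rest to the right,
    then recurse in each half."""
    if n == 0:
        return []
    if hi - lo == 1:
        return [lo]
    mid = (lo + hi) // 2
    left = n // 2          # floor(n/2) to the left subtree
    return uniform_split(lo, mid, left) + uniform_split(mid, hi, n - left)

print(uniform_split(0, 8, 4))  # [1, 3, 5, 7]
```

The chosen leaves come out uniformly spaced across the 2^depth leaf positions, matching the result on the slide.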

  10. Covariance Tree Result
  • Result: for a positive correlation progression, choosing leaf nodes uniformly in the tree is optimal. However, for a negative correlation progression this same uniform choice is the worst case!
  • Optimality proof: construct an independent innovations tree with a matching correlation structure
  • Worst-case proof: the uniform choice maximizes the sum of the elements of Σ_X; eigenanalysis then shows that this implies the uniform choice minimizes the sum of the elements of Σ_X^{-1}
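The contrast between uniform and bunched placement can be checked numerically. In this sketch (the tree depth, the additive-innovations covariance values, and the two leaf patterns are my own illustrative choices) closer leaves are more correlated, the regime in which the slide's result favors uniform placement:

```python
import numpy as np

def lca_scale(i, j, depth):
    """Scale of the lowest common ancestor of leaves i and j
    (root at scale 0, leaves at scale `depth`)."""
    s = depth
    while i != j:
        i, j, s = i // 2, j // 2, s - 1
    return s

def root_lmmse(chosen, cov, root_var, root_leaf_cov):
    """LMMSE of estimating the root from the chosen leaves:
    Var(root) - r^T Sigma_X^{-1} r, with r constant across leaves."""
    sigma_x = cov[np.ix_(chosen, chosen)]
    r = np.full(len(chosen), root_leaf_cov)
    return root_var - r @ np.linalg.solve(sigma_x, r)

# Covariance tree built from an additive innovations tree (root
# variance 1, innovation variance 0.5 per scale): cov(i, j) grows with
# the depth of the common ancestor, so the matrix is valid (PSD).
depth = 3
c = [1.0 + 0.5 * s for s in range(depth + 1)]   # c[LCA scale]
n = 2 ** depth
cov = np.array([[c[lca_scale(i, j, depth)] for j in range(n)]
                for i in range(n)])

uniform = [0, 2, 4, 6]   # spread across the tree
bunched = [0, 1, 2, 3]   # packed into one subtree
mse_u = root_lmmse(uniform, cov, root_var=1.0, root_leaf_cov=1.0)
mse_b = root_lmmse(bunched, cov, root_var=1.0, root_leaf_cov=1.0)
print(round(mse_u, 3), round(mse_b, 3))  # 0.333 0.467
```

The spread-out leaves carry less redundant information about the root, so the uniform pattern achieves the smaller error, as the result predicts.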
