
New Theory and Algorithms

for Scalable Data Fusion

Richard Baraniuk, Volkan Cevher

Rice University

Ron DeVore

Texas A&M University

Martin Wainwright

University of California-Berkeley

Michael Wakin

Colorado School of Mines

Goals

- sense
- communicate
- fuse
- infer (detect, recognize, etc.)
- predict
- actuate/navigate

network infrastructure

human intelligence

Challenges

- growing volumes of sensor data
- increasingly diverse data
- diverse and changing operating conditions
- increasing mobility

network infrastructure

human intelligence

- Sheer amount of data that must be acquired, communicated, and processed
- J sensors
- N samples/pixels per sensor
- Amount of data grows as O(JN)
- can lead to communication and computation collapse

- Must fuse diverse data types

- Thrust 1: Scalable data models
- Thrust 2: Randomized dimensionality reduction
- Thrust 3: Scalable inference algorithms
- Thrust 4: Scalable data fusion
- Thrust 5: Scalable learning algorithms

- Unifying theme: low-dimensional signal structure
- Sparse signal models
- Graphical models
- Manifold models

pixels

large wavelet coefficients

(blue = 0)

K-dim subspaces

- Image articulation manifold (IAM)
- Manifold dimension L = # imaging parameters
- If images are smooth, then the manifold is smooth

articulation parameter space

- Goal: preserve information from x in y
- One avenue: stable embedding
- Key question: how small can M be?

signal from sparse / graphical / manifold model

measurements

K-dim subspaces

K-dim subspaces

- Stable embedding <=> restricted isometry property (RIP) from compressive sensing
- Stability whp if M = O(K log(N/K))
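As an illustrative sketch (not from the slides), the stable-embedding property can be checked numerically: a random Gaussian matrix with M on the order of K log(N/K) rows approximately preserves distances between K-sparse signals. All dimensions below are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 1024, 10, 120   # ambient dimension, sparsity, number of measurements

# Random Gaussian measurement matrix, scaled so E||Phi v||^2 = ||v||^2
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

def k_sparse():
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    return x

# Distances between pairs of K-sparse signals are approximately preserved
ratios = []
for _ in range(200):
    x, z = k_sparse(), k_sparse()
    ratios.append(np.linalg.norm(Phi @ (x - z)) / np.linalg.norm(x - z))
print(min(ratios), max(ratios))  # both concentrate near 1: a stable embedding
```

The concentration tightens as M grows; with M far below K log(N/K), the ratios spread out and the embedding is no longer stable.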

[Figure: single-pixel camera — M randomized measurements via N mirrors; target image with N=65536 pixels, recovered from M=1300 measurements (2%) and M=11000 measurements (16%)]

K-dim subspaces

- Example: K-sparse signals

- Example: K-sparse signals with correlations
- Rules out some/many subspaces
- Stability whp with M as low as O(K)

K-dim subspaces

- Model clustering of significant pixels in the space domain using an Ising Markov random field
- Example: recovery of background-subtracted video from randomized measurements

[Figure: target vs. Ising-model recovery, CoSaMP recovery, and LP (FPC) recovery]
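For reference, CoSaMP (one of the recovery algorithms compared above) admits a compact sketch. The dimensions and stopping tolerance below are illustrative, not those of the video experiment.

```python
import numpy as np

def cosamp(Phi, y, K, n_iter=20, tol=1e-10):
    """Minimal CoSaMP sketch: recover a K-sparse x from noiseless y = Phi @ x."""
    M, N = Phi.shape
    x = np.zeros(N)
    for _ in range(n_iter):
        proxy = Phi.T @ (y - Phi @ x)                 # correlate residual
        omega = np.argsort(np.abs(proxy))[-2 * K:]    # 2K largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x))      # merge supports
        b, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)  # least squares on support
        x = np.zeros(N)
        keep = np.argsort(np.abs(b))[-K:]             # prune back to K terms
        x[T[keep]] = b[keep]
        if np.linalg.norm(y - Phi @ x) < tol:
            break
    return x

rng = np.random.default_rng(1)
N, K, M = 256, 8, 100
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_hat = cosamp(Phi, Phi @ x_true, K)
print(np.linalg.norm(x_hat - x_true))  # near zero when M is large enough
```

The Ising-model variant in the slide replaces the "largest 2K entries" step with a support estimate that favors spatially clustered pixels.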

- Can stably embed a compact, smooth L-dimensional manifold whp if M = O(L log N)
- Recall that manifold dimension L is very small for many apps (# imaging parameters)
- Constants scale with the manifold's
- condition number (curvature)
- volume

Many applications involve signal inference, not reconstruction: detection < classification < estimation < reconstruction

Good news: RDR supports efficient learning, inference, and processing directly on compressive measurements

Random projections ~ sufficient statistics for signals with concise geometrical structure

Simple object classification problem

AWGN: nearest neighbor classifier

Common issue:

L unknown articulation parameters

Common solution: matched filter

find nearest neighbor under all articulations

Classification with L unknown articulation parameters

Images are points in R^N

Classify by finding the closest target template to the data for each class

distance or inner product

data

target templates from a generative model or training data (points)

Detection/classification with L unknown articulation parameters

Images are points in R^N

Classify by finding the closest target template to the data

As the template articulation parameter changes, the points map out an L-dim nonlinear manifold

Matched filter classification = closest manifold search

data

articulation parameter space

Recall: stable manifold embedding whp using

M = O(L log N) random measurements

Enables parameter estimation and MF detection/classification directly on randomized measurements

recall L very small in many applications (# articulations)

Naïve approach

take M CS measurements,

recover N-pixel image from CS measurements (expensive)

conventional matched filter

Worldly approach

take M CS measurements,

matched filter directly on CS measurements (inexpensive)
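The "smashed filter" idea above can be sketched in a few lines: project the articulated templates and the data with the same measurement matrix, then do a nearest-template search in the M-dimensional measurement space. The random templates and noise level below are stand-ins, not the slide's imagery.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 4096, 50   # pixels per image, compressive measurements

# Stand-in "articulated templates": in a real system these would be images
# of each target class rendered at sampled shifts/rotations
templates = {"class1": rng.standard_normal((30, N)),
             "class2": rng.standard_normal((30, N))}

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # shared measurement matrix

def smashed_filter(y, templates, Phi):
    """Nearest-template classification directly on measurements y = Phi @ x."""
    best_class, best_dist = None, np.inf
    for c, T in templates.items():
        dist = np.linalg.norm(T @ Phi.T - y, axis=1).min()  # nearest articulation
        if dist < best_dist:
            best_class, best_dist = c, dist
    return best_class

# Noisy compressive measurement of an image from class 2
y = Phi @ templates["class2"][17] + 0.05 * rng.standard_normal(M)
print(smashed_filter(y, templates, Phi))
```

No N-pixel image is ever reconstructed; all distance computations happen on M-dimensional vectors, which is what makes the approach inexpensive.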

Random shift and rotation (L=3 dim. manifold)

WG noise added to measurements

Goals: identify the most likely shift/rotation parameters and the most likely class

[Plots: classification rate (%) and avg. shift estimate error vs. number of measurements M, at increasing noise levels]

- Sparse signal models
- multi-signal sparse models [Wakin, next talk]

- Manifold models
- joint manifold models [next]

- Graphical models

- Example: Network of J cameras observing an articulating object
- Each camera’s images lie on an L-dim manifold in R^N
- How to efficiently fuse imagery from J cameras to solve an inference problem while minimizing network communication?

- Fusion: stack corresponding image vectors taken at the same time
- Fused images still lie on an L-dim “joint manifold” in R^JN

- Given J submanifolds that are
- L-dimensional
- homeomorphic (we can continuously map between any pair)

- Define the joint manifold as the concatenation of the J submanifolds

- Joint manifold inherits properties from component manifolds
- compactness
- smoothness
- volume
- condition number

- Translate into algorithm performance gains
- Bounds are often loose in practice (good news)

- Can take randomized measurements of stacked images and process or make inferences

w/ unfused RDR

w/ unfused and no RDR

- Can compute randomized measurements in-place
- ex: as we transmit to collection/processing point
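The in-place computation amounts to a block-diagonal measurement of the stacked joint-manifold point: each camera applies its own random matrix locally, and fusion is just concatenation. A minimal numerical sketch with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
J, N, M = 3, 1000, 40   # cameras, pixels per camera, measurements per camera

images = [rng.standard_normal(N) for _ in range(J)]                  # one per camera
Phis = [rng.standard_normal((M, N)) / np.sqrt(M) for _ in range(J)]  # local matrices

# Each camera compresses locally and transmits only M numbers; the fusion
# center simply concatenates them
y_fused = np.concatenate([Phi_j @ x_j for Phi_j, x_j in zip(Phis, images)])

# This equals measuring the stacked joint-manifold point with one
# block-diagonal matrix -- no camera ever sends its N raw pixels
Phi_joint = np.zeros((J * M, J * N))
for j, Phi_j in enumerate(Phis):
    Phi_joint[j * M:(j + 1) * M, j * N:(j + 1) * N] = Phi_j
x_joint = np.concatenate(images)

print(np.allclose(y_fused, Phi_joint @ x_joint))  # True
print(y_fused.size)  # J*M = 120 numbers instead of J*N = 3000 pixels
```

Because the joint measurement matrix never needs to be materialized at any single node, network communication scales with J*M rather than J*N.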

- J=3 CS cameras, each N=320x240 resolution
- M=200 random measurements per camera
- Two classes
- truck w/ cargo
- truck w/ no cargo

- Goal: classify a test image

class 1

class 2

- Smashed filtering
- independent
- majority vote
- joint manifold

Joint Manifold

manifold learned from data

manifold learned from RDR

joint manifold learned from data

joint manifold learned from RDR

- Sparse signal models
- learning new sparse dictionaries

- Manifold models
- Manifold lifting [Wakin, next talk]
- Manifold learning as high-dimensional function estimation [DeVore]

- Graphical model learning

- Learn Gaussian graphical model by learning inverse covariance matrix [Wainwright]
- Learn the best-fitting sparse model (in terms of the number of edges) via L1 optimization
- Provably consistent
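A minimal sketch of the neighborhood-selection idea behind this thrust (Meinshausen–Buehlmann style nodewise lasso, not Wainwright's actual algorithm or code): regress each variable on all the others with an L1 penalty and declare an edge wherever a coefficient survives. The toy chain graph and penalty `lam` are made-up illustration values.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent lasso: min 0.5*||y - X b||^2 + lam*||b||_1."""
    beta = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]               # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def learn_edges(X, lam):
    """Declare edge (i, j) if either nodewise lasso keeps the other node."""
    p = X.shape[1]
    edges = set()
    for i in range(p):
        others = [j for j in range(p) if j != i]
        beta = lasso_cd(X[:, others], X[:, i], lam)
        for b, j in zip(beta, others):
            if abs(b) > 1e-6:
                edges.add((min(i, j), max(i, j)))
    return sorted(edges)

# Toy chain graph 0-1-2-3 encoded in a sparse inverse covariance matrix
rng = np.random.default_rng(4)
p = 4
Theta = np.eye(p)
for i in range(p - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = 0.4
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=2000)
edges = learn_edges(X, lam=250.0)
print(edges)  # the chain edges should be recovered
```

The sparsity pattern of the recovered edge set matches the nonzeros of the inverse covariance matrix, which is exactly the object the L1-based learning above targets.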

- Re-think data acquisition/processing pipeline
- Exploit low-dimensional geometrical structure of
- sparse signal models
- graphical signal models
- manifold signal models

- Scalable algorithms via randomized dim. reduction
- Progress to date:
- multi-signal sparse models
- smashed filter for inference
- joint manifold model for fusion
- manifold lifting
- graphical model learning

dsp.rice.edu