
Kickoff

Value of Information. ARO MURI on Value-centered Information Theory for Adaptive Learning, Inference, Tracking, and Exploitation. VOI.


Presentation Transcript


  1. Value of Information: ARO MURI on Value-centered Information Theory for Adaptive Learning, Inference, Tracking, and Exploitation (VOI)
  Information Value in Registration and Sensor Management
  Doug Cochran, Arizona State University
  ARO/OSD MURI Review, UCLA, 28 October 2012
  Joint work with Steve Howard, Utku Ilkturk, Bill Moran, and Rodrigo Platte
  1st year review, UCLA 2012

  2. Summary of 2012 Research Thrusts
  • Gauge-invariant estimation in networks: in parameter estimation with a sensor network, which links contribute most?
  • Sensor management via Riemannian geometry: the metric structure induced on a parameter manifold by the Fisher information in estimation problems provides an approach to managing sensor configurations
  • Measurement selection for observability (started in collaboration with ARL): what are the considerations when deciding which linear measurement map to choose from a library at each stage of an iterative observation problem?

  3. Gauge-invariant Estimation in Networks: Tenets
  [Figure: eight-node sensor network graph]
  • The purpose of sensor networks is to sense, i.e., to enable detection, estimation, classification, and tracking
  • The value of network infrastructure is to enable sharing and fusion of data collected at different sensors
  • To exploit this, data at the nodes must be registered
    • Intrinsic data, e.g., clocks, platform orientation
    • Extrinsic data, collected by sensors
  • Can we quantify the value of adding particular links in terms that are meaningful to the sensing mission?

  4. Gauge-invariant Estimation in Networks: Registration on a Graph
  • Network graph: a directed graph G on a vertex set V(G) with edges E(G)
  • Vertex labels: an element of a Lie group G is associated with each node of G
  • Edge labels: an element of G on each edge e ∈ E(G), representing a noisy measurement of the difference between the values on the target vertex t(e) and the source vertex s(e)
  • Goal: estimate the connection, i.e., the true relative offsets between the vertex values (a numerical sketch follows below)
  • The state of the network is a G-valued function x on V(G), but it is never directly observed
    • If the network were aligned, this function would be constant (a flat connection)
  • What is observed is the connection w ∈ G^|E(G)| relative to the chosen gauge
    • In the absence of noise, for an aligned network this is the identity connection
    • If the network is not aligned, a gauge transformation can be found that takes w to the identity
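A minimal sketch of the registration problem for the simplest case, G = (R, +): edge labels are noisy differences of scalar node states, and the least-squares estimate (ML under Gaussian noise) is recovered from the graph Laplacian. The specific graph, noise level, and variable names are illustrative choices, not taken from the slides.

```python
# Registration on a graph, sketched for G = (R, +) with Gaussian edge noise.
# Node offsets are estimated from noisy pairwise differences, up to a global
# shift (the "gauge" freedom that gauge-invariant estimation quotients out).
import numpy as np

rng = np.random.default_rng(0)

# Directed edges (source, target) of a small network graph
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n_vertices = 4

x_true = rng.normal(size=n_vertices)   # unobserved node states
sigma = 0.1

# Incidence matrix B: one row per edge, -1 at the source, +1 at the target
B = np.zeros((len(edges), n_vertices))
for i, (s, t) in enumerate(edges):
    B[i, s], B[i, t] = -1.0, 1.0

# Noisy edge labels: measured offset between target and source values
w = B @ x_true + sigma * rng.normal(size=len(edges))

# ML / least-squares estimate: solve L x = B^T w with L = B^T B the graph
# Laplacian; the pseudoinverse handles the rank deficiency (gauge direction)
L = B.T @ B
x_hat = np.linalg.pinv(L) @ (B.T @ w)

# Fix the gauge by pinning vertex 0, then compare the estimated offsets
print(x_hat - x_hat[0])
print(x_true - x_true[0])
```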

  5. Gauge-invariant Estimation in Networks: Representative Results
  • If G is the real line and the noise on the edges of G is zero-mean Gaussian with covariance matrix R = σ²I:
    • The Fisher information is F = L/σ², where L is any cofactor of the Laplacian of G
    • det F = τ(G)/σ^(2(|V(G)|-1)), where τ(G) denotes the number of spanning trees in G (checked numerically in the sketch below)
    • The ML estimator of the connection x modulo any chosen gauge is unbiased with covariance F⁻¹ and determinant 1/det F
  • Additional results for large classes of Lie groups G, including compact, non-compact, abelian, and non-abelian cases
  • More precise general formulation of the notion of gauge-invariant estimation on graphs and properties of the estimators (Allerton 2012)
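As a numerical check of the determinant identity, the sketch below computes a cofactor of the Laplacian of the same small graph used in the previous sketch and compares det F with τ(G)/σ^(2(|V(G)|-1)); by the Matrix-Tree theorem, the cofactor's determinant is the number of spanning trees. The graph and σ are illustrative.

```python
# Checking det F = tau(G) / sigma^(2(|V|-1)), with F = L0 / sigma^2 and
# L0 any cofactor of the graph Laplacian (Matrix-Tree theorem).
import numpy as np

sigma = 0.1
# Undirected adjacency of the 4-node graph used in the previous sketch
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]])
n = A.shape[0]
L = np.diag(A.sum(axis=1)) - A

# Cofactor: delete any one row and the matching column (here the first)
L0 = L[1:, 1:]
tau = round(np.linalg.det(L0))   # number of spanning trees
F = L0 / sigma**2                # Fisher information from slide 5

print(tau)                       # 8 spanning trees for this graph
print(np.linalg.det(F))          # equals tau / sigma^(2(n-1)) ...
print(tau / sigma**(2 * (n - 1)))
```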

  6. Sensor Management via Riemannian Geometry
  • Mutual information and divergences have successful histories in sensor management as surrogates for actual cost functions (i.e., representing VoI)
  • Problem: given what is known, what sensor trajectory optimizes information gathering over the next T seconds?
  • Fact: the set of all Riemannian metrics on a manifold M is a (weak) infinite-dimensional Riemannian manifold M(M)
  • Observation 1: the Riemannian metrics on M corresponding to Fisher information constitute a submanifold of M(M)
  • Observation 2: for a particular problem of estimating a parameter from sensor data, each choice of sensor corresponds to a Riemannian metric on M lying in this submanifold of M(M)
  • Sensing action S → log-likelihood l_S on M → Fisher information F_S on M → Riemannian metric on M → point in M(M)

  7. Sensor Management via Riemannian Geometry: Estimation-theoretic Preliminaries
  • Consider a family of conditional densities p(x|θ) for a random variable x on X, parameterized by θ in a smooth d-dimensional manifold M
  • For given x, p(x|θ) defines the likelihood function on M
  • The log-likelihood l_x: M → R is defined by l_x(θ) = log p(x|θ)
  • The optimal test for θ versus θ′ given data x is a threshold test on the log-likelihood ratio l_x(θ) − l_x(θ′)
  • The Kullback-Leibler (KL) divergence D(θ || θ′) = E_θ[l_x(θ) − l_x(θ′)] is a natural measure of discrimination on M (illustrated numerically below)
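A small numerical illustration of these two objects for a scalar Gaussian family, where the KL divergence has the closed form (θ − θ′)²/2. The family, parameter values, and sample size are illustrative assumptions, not from the slides.

```python
# The log-likelihood ratio statistic and the KL divergence as its
# expectation, for the scalar Gaussian family p(x|theta) = N(theta, 1).
import numpy as np

rng = np.random.default_rng(1)
theta, theta_p = 0.7, 0.2

def log_lik(x, th):
    return -0.5 * (x - th) ** 2 - 0.5 * np.log(2 * np.pi)

# Optimal (Neyman-Pearson) test statistic: l_x(theta) - l_x(theta')
x = rng.normal(theta, 1.0, size=200_000)   # data drawn under theta
llr = log_lik(x, theta) - log_lik(x, theta_p)

# KL(theta || theta') = E_theta[llr]; closed form is (theta - theta')^2 / 2
print(llr.mean())
print(0.5 * (theta - theta_p) ** 2)
```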

  8. Sensor Management via Riemannian Geometry: Metric Geometry of Sensor Selection
  • A Riemannian metric on a smooth manifold is a (positive definite) inner product on each tangent space that varies smoothly from point to point
  • Although the KL divergence is not symmetric, it induces a Riemannian metric on M
  • This Fisher information metric is defined by g_ij(θ) = E_θ[(∂l_x/∂θ_i)(∂l_x/∂θ_j)]
  • The corresponding volume form vol_F = √(det F(θ)) dθ is the Jeffreys prior on M (computed for a simple family in the sketch below)
  • Under suitable assumptions, a metric on M(M) is given by ⟨h, k⟩_g = ∫_M tr(g⁻¹h g⁻¹k) dP, where dP = vol_F or, more generally, is a probability density on M
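A minimal sketch of the Fisher metric and Jeffreys prior for the Bernoulli family, where the Fisher information is 1/(θ(1−θ)) and the normalized Jeffreys prior is the Beta(1/2, 1/2) density. The grid and the normalization check are illustrative choices.

```python
# Fisher information metric and Jeffreys prior vol_F = sqrt(det F(theta))
# for the Bernoulli family p(x|theta).
import numpy as np

def fisher_bernoulli(theta):
    # g(theta) = E[(d/dtheta log p(x|theta))^2]; the score is
    # (x - theta) / (theta (1 - theta)), whose second moment gives:
    return 1.0 / (theta * (1.0 - theta))

thetas = np.linspace(0.01, 0.99, 99)
jeffreys = np.sqrt(fisher_bernoulli(thetas))   # unnormalized Jeffreys prior

# Riemann-sum normalizer over (0, 1); the exact value is pi (Beta(1/2, 1/2)),
# so the sum approaches pi as the grid extends toward the endpoints
dtheta = thetas[1] - thetas[0]
print(jeffreys.sum() * dtheta)   # ~pi, up to truncation near the endpoints
```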

  9. Sensor Management via Riemannian Geometry: Manifold of Sensor Configurations
  [Figure: the map g from the sensor manifold S to the manifold M(M) of Riemannian metrics on M]
  • Suppose that the sensor configuration is parameterized by a smooth manifold S
  • A configuration s ∈ S gives rise to a particular Riemannian metric g_s on M
  • The mapping g taking s to g_s will be assumed to be smooth and one-to-one (e.g., an immersion)
  • Although M(M) is infinite-dimensional, the trajectory planning takes place in a finite-dimensional submanifold that inherits its metric structure from M(M)
  • Geometrically, optimal navigation in S is via geodesics
  • The geometry here is defined directly in terms of Fisher information

  10. Sensor Management via Riemannian Geometry: Geodesics
  • The geodesic structure of M(M) has been studied outside the context of information geometry
  • The "energy integral" of a smooth curve γ: [0,1] → M(M) is E_γ = (1/2) ∫ ⟨γ′(t), γ′(t)⟩_γ(t) dt
  • Geodesics in M(M) are extremals of E_γ; they satisfy the geodesic equation γ″ + Γ(γ′, γ′) = 0
  • With g restricted to S, E_γ becomes an integral I_g on S, defined with respect to pullbacks of the quantities in the energy integral on M(M)
  • Geodesics in S are extremals of I_g and satisfy the corresponding pulled-back geodesic equation, where Γ denotes the Christoffel symbol of the Levi-Civita connection on M(M) (integrated numerically for a simple case below)
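As a concrete instance of the geodesic equation, the sketch below integrates geodesics of the Fisher-Rao geometry of the Gaussian family N(μ, σ²), which (up to a constant rescaling of μ) is the hyperbolic half-plane g = (dμ² + dσ²)/σ²; the Christoffel symbols of that metric give the right-hand side. The initial conditions are illustrative.

```python
# Geodesic ODE for the half-plane metric g = (dmu^2 + dsigma^2)/sigma^2,
# a rescaled model of the Fisher-Rao geometry of N(mu, sigma^2).
# Geodesics are semicircles centered on the mu-axis.
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(t, z):
    mu, sg, dmu, dsg = z
    # geodesic equation: x''^k + Gamma^k_ij x'^i x'^j = 0, written out
    # for the half-plane Christoffel symbols
    return [dmu, dsg, 2 * dmu * dsg / sg, (dsg**2 - dmu**2) / sg]

# Start at (mu, sigma) = (0, 1), shooting in the +mu direction
sol = solve_ivp(geodesic_rhs, (0.0, 2.0), [0.0, 1.0, 1.0, 0.0], rtol=1e-9)

mu, sg = sol.y[0], sol.y[1]
# For this start, the geodesic is the unit semicircle mu^2 + sigma^2 = 1
print(np.ptp(mu**2 + sg**2))   # ~0: the trajectory stays on the circle
```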

  11. Sensor Management via Riemannian Geometry: Computational Example
  • Mobile sensor platforms with bearings-only sensors seek to localize a stationary emitter
  • The parameter manifold M is R², the position of the emitter in the plane
  • Noise is independent von Mises with zero mean and concentration parameter κ
  • To simplify computation, the sensors are constrained to remain at right angles with respect to the emitter (the Fisher information for this setup is sketched below)
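A sketch of the Fisher information about the emitter position under this measurement model: per measurement, the information about a bearing under von Mises noise is κ·I₁(κ)/I₀(κ), and the chain rule maps it to position. The sensor positions and κ are illustrative; the example suggests why orthogonal bearings (the right-angle constraint) are favorable.

```python
# Fisher information about a planar emitter position from bearings-only
# sensors with i.i.d. von Mises noise of concentration kappa.
import numpy as np
from scipy.special import i0, i1

def bearing_fim(emitter, sensors, kappa):
    fim = np.zeros((2, 2))
    info_angle = kappa * i1(kappa) / i0(kappa)   # von Mises Fisher info
    for s in sensors:
        d = emitter - s
        r2 = d @ d
        # gradient of the bearing atan2(dy, dx) w.r.t. emitter position
        grad = np.array([-d[1], d[0]]) / r2
        fim += info_angle * np.outer(grad, grad)
    return fim

emitter = np.array([0.0, 0.0])
sensors = [np.array([10.0, 0.0]), np.array([0.0, 10.0])]  # bearings 90 deg apart
F = bearing_fim(emitter, sensors, kappa=5.0)
print(np.linalg.det(F))   # orthogonal bearings maximize det F (D-optimality)
```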

  12. Measurement Selection for Observability: New Topic in Collaboration with ARL
  [Figure: eight-node sensor network graph]
  Started summer 2012 during Utku Ilkturk's six-week visit to ARL; continuing in collaboration with Brian Sadler
  • In a stochastic dynamical system, suppose the state can be measured at each time instant via a measurement map that is selectable from a library
    • E.g., x ∈ R^d with x(k+1) = Ax(k) + w(k) and y(k) = C_s x(k) + n(k), k = 1, 2, …, where C_s is selectable
  • What is the most informative measurement sequence, subject to constraints, for "observation"
    • In terms of estimation fidelity for x(0)? (a greedy selection rule is sketched below)
    • For numerical conditioning?
    • For hypothesis testing on functions of x(0), with myopic, finite-horizon, and infinite-horizon objectives?
  • What if the dynamics are driven by an adversary?
  • How do biological systems manage measurement of dynamical information for sensorimotor control?
    • Brian's ongoing collaborations at UMD
  • What can we learn about quantifying value of information to support multi-faceted and dynamic tasks?
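A minimal sketch of one myopic selection rule for the linear model above, under simplifying assumptions not made on the slide (no process noise, unit-variance measurement noise): at each step, choose the C from the library that most increases the log-determinant of the accumulated Fisher information about x(0). The dynamics matrix and two-element library are illustrative.

```python
# Greedy myopic measurement selection for y(k) = C A^k x(0) + n(k):
# pick the C maximizing the log det of accumulated information about x(0).
import numpy as np

A = np.array([[0.9, 0.2],
              [0.0, 1.1]])
library = [np.array([[1.0, 0.0]]),    # measure the first state
           np.array([[0.0, 1.0]])]   # measure the second state

info = 1e-6 * np.eye(2)   # small prior to keep the determinant finite
Ak = np.eye(2)            # A^k, so the k-th measurement map of x(0) is C A^k
for k in range(6):
    def gain(C):
        M = C @ Ak
        return np.linalg.slogdet(info + M.T @ M)[1]
    best_idx = max(range(len(library)), key=lambda i: gain(library[i]))
    M = library[best_idx] @ Ak
    info = info + M.T @ M            # accumulate information about x(0)
    print(f"step {k}: chose C_{best_idx}")
    Ak = A @ Ak
```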
