
Exploring the complexity limits of joint data detection and channel estimation



Presentation Transcript


  1. Exploring the complexity limits of joint data detection and channel estimation. Achilleas Anastasopoulos, EECS Department, University of Michigan, Ann Arbor, MI. University of Parma, Italy, May 3, 2004

  2. Overview • Motivation • Theory • Exact detection/estimation in less than exponential complexity for a class of problems • Specific example: sequence and symbol-by-symbol detection in highly correlated fading • Application • the family of “ultra-fast” decoders • Extensions: arbitrary correlation, space-time codes, etc. • Conclusions

  3. Motivation: a simple problem • Given complex numbers [equations not transcribed], find the values that maximize the stated quantity • Solution: [equation not transcribed] (the problem splits into independent, smaller maximizations)

  4. Motivation: a harder problem • Given complex numbers [equations not transcribed], find the values that maximize the stated quantity

  5. Motivation: a harder problem • Solution: more difficult because we cannot decompose it into 3 smaller problems
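The equations on slides 3–5 were not preserved in the transcript. A plausible reconstruction, consistent with the constant-fading M-PSK setting introduced later in the talk (the symbols z_k, a_k and the PSK alphabet are assumptions, not taken from the slides):

```latex
% Simple problem (a guess at the slide's equations): given z_1,...,z_N in C,
% choose M-PSK symbols a_k to maximize a sum of per-symbol terms; it splits
% into N independent maximizations:
\[
\max_{a_1,\dots,a_N} \sum_{k=1}^{N} \operatorname{Re}\{z_k a_k^{*}\}
  \;=\; \sum_{k=1}^{N} \max_{a_k} \operatorname{Re}\{z_k a_k^{*}\}.
\]
% Harder problem: maximize the magnitude of the coupled sum; no symbol-by-symbol
% decomposition is possible:
\[
\max_{a_1,\dots,a_N} \Bigl|\sum_{k=1}^{N} z_k a_k^{*}\Bigr|^{2}.
\]
```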

  6. Motivation: a communication problem • Data detection in correlated fading (unknown to the receiver): sequence of M-PSK symbols, complex Gaussian random process (fading), AWGN • Maximum Likelihood Sequence Detection (MLSqD): [equation not transcribed]
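For concreteness, one standard way to write the observation model and the MLSqD rule this slide refers to (the notation r_k, a_k, c_k is an assumption, not taken from the slide):

```latex
% Observation model: M-PSK symbols a_k, complex Gaussian fading process c_k, AWGN n_k:
\[
r_k \;=\; c_k\, a_k + n_k, \qquad k = 1,\dots,N.
\]
% MLSqD with the fading unknown to the receiver: average the likelihood over
% the fading process and maximize over the data sequence:
\[
\hat{\mathbf{a}}_{\mathrm{MLSqD}}
  \;=\; \arg\max_{\mathbf{a}}\ p(\mathbf{r}\mid\mathbf{a})
  \;=\; \arg\max_{\mathbf{a}}\ \mathbb{E}_{\mathbf{c}}\!\left[\, p(\mathbf{r}\mid\mathbf{a},\mathbf{c}) \,\right].
\]
```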

  7. Motivation: a communication problem • Since the transition metric depends on the entire sequence, no dynamic programming (e.g., Viterbi algorithm) solution is available → the complexity of the optimal solution is exponential in N (i.e., test every possible sequence of length N) • However, if the channel coherence time is approximately L, the metric can be truncated to a memory of roughly L symbols • Conclusion: the complexity of approximate algorithms is roughly exponential in L (counterintuitive: the slower the channel, the more complex the decoding!?) • Why is this problem relevant today?

  8. Coding in channels with memory • [figure: coded bits linked to channel constraints and to code constraints, e.g., parity-check equations] • According to the traditional belief, generation of the exact messages for decoding has exponential complexity w.r.t. the channel coherence time…

  9. Questions • How accurate is the conventional wisdom that exact joint detection and estimation requires exponential complexity with respect to the channel coherence time? • What is the connection with the problem of decoding turbo-like codes at low-SNR? • What is the impact of the above question on the design of near-optimal approximate algorithms suited for ultra-fast integrated circuit implementation?

  10. The basic problem • In order to present all the ideas, let’s look at the simple problem of MAPSqD of an uncoded sequence in highly correlated fading • All results generalize to the case of symbol-by-symbol soft metric generation (MAPSbSD). • A concrete example will be used throughout the talk.

  11. Working example • Uncoded M-PSK data sequence in complex Gaussian fading (fading affects both amplitude and phase). • Fading remains constant over N symbols (time selective fading with long memory)

  12. Perfect CSI case • The sequence-detection metric [equation not transcribed] can be decomposed into N simple, symbol-by-symbol minimum-distance problems

  13. MAPSqD Solution (no CSI) • The complexity of maximizing the sequence metric [equation not transcribed] seems exponential w.r.t. N (the metric cannot be decomposed) • For M-PSK, each of the M^N sequences needs to be tested explicitly.
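A hedged reconstruction of the contrast between slides 12 and 13 for the constant-fading working example (the specific expressions are reconstructions under the stated Gaussian and unit-energy assumptions, not transcriptions of the slides):

```latex
% Perfect CSI (fading c known): the sequence metric decomposes symbol by symbol:
\[
\min_{\mathbf{a}} \sum_{k=1}^{N} \bigl| r_k - c\, a_k \bigr|^{2}
  \;=\; \sum_{k=1}^{N} \min_{a_k} \bigl| r_k - c\, a_k \bigr|^{2}.
\]
% No CSI (c complex Gaussian, constant over the block): averaging over c couples
% all the symbols; for unit-energy M-PSK the sequence-detection rule becomes
% equivalent to
\[
\hat{\mathbf{a}} \;=\; \arg\max_{\mathbf{a}} \Bigl| \sum_{k=1}^{N} r_k\, a_k^{*} \Bigr|^{2},
\]
% which, at first sight, requires testing all M^N sequences.
```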

  14. Approximations • Approximate solutions (developed over the last 15 years): • Memory truncation: • Linear predictive receiver [LoMo90], [YuPa95], etc. • Non-exhaustive search: PSP, M-algorithm, T-algorithm [RaPoTz95], [SeFi95], etc. • Expectation-Maximization [GeHa97] • They are all effective (especially for small channel memory)

  15. Basic contribution of this work • The exact MAPSqD solution for this problem (and other problems of interest in communications) can be obtained with only polynomial complexity w.r.t. N • Contrary to traditional belief, the slower the channel, the smaller the complexity • The proof of this statement hints at approximate solutions with linear (and very small) complexity w.r.t. N

  16. Sketch of proof • First, transform the MAPSqD problem to a more complicated double-maximization problem • This is an exact equality • Average likelihood → generalized likelihood
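The slide's equality was shown as an equation image. For the constant-fading example above, one way such an exact equivalence can arise (a reconstruction under the same assumptions, not the slide's own derivation):

```latex
% The coupled metric can be re-opened as a maximization over an auxiliary
% channel-like parameter \lambda:
\[
\Bigl|\sum_{k=1}^{N} r_k a_k^{*}\Bigr|^{2}
  \;=\; N \,\max_{\lambda\in\mathbb{C}} \Bigl[\, 2\operatorname{Re}\Bigl\{\lambda^{*} \sum_{k=1}^{N} r_k a_k^{*}\Bigr\} - N|\lambda|^{2} \Bigr],
\]
% so maximizing the channel-averaged likelihood over a equals an exact double
% maximization over the data sequence and the parameter:
\[
\max_{\mathbf{a}}\ p(\mathbf{r}\mid\mathbf{a})
  \;\Longleftrightarrow\;
  \max_{\mathbf{a}}\ \max_{\lambda\in\mathbb{C}} \Bigl[ -\sum_{k=1}^{N} \bigl| r_k - \lambda\, a_k \bigr|^{2} \Bigr].
\]
```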

  17. More definitions… • Sequence-conditioned parameter estimate (Least Squares solution) • Parameter-conditioned sequence estimate (linear complexity w.r.t. N ) • Order of maximization: two possible approaches
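In the same assumed notation, the two conditional estimates on this slide would read roughly as follows (a sketch, not the slide's transcribed formulas):

```latex
% Sequence-conditioned parameter estimate (least squares over the block):
\[
\hat{\lambda}(\mathbf{a})
  \;=\; \arg\min_{\lambda\in\mathbb{C}} \sum_{k=1}^{N} \bigl| r_k - \lambda\, a_k \bigr|^{2}
  \;=\; \frac{1}{N}\sum_{k=1}^{N} r_k\, a_k^{*} \qquad (|a_k| = 1).
\]
% Parameter-conditioned sequence estimate: decomposes symbol by symbol, hence
% linear complexity in N:
\[
\hat{a}_k(\lambda) \;=\; \arg\min_{a\,\in\,\text{M-PSK}} \bigl| r_k - \lambda\, a \bigr|^{2}, \qquad k = 1,\dots,N.
\]
```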

  18. Approach A: Estimator-correlator • [block diagram: for each candidate data sequence, a parameter estimator feeds a metric function; an arg max over all sequences selects the winner] • Obviously exponential complexity w.r.t. N

  19. Approach B: Parameter space scan • [block diagram: for each candidate parameter value, a known-parameter detector feeds a metric function; an arg max over the parameter space selects the winner] • Unfortunately, this method has infinite complexity (the parameter space is a continuum)

  20. Key idea: sufficient set for detection • The known-parameter detector's decision remains constant over subsets of the parameter space (this implies an optimal partition of the parameter space) • [block diagram: one known-parameter detector per region of the partition]

  21. Key idea (continued) • Since the decision is constant over each region of the partition, combine this with the estimator-correlator structure (Approach A) • [block diagram: each region's known-parameter detector produces a candidate sequence; a parameter estimator and metric function score each candidate, and an arg max selects the winner]

  22. Almost there… • The sufficient set size is |T| ~ N², AND • there is a recursive algorithm to find T with complexity ~ N², THUS • the exact MAPSqD solution can be found with polynomial complexity w.r.t. N. QED
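A conceptual Python sketch of the receiver structure slides 18–22 describe: run the known-parameter detector once per candidate parameter value and keep the sequence with the best noncoherent metric. The recursive construction of the exactly sufficient set T is not reproduced here; the uniform phase grid, the function names, and the BPSK toy setup are illustrative assumptions.

```python
import numpy as np

def known_parameter_detect(r, lam, psk):
    """Per-symbol minimum-distance detection assuming the fading equals lam."""
    d = np.abs(r[:, None] - lam * psk[None, :])   # |r_k - lam*a| for every candidate symbol a
    return psk[np.argmin(d, axis=1)]

def noncoherent_metric(r, a):
    """Generalized-likelihood sequence metric |a^H r|^2 (unit-energy symbols)."""
    return np.abs(np.vdot(a, r)) ** 2

def detect_over_candidate_set(r, T, psk):
    """Run the known-parameter detector for each candidate fading value in T
    and keep the data sequence with the largest noncoherent metric."""
    candidates = (known_parameter_detect(r, lam, psk) for lam in T)
    return max(candidates, key=lambda a: noncoherent_metric(r, a))

# Toy usage: BPSK block with a single unknown complex gain.
rng = np.random.default_rng(0)
psk = np.array([1.0 + 0j, -1.0 + 0j])
N = 16
a_true = psk[rng.integers(0, 2, N)]
c = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
r = c * a_true + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Illustrative candidate set: a uniform phase grid (a stand-in for the paper's
# exactly sufficient set T).  For PSK the per-symbol decision depends only on the
# phase of lam.  Uncoded noncoherent detection is inherently ambiguous up to a
# symbol rotation, so a_hat may equal -a_true.
T = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
a_hat = detect_over_candidate_set(r, T, psk)
```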

  23. Parameter space partitioning example • For each received symbol (for BPSK), a parameter-space boundary is defined by an equation [not transcribed] which represents a line in the complex plane

  24. Parameter space partitioning example • [figure: the complex parameter plane partitioned into regions by the boundary lines]
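The boundary equation on slide 23 was not transcribed. Reasoning from the per-symbol known-parameter decision, it would plausibly take the following form (a reconstruction, not the slide's own expression):

```latex
% With BPSK symbols a_k in {+1,-1}, the known-parameter decision for symbol k is
\[
\hat{a}_k(\lambda) \;=\; \operatorname{sign}\bigl( \operatorname{Re}\{ \lambda^{*} r_k \} \bigr),
\]
% so the decision flips across the boundary
\[
\operatorname{Re}\{ \lambda^{*} r_k \} \;=\; 0,
\]
% a straight line in the complex lambda-plane; the N such lines induce the
% partition of the parameter space.
```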

  25. Connection with Sphere Decoding • No connection whatsoever with sphere decoding • Sphere decoding: worst case complexity is exponential, but average complexity (at high SNR) is polynomial (for sufficiently small N) • This approach: proves (worst-case/average-case) polynomial complexity irrespective of SNR • However, sphere decoding is applicable to a much wider class of problems • Possible research direction: combine the two approaches

  26. Symbol-by-Symbol Detection • What if we need to generate SbS reliability information (e.g., for turbo detection)? • Define a suitable metric: marginalize the sequence metric over the nuisance parameters • If max is chosen as the marginalization operator (over sequences), the problem becomes very similar to MAPSqD and can be solved with polynomial complexity.
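In the same assumed notation, a max-marginalized per-symbol metric could look roughly like this (a reconstruction, not the slide's exact expression):

```latex
% Per-symbol soft metric for symbol k taking the value x, with max (rather than
% sum) used to marginalize over the remaining symbols of the sequence:
\[
\Lambda_k(x) \;\propto\; \max_{\mathbf{a}\,:\,a_k = x}\ p(\mathbf{r}\mid\mathbf{a})\, P(\mathbf{a}),
\]
% where p(r | a) is the channel-averaged sequence likelihood used above.
```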

  27. Symbol-by-Symbol Detection • The set T is no longer sufficient • A sufficient set can be found by expanding T, i.e., flipping one bit at a time in each sequence in T • This is an exact version of the bit-flipping (or toggle/swap) approximate algorithms

  28. Practical Implications: the ultra-fast receiver • Previous results have mostly conceptual value • However, the optimal algorithm hints at some ultra-fast approximate solutions • Instead of finding the optimal partition, use an arbitrary partition of the parameter space • This implies an approximate set T’ • The rest of the decoder remains as in the exact case (i.e., expansion of T’ by bit flipping, etc) • No multiplication operations required
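A minimal Python sketch of the approximate "ultra-fast" flow described on this slide: an arbitrary partition of the parameter space gives a candidate set T', each region yields per-symbol decisions, and the survivor is refined by flipping one symbol at a time. The grid size, function names, and stopping rule are assumptions, and no attempt is made at the multiplication-free implementation.

```python
import numpy as np

def noncoherent_metric(r, a):
    """Sequence metric |a^H r|^2 (unit-energy PSK symbols assumed)."""
    return np.abs(np.vdot(a, r)) ** 2

def flip_expand(a, psk):
    """All sequences obtained from `a` by changing exactly one symbol."""
    neighbours = []
    for k in range(len(a)):
        for s in psk:
            if s != a[k]:
                b = a.copy()
                b[k] = s
                neighbours.append(b)
    return neighbours

def ultra_fast_detect(r, psk, n_regions=8, n_passes=2):
    """Arbitrary phase-grid partition -> per-symbol decisions -> bit-flip refinement."""
    # One representative fading value per region of the (arbitrary) partition T'.
    grid = np.exp(2j * np.pi * np.arange(n_regions) / n_regions)
    candidates = []
    for lam in grid:
        d = np.abs(r[:, None] - lam * psk[None, :])
        candidates.append(psk[np.argmin(d, axis=1)])
    best = max(candidates, key=lambda a: noncoherent_metric(r, a))
    # Expand the surviving candidate by flipping one symbol at a time, as on
    # slides 27-28, and keep any improvement.
    for _ in range(n_passes):
        challenger = max(flip_expand(best, psk), key=lambda a: noncoherent_metric(r, a))
        if noncoherent_metric(r, challenger) <= noncoherent_metric(r, best):
            break
        best = challenger
    return best
```

For PSK alphabets the inner per-symbol decision reduces to phase comparisons, which is presumably where the slide's multiplication-free claim comes from.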

  29. Example • Block-independent, flat, complex fading channel (L = 11) • Length-4000 uniform LDPC code with variable and check node degrees 3 and 6, respectively • [performance plot not transcribed]

  30. Generalization: arbitrary correlation • Memoryless fading → linear complexity • Constant fading → polynomial complexity • What happens in the general case? • The answer depends on both the rank of the covariance matrix and its shape in a straightforward way
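One hedged way to see why the covariance rank matters (an interpretation, not the slide's own argument): a rank-r fading covariance lets the fading over the block be expanded on r basis vectors, so the nuisance parameter lives in an r-dimensional space and the parameter-space partition is built in r dimensions.

```latex
% Rank-r eigen-expansion of the fading over the block:
\[
\mathbf{c} \;=\; \sum_{i=1}^{r} \lambda_i\, \mathbf{v}_i,
\qquad r \;=\; \operatorname{rank}\bigl(\mathbb{E}[\mathbf{c}\,\mathbf{c}^{H}]\bigr),
\]
% so the nuisance parameter is (\lambda_1,...,\lambda_r) in C^r:
% constant fading gives r = 1 (one parameter per block), memoryless fading
% gives r = N (one parameter per symbol).
```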

  31. Extension: multiple antennae • Can this receiver principle be extended to MIMO channels, i.e., space-time codes? • Extension to multiple Rx antennae is straightforward • Extension to multiple Tx antennae is trickier. Possible for: • Alamouti-type space-time codes • other orthogonal space-time codes (ongoing research)
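As a reminder of why Alamouti-type codes are the natural first extension (standard background, not from the slides): with a 2-Tx Alamouti block and channel gains held constant over the pair, the received samples decouple into per-symbol statistics, which is the same structural property the single-antenna receiver exploits.

```latex
% Alamouti block over two symbol periods (one receive antenna shown):
\[
r_1 = h_1 s_1 + h_2 s_2 + n_1, \qquad
r_2 = -h_1 s_2^{*} + h_2 s_1^{*} + n_2.
\]
% Linear combining with the channel gains decouples the two symbols:
\[
y_1 = h_1^{*} r_1 + h_2 r_2^{*} = \bigl(|h_1|^{2}+|h_2|^{2}\bigr) s_1 + \tilde{n}_1, \qquad
y_2 = h_2^{*} r_1 - h_1 r_2^{*} = \bigl(|h_1|^{2}+|h_2|^{2}\bigr) s_2 + \tilde{n}_2.
\]
```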

  32. Example: 1 Tx, 2 Rx • [figure not transcribed]

  33. Other applications • Joint data detection and forward phase and frequency acquisition/tracking • The parameter space is the (phase, frequency-offset) plane and is partitioned by straight lines (simple polygon-processing algorithm; complexity ~ N³) • The algorithm remains exact for small and large frequency offsets • Also: 2-state trellis codes…

  34. Conclusions • Exact MAP sequence or symbol-by-symbol detection in channels with memory is not necessarily an NP-hard problem • The proof of this statement leads to new ("ultra-fast") receiver structures • Performance has been verified for several applications • Extensions to certain classes of space-time codes are possible • Several other joint data detection/estimation problems can be put under the same framework
