
BAE 790I / BMME 231 Fundamentals of Image Processing Class 22


Presentation Transcript


  1. BAE 790I / BMME 231 Fundamentals of Image Processing Class 22 • Information Theory • Maximum Entropy • Equivalence with MAP • Review of Restoration

  2. Information Theory • What information does an image convey? • Consider the histogram of a gray-level image. • Normalized by the total number of pixels, the histogram can be treated as a probability distribution p_k over the gray levels k.
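
As a sketch, the normalization step might look like this in Python (NumPy and 8-bit gray levels assumed; the helper name is ours, not from the slides):

```python
import numpy as np

def gray_level_distribution(image, levels=256):
    """Turn the histogram of a gray-level image into a probability distribution."""
    counts, _ = np.histogram(image, bins=levels, range=(0, levels))
    return counts / image.size  # p[k] = fraction of pixels at gray level k
```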

  3. Information Theory • Define the information content of a particular gray level k to be: I_k = -log2(p_k) • This has units of bits. • The information content of a gray level is inversely related to the probability of seeing that level. • If a gray level is less likely, it conveys more information when it does appear.

  4. Entropy • The average information in the entire image is called the entropy: H = -Σ_k p_k log2(p_k) • For a given number of gray levels, the entropy is maximum if all p_k are equal; i.e., all gray levels are equally likely.
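
A small sketch of both definitions, using the formulas above; the final line illustrates the maximum-entropy claim for a uniform distribution:

```python
import numpy as np

def information_bits(p_k):
    """Information content of a gray level with probability p_k: I = -log2(p_k)."""
    return -np.log2(p_k)

def entropy_bits(p):
    """Average information H = -sum(p_k * log2(p_k)); empty levels contribute 0."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# All 256 levels equally likely -> maximum entropy, log2(256) = 8 bits:
print(entropy_bits(np.full(256, 1 / 256)))  # 8.0
```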

  5. Entropy • Applied to intensity histograms, the entropy measures the uniformity of the gray-level distribution. • The apportionment of total intensity across the pixels of an image can be viewed as a distribution as well: f_i / T, where T is the total intensity. • This suggests using entropy as a measure of image smoothness.

  6. Maximum Entropy • Consider the set of all images f with total intensity T: Σ_i f_i = T • Consider the image to be a distribution of the total intensity. Its entropy could then be defined as: H = -Σ_i (f_i / T) ln(f_i / T)

  7. Maximum Entropy • The following form is equivalent and more widely used: H = -Σ_i f_i ln f_i • Note that we are using a natural log here.
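
The two forms agree because, with T fixed, -Σ f_i ln f_i = T·H - T ln T, where H is the normalized entropy from the previous slide, so both are maximized by the same image. A quick numerical check (NumPy assumed):

```python
import numpy as np

def entropy_normalized(f):
    """H = -sum((f_i/T) * ln(f_i/T)), with T the total intensity."""
    q = f[f > 0] / f.sum()
    return -np.sum(q * np.log(q))

def entropy_unnormalized(f):
    """The more widely used form: -sum(f_i * ln(f_i)) (natural log)."""
    fp = f[f > 0]
    return -np.sum(fp * np.log(fp))

f = np.random.rand(16)   # an arbitrary positive "image"
T = f.sum()
# The two forms differ only by the scale T and the constant T*ln(T):
assert np.isclose(entropy_unnormalized(f), T * entropy_normalized(f) - T * np.log(T))
```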

  8. Maximum Entropy • As an example, let us find the image with maximum entropy subject to the total intensity being T. • This is a constrained optimization, so use Lagrange multipliers: • Set up an objective function with a Lagrange parameter λ for the constraint. • Take derivatives and solve for both f and λ.

  9. Maximum Entropy • The objective function: J(f, λ) = -Σ_i f_i ln f_i + λ(T - Σ_i f_i) • Take derivatives and set to zero: ∂J/∂f_i = -ln f_i - 1 - λ = 0

  10. Maximum Entropy • Solve for f: f_i = e^{-(1+λ)}, the same value for every pixel i. • Therefore, all of the pixels are the same. • The constraint on total intensity lets us solve for λ, so that f_i = T/N for an image of N pixels. • The image with maximum entropy is uniformly gray.
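
A numerical sanity check of this result: among random positive images with the same total intensity, none beats the flat image (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 64, 100.0
entropy = lambda f: -np.sum(f * np.log(f))

flat = np.full(N, T / N)                 # the uniformly gray image
for _ in range(1000):
    f = rng.dirichlet(np.ones(N)) * T    # random positive image with sum T
    assert entropy(f) <= entropy(flat)
```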

  11. Maximum Entropy • So, what does this have to do with restoration? • Consider a case where we don’t know the true noise distribution (Rnn unknown). • Assume that we only know the expected total noise level in the image: E[Σ_i n_i²] = Nσ² (N pixels, noise variance σ²)

  12. Maximum Entropy • By our imaging model, n = g - Hf. • It should then be true (on average) that: E[||g - Hf||²] = Nσ² • It makes sense for us to obtain an image estimate that meets this condition: ||g - Hf̂||² = Nσ²
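
A Monte Carlo check of this condition, assuming zero-mean white Gaussian noise with variance σ² in each of N pixels (the constant Nσ² above is reconstructed under that assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 100_000, 2.0
n = rng.normal(0.0, sigma, size=N)   # noise image n = g - Hf
print(np.sum(n**2) / N)              # ~= sigma^2, i.e., E[||n||^2] = N * sigma^2
```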

  13. Maximum Entropy • An argument for this approach: maximum likelihood tends to overfit the data, matching the noise as well as the signal. • Maximum entropy says that the image estimate should only be as close to the measured data as the expected noise level allows.

  14. Maximum Entropy • The condition for our image estimate is: ||g - Hf̂||² = Nσ² • There are infinitely many images that meet this condition. • Which do we choose?

  15. Maximum Entropy • Choose the smoothest image estimate that meets this condition. • Smoothness here is measured by entropy: the smoothest image is the one with the largest average information content. • That is, choose the image with maximum entropy among all images meeting the above condition.

  16. Maximum Entropy • Argument: The image with maximum entropy injects the smallest amount of information into the solution process. • This means that we do not introduce correlations that are not supported by the measured data. • We’ll see….

  17. Maximum Entropy • Find the image estimate: f̂ = arg max_f [-Σ_i f_i ln f_i] subject to ||g - Hf||² = Nσ² • This is another constrained optimization; use Lagrange multipliers again. • First, set up an objective function (written here as a minimization): J(f, λ) = ||g - Hf||² + λ Σ_i f_i ln f_i, with λ chosen so that the constraint is met.

  18. Maximum Entropy • The usual approach: take derivatives and set them to zero: -2[Hᵀ(g - Hf)]_i + λ(ln f_i + 1) = 0 • This cannot be solved explicitly for f, since f appears both inside the logarithm and in the data term. • Iterative methods must be used (a sketch follows below).
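
A minimal iterative sketch, assuming the penalized form J(f) = ||g - Hf||² + λ Σ f_i ln f_i with a fixed λ (in the full method, λ would be adjusted until ||g - Hf̂||² = Nσ²); plain gradient descent with a positivity floor:

```python
import numpy as np

def max_entropy_restore(g, H, lam=0.1, step=1e-3, iters=5000, floor=1e-8):
    """Minimize J(f) = ||g - H f||^2 + lam * sum(f ln f) by gradient descent.

    The positivity floor reflects the fact that f ln f is undefined for f <= 0.
    """
    f = np.ones(H.shape[1])  # start from a flat image
    for _ in range(iters):
        grad = -2.0 * (H.T @ (g - H @ f)) + lam * (np.log(f) + 1.0)
        f = np.maximum(f - step * grad, floor)
    return f

# Tiny demo on a random blur-like system (all quantities hypothetical):
rng = np.random.default_rng(2)
H = np.abs(rng.normal(size=(32, 32))); H /= H.sum(axis=1, keepdims=True)
f_true = np.abs(rng.normal(1.0, 0.3, size=32))
g = H @ f_true + rng.normal(0.0, 0.01, size=32)
f_hat = max_entropy_restore(g, H)
```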

  19. Maximum Entropy • The solution must be positive because the entropy is not defined for negative values. • ME is considered a good choice for problems with insufficient information since it does not make many assumptions • Or does it?

  20. Maximum Entropy • Consider a MAP solution with a Gaussian likelihood (noise variance σ²) and the prior: P[f] ∝ exp(-b Σ_i f_i ln f_i) • The objective function (the negative log posterior, to be minimized) is then: J_MAP(f) = ||g - Hf||²/(2σ²) + b Σ_i f_i ln f_i

  21. Maximum Entropy • Compare these: J_ME(f) = ||g - Hf||² + λ Σ_i f_i ln f_i and J_MAP(f) = ||g - Hf||²/(2σ²) + b Σ_i f_i ln f_i • If λ = 2σ²b, then these are the same except for a scale factor. The same image optimizes both.
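
The algebra behind the scale-factor claim, in the notation reconstructed above (b is the prior strength, σ² the noise variance; both objectives are minimized):

```latex
J_{ME}(\hat f)  = \|g - H\hat f\|^2 + \lambda \sum_i \hat f_i \ln \hat f_i , \qquad
J_{MAP}(\hat f) = \frac{1}{2\sigma^2}\|g - H\hat f\|^2 + b \sum_i \hat f_i \ln \hat f_i .

2\sigma^2 \, J_{MAP}(\hat f)
  = \|g - H\hat f\|^2 + 2\sigma^2 b \sum_i \hat f_i \ln \hat f_i
  = J_{ME}(\hat f) \quad \text{when } \lambda = 2\sigma^2 b .
```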

  22. Maximum Entropy • Maximum entropy is therefore the same as MAP with a certain prior, an entropy prior: P[f] ∝ exp(-b Σ_i f_i ln f_i)

  23. Maximum Entropy • Remember that the MAP solution is pushed away from the ML solution toward the maximum of the prior, which here is a flat gray image. • Therefore, maximum entropy achieves a solution between the ML solution and a flat gray image. • It works well on astronomy images, which are mostly dark with a few bright sources.

  24. Maximum Entropy [Figure: example restorations; panels: True, ML (noiseless), ML (noisy), ML + filter, ME]

  25. Maximum Entropy [Figure: intensity profiles through the restorations] • Maximum entropy smooths by bringing down the intensities in the image.

  26. Summary of Statistical Restoration • Bayesian methods work from the posterior P[f|g]. • MAP solution: maximize P[f|g]. • MMSE solution: E[f|g]. • ME solution: MAP with a special (entropy) prior P[f]. • ML solution: MAP with a uniform prior P[f]. • WLS solution: ML with Gaussian noise; reduces to the LS solution when Rnn is diagonal. • Wiener filter: the MMSE solution when the statistics (Rnn) are stationary.
