BAE 790I / BMME 231 Fundamentals of Image Processing, Class 22 • Information Theory • Maximum Entropy • Equivalence with MAP • Review of Restoration
Information Theory • What information does an image convey? • Consider the histogram of a gray-level image. Normalized by the total number of pixels, it could be treated as a probability distribution: p_i is the probability of observing gray level i.
Information Theory • Define the information content of a particular gray level i to be: I_i = −log2 p_i • This has units of bits. • The information content of a gray level is inversely related to the probability of seeing that level. • If a gray level is less likely, it conveys more information when it does appear.
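For instance, a minimal Python sketch (the function name is illustrative, not from the slides):

```python
import math

def information_bits(p):
    """Information content, in bits, of a gray level seen with probability p."""
    return -math.log2(p)

print(information_bits(0.5))      # 1 bit: a gray level seen half the time
print(information_bits(1 / 256))  # 8 bits: a rare gray level conveys more
```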
Entropy • The average information in the entire image is called the entropy: H = −Σ_i p_i log2 p_i • For a given number of gray levels, the entropy is maximum when all p_i are equal, i.e., when all gray levels are equally likely.
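A minimal sketch of this computation, assuming the histogram arrives as a vector of counts (names here are illustrative):

```python
import numpy as np

def entropy_bits(hist):
    """Shannon entropy (bits) of a gray-level histogram of counts."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()     # normalize counts to probabilities p_i
    p = p[p > 0]        # by convention 0 * log(0) = 0
    return float(-np.sum(p * np.log2(p)))

# Uniform over 8 gray levels: the maximum, log2(8) = 3 bits
print(entropy_bits([1] * 8))
# A strongly peaked histogram carries much less average information
print(entropy_bits([97, 1, 1, 1]))
```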
Entropy • In terms of intensity histograms, the entropy measures the uniformity of the gray-level distribution. • The apportionment of total intensity in an image can be viewed as a distribution as well. • This suggests using entropy as a measure of smoothness.
Maximum Entropy • Consider the set of all images f with total intensity T: Σ_i f_i = T • Treat the image as a distribution of the total intensity. Its entropy can then be defined as: S = −Σ_i (f_i/T) ln(f_i/T)
Maximum Entropy • The following is equivalent and more widely used: S(f) = −Σ_i f_i ln f_i • Note that we are using a natural log here. • (For fixed T, the two definitions differ only by a scale factor and an additive constant, so the same image maximizes both.)
Maximum Entropy • As an example, let us find the image with maximum entropy such that the total intensity is T. • This is a constrained optimization. Use Lagrange multipliers. • Set up an objective function with a Lagrange parameter for the constraint • Take derivative and solve for both f and the constraint parameter
Maximum Entropy • The objective function: J(f) = −Σ_i f_i ln f_i + λ(T − Σ_i f_i) • Take derivatives and set to zero: ∂J/∂f_i = −ln f_i − 1 − λ = 0
Maximum Entropy • Solve for f: f_i = e^−(1+λ), the same value for every i. • Therefore, all of the pixels are the same. • The constraint on total intensity lets us solve for λ, so that: f_i = T/N, where N is the number of pixels. • The image with maximum entropy is uniformly gray.
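This result is easy to check numerically: among positive images with the same total intensity T, nothing beats the flat one on entropy. A small sketch with assumed toy sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 10.0

def S(f):
    """Image entropy S(f) = -sum f_i ln f_i (natural log)."""
    f = np.asarray(f, dtype=float)
    return float(-np.sum(f * np.log(f)))

uniform = np.full(N, T / N)   # the flat gray image, f_i = T/N
# Random positive images sharing the same total intensity T
candidates = [rng.dirichlet(np.ones(N)) * T for _ in range(1000)]

print(S(uniform))
print(max(S(f) for f in candidates) <= S(uniform) + 1e-9)  # True
```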
Maximum Entropy • So, what does this have to do with restoration? • Consider a case where we don't know the true noise distribution (Rnn unknown). • Assume that we only know the expected total noise level in the image: E[nᵀn]
Maximum Entropy • By our imaging model, n = g − Hf. • It should then be true (on average) that: E[||g − Hf||²] = E[nᵀn] • It makes sense for us to obtain an image estimate f̂ that meets this condition: ||g − Hf̂||² = E[nᵀn]
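For zero-mean noise with per-pixel variance σ², the expected total noise level is E[nᵀn] = Nσ², which a quick simulation confirms (a sketch with assumed values):

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 10_000, 2.0

# n = g - Hf is zero-mean noise with variance sigma^2 per pixel,
# so the total noise level n.n should average N * sigma^2 = 40000.
n = rng.normal(0.0, sigma, size=N)
print(n @ n)   # close to 40000
```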
Maximum Entropy • An argument for this approach is that maximum likelihood tends to overfit the data, matching the noise as well as the signal. • Maximum entropy says that the image estimate should only be as close to the measured data as the expected noise level allows.
Maximum Entropy • The condition for our image estimate is: ||g − Hf̂||² = E[nᵀn] • There are infinitely many images that meet this condition. • Which do we choose?
Maximum Entropy • Choose the smoothest image estimate that meets this condition. • Here, smoothness is measured by entropy: the smoothest image is the one with the largest average information content. • That is, choose the image with maximum entropy among those satisfying the condition above.
Maximum Entropy • Argument: The image with maximum entropy injects the smallest amount of information into the solution process. • This means that we do not introduce correlations that are not supported by the measured data. • We’ll see….
Maximum Entropy • Find the image estimate: maximize S(f) = −Σ_i f_i ln f_i subject to ||g − Hf||² = E[nᵀn] • This is another constrained optimization. Use Lagrange multipliers again. • First, set up an objective function (to be minimized): J(f) = ||g − Hf||² + λ Σ_i f_i ln f_i, with λ chosen so that the constraint is met.
Maximum Entropy • The usual approach: take the derivative and set it to zero: ∂J/∂f = −2Hᵀ(g − Hf) + λ(ln f + 1) = 0, with ln taken elementwise. • This cannot be solved explicitly for f. • Must use iterative methods.
Maximum Entropy • The solution must be positive, because f ln f (and hence the entropy) is not defined for negative values. • ME is considered a good choice for problems with insufficient information, since it appears to make few assumptions. • Or does it?
Maximum Entropy • Consider a MAP solution where the noise is Gaussian with variance σ_b² and the prior is P[f] ∝ exp(−Σ_i f_i ln f_i) • The objective function (the negative log-posterior, to be minimized, dropping constants) is then: ||g − Hf||² + 2σ_b² Σ_i f_i ln f_i
Maximum Entropy • Compare the ME objective and the MAP objective above: • If λ = 2σ_b², these are the same except for a scale factor. The same image optimizes both.
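The equivalence is easy to verify numerically in a one-pixel toy case (H = 1; all values assumed): with λ = 2σ_b², one objective is a scalar multiple of the other, so they share the same minimizer:

```python
import numpy as np

g, sigma2 = 3.0, 0.25        # one noisy pixel; sigma2 plays the role of sigma_b^2
lam = 2.0 * sigma2           # the claimed correspondence: lambda = 2 sigma_b^2

f = np.linspace(0.01, 6.0, 100_000)
J_me  = (g - f) ** 2 + lam * f * np.log(f)            # ME objective (minimize)
J_map = (g - f) ** 2 / (2 * sigma2) + f * np.log(f)   # neg. log-posterior (minimize)

# J_me = 2*sigma2 * J_map exactly, so both pick the same f
print(f[np.argmin(J_me)], f[np.argmin(J_map)])
```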
Maximum Entropy • Maximum entropy is therefore the same as MAP with a certain prior: P[f] ∝ exp(−Σ_i f_i ln f_i) = exp(S(f))
Maximum Entropy • Remember that the MAP solution pushes the estimate away from the ML solution toward the maximum of the prior. • For the entropy prior, that maximum is a flat gray image. • Therefore, maximum entropy achieves a solution between the ML solution and a flat gray image. • It works well on astronomy images.
Maximum Entropy [Figure: reconstructions compared — True, ML (noiseless data), ML (noisy data), ML + filter, ME]
Maximum Entropy [Figure: profiles through the reconstructions] • Maximum entropy smooths by bringing down the intensities in the image.
Summary of Statistical Restoration
• Bayesian methods: based on the posterior P[f|g]
• MAP solution: maximize P[f|g]
  • ME solution: MAP with a special prior P[f]
  • ML solution: MAP with a uniform prior P[f]
    • WLS solution: ML with Rnn diagonal
    • LS solution: ML with Rnn stationary
• MMSE solution: E[f|g] → Wiener filter