
Natural Scene Statistics based No-Reference Image Quality Assessment



  1. Natural Scene Statistics based No-Reference Image Quality Assessment • Lin ZHANG • School of Software Engineering • Tongji University • 2015. 08. 26

  2. Contents • Background introduction of IQA • Opinion-aware based NR-IQA methods • IL-NIQE: An opinion-unaware approach • Summary

  3. Background Introduction of IQA • The goal of IQA research is to develop objective metrics that measure image quality in a way that is consistent with subjective human judgments • Classification of the IQA problem • Full-reference IQA (FR-IQA) • No-reference IQA (NR-IQA) • Reduced-reference IQA (RR-IQA)

  4. Background Introduction of IQA • FR-IQA metrics can be used in the following applications • Measuring the performance of image enhancement or restoration algorithms, such as algorithms for denoising, deblurring, dehazing, etc. • Measuring the performance of image compression algorithms • PSNR is a widely used FR-IQA metric; however, it has severe problems (see the sketch below)
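As a point of reference, here is a minimal sketch of how MSE and PSNR are computed, assuming 8-bit grayscale images; the function names are illustrative and not taken from the slides.

import numpy as np

def mse(ref, dist):
    # Mean squared error between a reference and a distorted image
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    return np.mean((ref - dist) ** 2)

def psnr(ref, dist, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference
    err = mse(ref, dist)
    if err == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / err)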

  5. Background Introduction of IQA • [Figure: an original image (MSE = 0, SSIM = 1) and five distorted versions that all have MSE = 309 with respect to the original, yet SSIM values of 0.987, 0.928, 0.730, 0.641, and 0.580; identical MSE can correspond to very different perceptual quality]

  6. Background Introduction of IQA

  7. Background Introduction of IQA • Our works on FR-IQA • FSIM/FSIMc [1], which has been widely adopted as a performance measure for image restoration methods • VSI [2] • Performance evaluation of modern IQA metrics: http://sse.tongji.edu.cn/linzhang/IQA/IQA.htm • [1] Lin Zhang et al., FSIM: A feature similarity index for image quality assessment, IEEE Trans. Image Processing, 20(8): 2378-2386, 2011 • [2] Lin Zhang et al., VSI: A visual saliency-induced index for perceptual image quality assessment, IEEE Trans. Image Processing, 23(10): 4270-4281, 2014

  8. Background Introduction of IQA • No-reference IQA • Devise computational models to estimate the quality of a given image as perceived by human beings • The only information an NR-IQA algorithm receives is the image being assessed itself • Compared with FR-IQA, NR-IQA is more challenging

  9. Background Introduction of IQA • What do you think of the quality of these two images? • Even though you are not given the ground-truth reference images, you can still judge the quality of these two images as poor

  10. Background Introduction of IQA • An example showing the significance of NR-IQA research • [Figure: two images with quality scores of 39.3264 and 87.7649, computed by IL-NIQE; the smaller the score, the better the quality]

  11. Contents • Background introduction of IQA • Opinion-aware based NR-IQA methods • IL-NIQE: An opinion-unaware approach • Summary

  12. Opinion-aware based NR-IQA Methods • Opinion-aware approaches • These approaches require a dataset comprising distorted images and associated subjective scores • At the training stage, feature vectors are extracted from the training images and a regression model mapping the feature vectors to the subjective scores is learned • At the testing stage, a feature vector is extracted from the test image and its quality score is predicted by feeding this vector into the learned regression model (see the sketch below)
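A minimal sketch of this train/test pipeline, assuming scikit-learn is available and that extract_features is some NSS feature extractor; both the feature extractor and the choice of support vector regression are illustrative assumptions, not something the slides prescribe.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_opinion_aware_model(train_images, subjective_scores, extract_features):
    # Training stage: learn a regression model mapping feature vectors to subjective scores
    X = np.array([extract_features(img) for img in train_images])
    y = np.array(subjective_scores)
    model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=0.1))
    model.fit(X, y)
    return model

def predict_quality(model, test_image, extract_features):
    # Testing stage: predict the quality score of an unseen image from its feature vector
    x = np.asarray(extract_features(test_image)).reshape(1, -1)
    return float(model.predict(x)[0])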

  13. Opinion-aware based NR-IQA Methods • Opinion-aware approaches • [Diagram: training/testing pipeline in which feature vectors are mapped to quality scores by a learned regression model]

  14. Opinion-aware based NR-IQA Methods • Opinion-aware approaches • BIQI [1] • BRISQUE [2] • BLIINDS [3] • BLIINDS-II [4] • DIIVINE [5] • CORNIA [6] • LBIQ [7]

  15. Opinion-aware based NR-IQA Methods • Opinion-aware approaches • [1] A.K. Moorthy and A.C. Bovik, A two-step framework for constructing blind image quality indices, IEEE Sig. Process. Letters, 17: 513-516, 2010 • [2] A. Mittal, A.K. Moorthy, and A.C. Bovik, No-reference image quality assessment in the spatial domain, IEEE Trans. Image Process., 21: 4695-4708, 2012 • [3] M.A. Saad, A.C. Bovik, and C. Charrier, A DCT statistics-based blind image quality index, IEEE Sig. Process. Letters, 17: 583-586, 2010 • [4] M.A. Saad, A.C. Bovik, and C. Charrier, Blind image quality assessment: A natural scene statistics approach in the DCT domain, IEEE Trans. Image Process., 21: 3339-3352, 2012 • [5] A.K. Moorthy and A.C. Bovik, Blind image quality assessment: from natural scene statistics to perceptual quality, IEEE Trans. Image Process., 20: 3350-3364, 2011 • [6] P. Ye, J. Kumar, L. Kang, and D. Doermann, Unsupervised feature learning framework for no-reference image quality assessment, CVPR, 2012 • [7] H. Tang, N. Joshi, and A. Kapoor, Learning a blind measure of perceptual image quality, CVPR, 2011

  16. Opinion-aware based NR-IQA Methods • Existing opinion-aware approaches have weak generalization capability • It is difficult to collect enough training samples covering the various distortion types and their combinations • If a BIQA model trained on a certain set of distortion types is applied to a test image containing a different distortion type, the predicted quality score is unreliable and likely inaccurate • Existing BIQA models are trained on, and are thus dependent to some degree on, one of the available public databases; when a model learned on one database is applied to another database, or to real-world distorted images, the quality prediction performance can be very poor

  17. Opinion-unaware approaches proposed recently • These approaches do NOT require a dataset comprising distorted images and associated subjective scores • A typical method is NIQE [1] • Offline learning stage: a collection of quality-aware features is extracted from pristine images and fitted to a multivariate Gaussian (MVG) model M • Testing stage: the quality of a test image is expressed as the distance between an MVG fit of its features and M (see the sketch below) • [1] A. Mittal et al., Making a "completely blind" image quality analyzer, IEEE Signal Process. Letters, 20(3): 209-212, 2013
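A minimal sketch of the NIQE-style testing step, assuming the pristine MVG (mu_p, cov_p) has already been learned offline and an MVG (mu_t, cov_t) has been fitted to the test image's features; the distance form below follows the one commonly reported for NIQE and should be treated as an illustrative reconstruction.

import numpy as np

def niqe_style_distance(mu_p, cov_p, mu_t, cov_t):
    # Distance between the pristine MVG model and the MVG fitted to the test image;
    # a larger distance indicates worse predicted quality (illustrative sketch)
    diff = (mu_p - mu_t).reshape(-1, 1)
    pooled_cov = (cov_p + cov_t) / 2.0
    d2 = diff.T @ np.linalg.pinv(pooled_cov) @ diff
    return float(np.sqrt(d2))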

  18. Contents • Background introduction of IQA • Opinion-aware based NR-IQA methods • IL-NIQE: An opinion-unaware approach • Motivations • NSS-induced quality-aware features • Pristine model learning • IL-NIQE index • Experimental results • Summary

  19. Motivations for IL-NIQE [1] • Design rationale • Natural images without quality distortions possess regular statistical properties that are measurably modified by the presence of distortions • Deviations from the regularity of natural statistics, when quantified appropriately, can be used to assess the perceptual quality of an image • NSS-based features have proven powerful; are there other useful NSS-based features? • IL-NIQE stands for Integrated Local Natural Image Quality Evaluator • [1] Lin Zhang et al., A feature-enriched completely blind image quality evaluator, IEEE Trans. Image Processing, 24(8): 2579-2591, 2015

  20. Flowchart of IL-NIQE • Offline pristine model learning: patches are extracted from a collection of pristine images, n high-contrast patches are selected, a feature vector is extracted from each of them (n feature vectors), and an MVG model is fitted to obtain the pristine MVG parameters • Online quality evaluation of a test image: the test image is partitioned into k patches, a feature vector is extracted from each patch (k feature vectors), a quality score is computed for each patch against the pristine model, and the final quality score is obtained by pooling

  21. Contents • Background introduction of IQA • Opinion-aware based NR-IQA methods • IL-NIQE: An opinion-unaware approach • Motivations • NSS-induced quality-aware features • Pristine model learning • IL-NIQE index • Experimental results • Summary

  22. NSS-induced quality-aware features • Statistics of normalized luminance • The mean subtracted contrast normalized (MSCN) coefficients have been observed to follow a unit normal distribution when computed from natural images without quality distortions [1] • This model, however, is violated when images are subjected to quality distortions; the degree of violation can be indicative of distortion severity • [1] D.L. Ruderman. The statistics of natural images. Netw. Comput. Neural Syst., 5(4):517-548, 1994.

  23. NSS-induced quality-aware features • Statistics of normalized luminance • The MSCN coefficients are computed as $I_n(x,y) = \dfrac{I(x,y)-\mu(x,y)}{\sigma(x,y)+1}$, which conforms to a Gaussian distribution for distortion-free natural images • where $\mu(x,y)=\sum_{k,l} w_{k,l}\, I(x+k,y+l)$ and $\sigma(x,y)=\sqrt{\sum_{k,l} w_{k,l}\,\big[I(x+k,y+l)-\mu(x,y)\big]^2}$, with $w$ a 2D circularly symmetric Gaussian weighting window

  24. NSS-induced quality-aware features • Statistics of normalized luminance • We use a generalized Gaussian distribution (GGD) to model the distribution of the MSCN coefficients $I_n$ • Density function of the GGD: $g(x;\alpha,\beta)=\dfrac{\alpha}{2\beta\,\Gamma(1/\alpha)}\exp\!\big(-(|x|/\beta)^{\alpha}\big)$ • The parameters $(\alpha,\beta)$ are used as quality-aware features
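A minimal sketch of computing MSCN coefficients and fitting a GGD to them, assuming a grayscale float image and scipy; the Gaussian window width and the moment-matching GGD estimator below are common illustrative choices, not necessarily the exact ones used in IL-NIQE.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn_coefficients(img, sigma=7/6, C=1.0):
    # Mean subtracted contrast normalized coefficients (window width is an assumption)
    mu = gaussian_filter(img, sigma)
    sigma_map = np.sqrt(np.abs(gaussian_filter(img * img, sigma) - mu * mu))
    return (img - mu) / (sigma_map + C)

def fit_ggd(x):
    # Moment-matching estimate of the GGD shape (alpha) and scale (beta)
    x = x.ravel()
    gam = np.arange(0.2, 10.0, 0.001)
    r_gam = (gamma(1.0 / gam) * gamma(3.0 / gam)) / (gamma(2.0 / gam) ** 2)
    sigma_sq = np.mean(x ** 2)
    E_abs = np.mean(np.abs(x))
    rho = sigma_sq / (E_abs ** 2 + 1e-12)
    alpha = gam[np.argmin((r_gam - rho) ** 2)]
    beta = np.sqrt(sigma_sq * gamma(1.0 / alpha) / gamma(3.0 / alpha))
    return alpha, beta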

  25. NSS-induced quality-aware features • Statistics of MSCN products • The distributions of products of pairs of adjacent MSCN coefficients, $I_n(x,y)I_n(x,y+1)$, $I_n(x,y)I_n(x+1,y)$, $I_n(x,y)I_n(x+1,y+1)$, and $I_n(x,y)I_n(x+1,y-1)$, can also capture quality distortions

  26. NSS-induced quality-aware features • Statistics of MSCN products • They can be modeled by an asymmetric generalized Gaussian distribution (AGGD): $g(x;\gamma,\beta_l,\beta_r)=\dfrac{\gamma}{(\beta_l+\beta_r)\Gamma(1/\gamma)}\exp\!\big(-(-x/\beta_l)^{\gamma}\big)$ for $x \le 0$, and $\dfrac{\gamma}{(\beta_l+\beta_r)\Gamma(1/\gamma)}\exp\!\big(-(x/\beta_r)^{\gamma}\big)$ for $x > 0$ • The mean of the AGGD is $\eta=(\beta_r-\beta_l)\dfrac{\Gamma(2/\gamma)}{\Gamma(1/\gamma)}$ • $(\gamma,\beta_l,\beta_r,\eta)$ are used as "quality-aware" features
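A minimal sketch of forming the four pairwise MSCN products and estimating the AGGD parameters by moment matching; the estimator is the commonly used one and is an illustrative choice, not necessarily the exact procedure used in IL-NIQE.

import numpy as np
from scipy.special import gamma

def paired_products(mscn):
    # Horizontal, vertical, main-diagonal, and secondary-diagonal neighbor products
    H = mscn[:, :-1] * mscn[:, 1:]
    V = mscn[:-1, :] * mscn[1:, :]
    D1 = mscn[:-1, :-1] * mscn[1:, 1:]
    D2 = mscn[:-1, 1:] * mscn[1:, :-1]
    return H, V, D1, D2

def fit_aggd(x):
    # Moment-matching estimate of AGGD shape, left/right scales, and mean (illustrative)
    x = x.ravel()
    left, right = x[x < 0], x[x >= 0]
    sigma_l = np.sqrt(np.mean(left ** 2)) if left.size else 1e-6
    sigma_r = np.sqrt(np.mean(right ** 2)) if right.size else 1e-6
    gamma_hat = sigma_l / sigma_r
    r_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    R_hat = r_hat * (gamma_hat ** 3 + 1) * (gamma_hat + 1) / (gamma_hat ** 2 + 1) ** 2
    grid = np.arange(0.2, 10.0, 0.001)
    r_grid = (gamma(2.0 / grid) ** 2) / (gamma(1.0 / grid) * gamma(3.0 / grid))
    shape = grid[np.argmin((r_grid - R_hat) ** 2)]
    beta_l = sigma_l * np.sqrt(gamma(1.0 / shape) / gamma(3.0 / shape))
    beta_r = sigma_r * np.sqrt(gamma(1.0 / shape) / gamma(3.0 / shape))
    eta = (beta_r - beta_l) * gamma(2.0 / shape) / gamma(1.0 / shape)
    return shape, beta_l, beta_r, eta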

  27. NSS-induced quality-aware features • Statistics of partial derivatives and gradient magnitudes • We found that when quality distortions are introduced into an image, the distributions of its partial derivatives and gradient magnitudes change

  28. NSS-induced quality-aware features • Statistics of partial derivatives and gradient magnitudes • [Figure 1(a)-(e): an example illustrating how distortions change the distributions of partial derivatives and gradient magnitudes]

  29. NSS-induced quality-aware features • Statistics of partial derivatives and gradient magnitudes

  30. NSS-induced quality-aware features • Statistics of partial derivatives and gradient magnitudes • Partial derivatives: $I_x(x,y)=\partial I/\partial x$ and $I_y(x,y)=\partial I/\partial y$ • Gradient magnitudes: $G(x,y)=\sqrt{I_x^2(x,y)+I_y^2(x,y)}$

  31. NSS-induced quality-aware features • Statistics of partial derivatives and gradient magnitudes • We use a GGD to model the distributions of $I_x$ (or $I_y$) and take its parameters as features • We use a Weibull distribution [1] to model the distribution of the gradient magnitudes: $p(x;a,b)=\dfrac{b}{a}\left(\dfrac{x}{a}\right)^{b-1}\exp\!\big(-(x/a)^{b}\big)$, $x \ge 0$ • The scale $a$ and shape $b$ are used as features • [1] J.M. Geusebroek and A.W.M. Smeulders, A six-stimulus theory for stochastic texture, Int. J. Comp. Vis., 62(1): 7-16, 2005
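A minimal sketch of these derivative and gradient-magnitude features, reusing the fit_ggd helper from the earlier sketch (passed in as an argument) and fitting the Weibull scale/shape with scipy; the finite-difference derivative filter and the fitting routine are illustrative assumptions.

import numpy as np
from scipy.stats import weibull_min

def gradient_features(img, fit_ggd):
    # Partial derivatives via simple finite differences (illustrative choice of filter)
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)
    G = np.sqrt(Ix ** 2 + Iy ** 2)

    alpha_x, beta_x = fit_ggd(Ix)   # GGD parameters of horizontal derivatives
    alpha_y, beta_y = fit_ggd(Iy)   # GGD parameters of vertical derivatives

    # Weibull fit of gradient magnitudes: shape b and scale a (location fixed at 0)
    b, _, a = weibull_min.fit(G.ravel() + 1e-6, floc=0)
    return [alpha_x, beta_x, alpha_y, beta_y, a, b]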

  32. NSS-induced quality-aware features • Statistics of the image's responses to log-Gabor filters • Motivation: neurons in the visual cortex respond selectively to stimulus orientation and frequency, so statistics computed on an image's multi-scale, multi-orientation decomposition should be useful for designing an NR-IQA model

  33. NSS-induced quality-aware features • Statistics of the image's responses to log-Gabor filters • For multi-scale, multi-orientation filtering, we adopt the log-Gabor filter: $G(\omega,\theta)=\underbrace{\exp\!\Big(-\dfrac{(\log(\omega/\omega_0))^2}{2\sigma_r^2}\Big)}_{\text{radial part}}\cdot\underbrace{\exp\!\Big(-\dfrac{(\theta-\theta_j)^2}{2\sigma_\theta^2}\Big)}_{\text{angular part}}$ • where $\theta_j$ is the orientation angle, $\omega_0$ is the center frequency, $\sigma_r$ controls the filter's radial bandwidth, and $\sigma_\theta$ determines the angular bandwidth

  34. NSS-induced quality-aware features • Statistics of the image's responses to log-Gabor filters • With log-Gabor filters having J orientations and N center frequencies, we obtain the response maps $\{e_{n,j}(x)\}$ and $\{o_{n,j}(x)\}$, where $e_{n,j}$ and $o_{n,j}$ represent the image's responses to the real and imaginary parts of the log-Gabor filter at the n-th frequency and j-th orientation • We extract the quality-aware features as follows (see the sketch below) • Use a GGD to fit the distribution of $\{e_{n,j}(x)\}$ (or $\{o_{n,j}(x)\}$) and take the model parameters $\alpha$ and $\beta$ as features • Use a GGD to model the distribution of the partial derivatives of $\{e_{n,j}(x)\}$ (or $\{o_{n,j}(x)\}$) and also take the two model parameters as features • Use a Weibull model to fit the distribution of the gradient magnitudes of $\{e_{n,j}(x)\}$ (or $\{o_{n,j}(x)\}$) and take the corresponding parameters $a$ and $b$ as features
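A minimal sketch of building one log-Gabor filter in the frequency domain and obtaining its even (real) and odd (imaginary) responses via the FFT; the transfer function follows the formula on the previous slide, while the default bandwidth values and the DC handling are illustrative assumptions.

import numpy as np

def log_gabor_response(img, omega0, theta_j, sigma_r=0.60, sigma_theta=0.40):
    # Build the log-Gabor transfer function on the FFT frequency grid (illustrative sketch)
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows).reshape(-1, 1)
    fx = np.fft.fftfreq(cols).reshape(1, -1)
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC component
    theta = np.arctan2(-fy, fx)

    radial = np.exp(-(np.log(radius / omega0) ** 2) / (2 * sigma_r ** 2))
    radial[0, 0] = 0.0                      # zero response at DC
    dtheta = np.angle(np.exp(1j * (theta - theta_j)))   # wrapped angular difference
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    transfer = radial * angular

    # Filter in the frequency domain; real/imaginary parts give the even/odd responses
    response = np.fft.ifft2(np.fft.fft2(img) * transfer)
    return np.real(response), np.imag(response)

Looping this over N center frequencies and J orientation angles yields the response maps from which the GGD and Weibull features above are extracted.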

  35. NSS-induced quality-aware features • Statistics of colors • Ruderman et al. showed that in a logarithmic-scale opponent color space, the distributions of the image data conform well to a Gaussian [1] • [1] D.L. Ruderman et al., Statistics of cone responses to natural images: implications for visual coding, J. Opt. Soc. Am. A, 15(8): 2036-2045, 1998

  36. NSS-induced quality-aware features • Statistics of colors • The RGB channels are first converted to logarithmic signals with their means subtracted: $\mathcal{R}=\log R-\langle\log R\rangle$, $\mathcal{G}=\log G-\langle\log G\rangle$, $\mathcal{B}=\log B-\langle\log B\rangle$, where $\langle\log X\rangle$ denotes the mean of $\log X(x,y)$ over the image • They are then projected to the opponent color space: $l_1=(\mathcal{R}+\mathcal{G}+\mathcal{B})/\sqrt{3}$, $l_2=(\mathcal{R}+\mathcal{G}-2\mathcal{B})/\sqrt{6}$, $l_3=(\mathcal{R}-\mathcal{G})/\sqrt{2}$ • For natural images, $l_1$, $l_2$, and $l_3$ conform well to a Gaussian distribution

  37. NSS-induced quality-aware features • Statistics of colors • [Figure 3(a)-(c): empirical distributions of the $l_1$, $l_2$, and $l_3$ channels]

  38. NSS-induced quality-aware features • Statistics of colors • We use a Gaussian to fit the distributions of $l_1$, $l_2$, and $l_3$: $g(x;\zeta,\rho^2)=\dfrac{1}{\sqrt{2\pi}\,\rho}\exp\!\Big(-\dfrac{(x-\zeta)^2}{2\rho^2}\Big)$ • For each of the $l_1$, $l_2$, and $l_3$ channels, we estimate the two parameters $\zeta$ and $\rho^2$ and take them as quality-aware features (see the sketch below)
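A minimal sketch of the color statistics: converting RGB to mean-subtracted log signals, projecting to l1, l2, l3, and fitting a Gaussian to each channel. The opponent projection follows the reconstruction above; the epsilon guard and the use of sample mean/variance as the Gaussian parameters are assumptions.

import numpy as np

def color_statistics_features(rgb):
    # rgb: H x W x 3 array with non-negative values; epsilon avoids log(0) (assumption)
    eps = 1e-6
    logR = np.log(rgb[:, :, 0] + eps)
    logG = np.log(rgb[:, :, 1] + eps)
    logB = np.log(rgb[:, :, 2] + eps)
    R, G, B = logR - logR.mean(), logG - logG.mean(), logB - logB.mean()

    # Opponent color channels
    l1 = (R + G + B) / np.sqrt(3.0)
    l2 = (R + G - 2.0 * B) / np.sqrt(6.0)
    l3 = (R - G) / np.sqrt(2.0)

    feats = []
    for chan in (l1, l2, l3):
        zeta = chan.mean()      # Gaussian mean parameter
        rho_sq = chan.var()     # Gaussian variance parameter
        feats.extend([zeta, rho_sq])
    return feats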

  39. Contents • Background introduction of IQA • Opinion-aware based NR-IQA methods • IL-NIQE: An opinion-unaware approach • Motivations • NSS-induced quality-aware features • Pristine model learning • IL-NIQE index • Experimental results • Summary

  40. Pristine model learning • Sample high quality images • The pristine model acts as a “standard” for representing characteristics of high quality images • It is learned from a pristine image set collected by us, which contains 90 high quality images

  41. Pristine model learning • Step 1: each pristine image is partitioned into patches • Step 2: high-contrast patches are selected based on the local variance field • Step 3: the quality-aware features are extracted from each selected patch; thus we obtain a feature vector set $X=[x_1,x_2,\ldots,x_M]\in\mathbb{R}^{d\times M}$ • where M is the number of patches and d is the feature dimension • d is very large, so a further dimensionality reduction step is needed

  42. Pristine model learning • Step 4: dimensionality reduction by PCA • Suppose $\Phi$ is the dimensionality reduction matrix; after the reduction, $\hat{x}_i=\Phi^{T}x_i$ • Step 5: fit an MVG model to $\{\hat{x}_i\}$ and regard it as the pristine model: $g(x)=\dfrac{1}{(2\pi)^{k/2}|\Sigma|^{1/2}}\exp\!\Big(-\dfrac{1}{2}(x-\nu)^{T}\Sigma^{-1}(x-\nu)\Big)$ • where $\nu$ is the mean vector and $\Sigma$ is the covariance matrix • The mean vector and covariance matrix of the pristine model are denoted as $\nu_1$ and $\Sigma_1$ (see the sketch below)
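A minimal sketch of the offline pristine model learning, assuming a per-patch feature extractor extract_patch_features and using scikit-learn's PCA for the dimensionality reduction; the patch size, variance threshold, and retained-variance setting are illustrative stand-ins for the procedure the slides describe.

import numpy as np
from sklearn.decomposition import PCA

def learn_pristine_model(pristine_images, extract_patch_features,
                         patch_size=84, var_quantile=0.75, n_components=0.95):
    # patch_size, var_quantile, and n_components are illustrative defaults (assumptions)
    feats = []
    for img in pristine_images:
        # Step 1: partition the image into non-overlapping patches
        H, W = img.shape[:2]
        patches = [img[i:i + patch_size, j:j + patch_size]
                   for i in range(0, H - patch_size + 1, patch_size)
                   for j in range(0, W - patch_size + 1, patch_size)]
        # Step 2: keep only high-contrast patches (large local variance)
        variances = np.array([p.var() for p in patches])
        thresh = np.quantile(variances, var_quantile)
        selected = [p for p, v in zip(patches, variances) if v >= thresh]
        # Step 3: extract a quality-aware feature vector from each selected patch
        feats.extend(extract_patch_features(p) for p in selected)
    X = np.array(feats)

    # Step 4: dimensionality reduction by PCA
    pca = PCA(n_components=n_components).fit(X)
    X_hat = pca.transform(X)

    # Step 5: fit an MVG model (mean vector nu1 and covariance matrix Sigma1)
    nu1 = X_hat.mean(axis=0)
    Sigma1 = np.cov(X_hat, rowvar=False)
    return pca, nu1, Sigma1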

  43. Contents • Background introduction of IQA • Opinion-aware based NR-IQA methods • IL-NIQE: An opinion-unaware approach • Motivations • NSS-induced quality-aware features • Pristine model learning • IL-NIQE index • Experimental results • Summary

  44. IL-NIQE index • Step 1: partition the test image into patches • Step 2: extract a feature vector from each patch; thus we obtain a feature vector set $Y=[y_1,y_2,\ldots,y_{M_t}]$ • where $M_t$ denotes the number of patches extracted from the test image • Step 3: reduce the dimension of each $y_i$ as $\hat{y}_i=\Phi^{T}y_i$ • Step 4: fit an MVG to $\{\hat{y}_i\}$ and denote its covariance matrix as $\Sigma_2$

  45. IL-NIQE index • Step 5: the quality $q_i$ of patch i is measured as $q_i=\sqrt{(\nu_1-\hat{y}_i)^{T}\Big(\dfrac{\Sigma_1+\Sigma_2}{2}\Big)^{-1}(\nu_1-\hat{y}_i)}$ • Such a metric is inspired by the Bhattacharyya distance • Step 6: quality score pooling: the patch scores $\{q_i\}$ are pooled into a single quality score for the test image (see the sketch below)
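A minimal sketch of the online quality evaluation, given the pca, nu1, and Sigma1 learned in the previous sketch; the per-patch distance follows the Bhattacharyya-inspired form above, and average pooling is used here as one plausible pooling choice, not necessarily the exact pooling used in IL-NIQE.

import numpy as np

def il_niqe_style_score(test_image, extract_patch_features, pca, nu1, Sigma1,
                        patch_size=84):
    # Steps 1-2: partition the test image into patches and extract feature vectors
    H, W = test_image.shape[:2]
    patches = [test_image[i:i + patch_size, j:j + patch_size]
               for i in range(0, H - patch_size + 1, patch_size)
               for j in range(0, W - patch_size + 1, patch_size)]
    Y = np.array([extract_patch_features(p) for p in patches])

    # Step 3: reduce dimensionality with the same PCA learned on pristine images
    Y_hat = pca.transform(Y)

    # Step 4: fit an MVG to the test image's patch features (covariance Sigma2)
    Sigma2 = np.cov(Y_hat, rowvar=False)

    # Step 5: per-patch distance to the pristine model
    pooled_inv = np.linalg.pinv((Sigma1 + Sigma2) / 2.0)
    diffs = nu1 - Y_hat                               # one row per patch
    q = np.sqrt(np.einsum('ij,jk,ik->i', diffs, pooled_inv, diffs))

    # Step 6: pooling (average of patch scores; a larger score means worse quality)
    return float(q.mean())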

  46. Contents • Background introduction of IQA • Opinion-aware based NR-IQA methods • IL-NIQE: An opinion-unaware approach • Motivations • NSS-induced quality-aware features • Pristine model learning • IL-NIQE index • Experimental results • Summary

  47. Performance Metrics • How do we evaluate the performance of IQA indices? • Several benchmark datasets have been created • Reference images (free of quality distortion) are provided • For each reference image, a set of distorted images is created; they suffer from various kinds of quality distortions, such as Gaussian noise, JPEG compression, blur, etc.; suppose there are altogether N distorted images • For each distorted image there is an associated subjective quality score given by human subjects; thus, altogether we have N subjective scores $\{s_i\}$ • For the distorted images, we can also compute objective quality scores using an IQA index f, yielding N objective scores $\{o_i\}$ • f's performance is reflected by the rank-order correlation coefficients between $\{s_i\}$ and $\{o_i\}$

  48. Performance Metrics • How do we evaluate the performance of IQA indices? • Spearman rank-order correlation coefficient (SRCC): $\mathrm{SRCC}=1-\dfrac{6\sum_{i=1}^{N}d_i^2}{N(N^2-1)}$ • where $d_i$ is the difference between the i-th image's ranks in the subjective and objective evaluations

  49. Performance Metrics • How do we evaluate the performance of IQA indices? • Kendall rank-order correlation coefficient (KRCC): $\mathrm{KRCC}=\dfrac{n_c-n_d}{\frac{1}{2}N(N-1)}$ • where $n_c$ is the number of concordant pairs and $n_d$ is the number of discordant pairs (see the sketch below)
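A minimal sketch of computing SRCC and KRCC between subjective and objective scores with scipy; note that scipy's kendalltau implements the tau-b variant, which coincides with the formula above when there are no ties.

import numpy as np
from scipy.stats import spearmanr, kendalltau

def rank_correlations(subjective_scores, objective_scores):
    s = np.asarray(subjective_scores, dtype=float)
    o = np.asarray(objective_scores, dtype=float)
    srcc, _ = spearmanr(s, o)     # Spearman rank-order correlation coefficient
    krcc, _ = kendalltau(s, o)    # Kendall rank-order correlation coefficient
    return srcc, krcc

# Example: perfect monotonic agreement gives SRCC = KRCC = 1
print(rank_correlations([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))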

  50. Performance Metrics • Benchmark image datasets used
  Dataset   | No. of distorted images | No. of distortion types | Contains multiple distortions?
  TID2013   | 3000                    | 24                      | YES
  CSIQ      | 866                     | 6                       | NO
  LIVE      | 799                     | 5                       | NO
  LIVE MD1  | 225                     | 1                       | YES
  LIVE MD2  | 225                     | 1                       | YES
