
Current Trends in Image Quality Perception



Presentation Transcript


  1. Current Trends in Image Quality Perception Mason Macklem Simon Fraser University http://www.cecm.sfu.ca/~msmackle

  2. General Outline • Examine model of human visual system (HVS) • Examine properties of human perception of images • consider top-down/bottom-up distinction • Discuss combinations of current models, based on different perceptual phenomena

  3. Quality-based Model

  4. Quality-based Model • Pros: very nice theoretically; clearly-defined notions of quality; based on a theory of cognitive human vision; flexible for application-specific models • Cons: practical to implement?; subject-specific definition of quality; subjects are more accurate at relative than at absolute measurement

  5. Simplified approach

  6. Quality vs. Fidelity

  7. Perception vs. Semantic Processing • Perception: based on properties of the HVS; models the eye’s reaction to simple stimuli, e.g. mach band, sine grating, Gabor patch; assumes a linear model to extend tests to complex images • Semantic processing: based on properties of human attention; models subjects’ reactions to different types of image content, e.g. complex, natural images; bypasses responses to artificial stimuli

  8. Human Visual System Model • Breaks process of image-processing into interaction of contrast information with various parts of the eye • Motivates representation by discrete filters

  9. Cornea and lens focus light onto the retina • Retina consists of millions of rods and cones • rods: low-light vision • cones: normal lighting • rods outnumber cones by roughly 60:1 • Fovea consists of densely packed cones • processing focuses on foveal signals

  10. Motivation for Frequency Response Model • Errors in image reconstruction are differences in pixel values • Interpreted visually as differences in luminance and contrast values (i.e. physical differences) • Model the visual response to luminance and localized contrast to predict visible errors • assuming a linear system, this response is measurable from responses to simple stimuli

  11. Visible Differences Predictor (VDP) Scott Daly

  12. Contrast Sensitivity Function (CSF) • stimuli of increasing spatial frequency can be resolved only to a limited extent • CSF: represents the limits on detecting differences in increasingly high-frequency stimuli • specific to a given lens and viewing conditions • derived by measuring detection thresholds for gratings of increasing frequency
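The shape of such a curve can be sketched with the widely used Mannos–Sakrison analytic fit — a standard approximation in the literature, not the specific CSF used in Daly's VDP:

```python
import numpy as np

def csf(f):
    """Mannos-Sakrison approximation to the contrast sensitivity
    function; f is spatial frequency in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

freqs = np.linspace(0.1, 60.0, 600)
sens = csf(freqs)
peak = freqs[np.argmax(sens)]  # sensitivity peaks at mid frequencies
```

Sensitivity rises to a peak at mid frequencies (a few cycles/degree) and falls off rapidly at high frequencies, which is why fine-grained errors are hard to see.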

  13. Common Test Stimuli • Sine grating • Gabor patch • Mach band

  14. Some Common CSFs Daly’s CSF (VDP)

  15. Cortex Transform • Used to simulate the visual cortex’s sensitivity to orientation and frequency • Splits the frequency domain into 31 bands (five spatial-frequency bands × six orientations, plus a baseband), each of which is inverse-transformed separately
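The band split can be sketched by partitioning the 2-D frequency plane into octave-spaced rings crossed with orientation sectors. Hard-edged masks are an assumption of this sketch; the real cortex transform uses smooth "mesa" and "fan" filters:

```python
import numpy as np

def cortex_bands(shape, n_radial=5, n_orient=6):
    """Partition the frequency plane into a baseband plus
    n_radial * n_orient ring/orientation sectors
    (5 * 6 + 1 = 31 channels for Daly's choice)."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r = np.hypot(fx, fy)        # radial frequency
    theta = np.arctan2(fy, fx)  # orientation
    # fold orientation into [-pi/2, pi/2): a real image's spectrum
    # is conjugate-symmetric
    t = (theta + np.pi / 2) % np.pi - np.pi / 2
    edges = 0.5 * 2.0 ** -np.arange(n_radial + 1)  # octave-spaced rings
    masks = [r < edges[-1]]                        # baseband
    for i in range(n_radial):
        ring = (r >= edges[i + 1]) & (r < edges[i])
        for k in range(n_orient):
            lo = -np.pi / 2 + k * np.pi / n_orient
            masks.append(ring & (t >= lo) & (t < lo + np.pi / n_orient))
    return masks

bands = cortex_bands((64, 64))  # 31 boolean frequency-domain masks
```

Each mask selects one channel; inverse-transforming the masked spectrum gives that channel's image.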

  16. Masking Filter • Nonlinear filter to simulate masking due to local contrast • a function of background contrast • Masking is calculated separately using responses to a sine grating and to Gaussian noise • Uses a learning model to simulate prediction of background noise • similar noise across images lessens the overall masking effect
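A minimal version of the contrast-masking idea, with illustrative parameter values (the threshold and slope below are assumptions, not Daly's fitted values):

```python
import numpy as np

def threshold_elevation(mask_contrast, base_threshold=0.01, slope=0.7):
    """Simple contrast-masking model: a background below the detection
    threshold has no effect; above it, the threshold for seeing an
    error is elevated as a power of the masker contrast."""
    ratio = np.asarray(mask_contrast) / base_threshold
    return np.maximum(1.0, ratio ** slope)

weak = threshold_elevation(0.005)   # weak background: no elevation
strong = threshold_elevation(0.10)  # strong background raises the threshold
```

The nonlinearity captures why errors hidden in busy, high-contrast regions are less visible than the same errors on a flat background.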

  17. Probability Summation • Describes the increase in the probability of detection as signal contrast increases • Calculates the contrast difference between the two images in each of the 31 cortex bands • In most cases the signs agree at every pixel within a cortex band • use the agreed sign as the sign of the probability • The overall probability is the product over all 31 cortex-transformed images • See book for an example of a Detection Map
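The product rule on this slide can be written compactly; treating the cortex channels as independent detectors is the standard assumption behind it:

```python
import numpy as np

def probability_summation(band_probs):
    """Pixel-wise combination of per-band detection probabilities.
    If the cortex channels act as independent detectors, a difference
    is seen when any channel detects it: P = 1 - prod_k (1 - P_k)."""
    band_probs = np.asarray(band_probs, dtype=float)
    return 1.0 - np.prod(1.0 - band_probs, axis=0)

# two bands each detecting with probability 0.5 -> overall 0.75
p = probability_summation([0.5, 0.5])
```

Applied over all 31 cortex-transformed difference images, this yields the per-pixel Detection Map.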

  18. Bottom-up vs. Top-down • Bottom-up (stimulus-driven): e.g. search based on motion, colour, etc.; useful for efficient search; attention is attracted to objects rather than regions and driven by object properties • Top-down (task/motivation-based): e.g. search based on interpreting content; not as noticeable during search; motivation-based search still shows effects of object properties

  19. Saccades & Drifts • Rapid eye movements • occur 2-3 times/second • HVS responds to changes in stimuli • Saccades: search for a new ROI, or refocus on the current ROI • Drifts: slow movement away from the centre of the ROI to refresh the image on the retina (Veronique Ruggirello)

  20. Influences of Visual Attention • Measured with visual search experiments • subjects search for a target item among a group of distractors • the target is present in half of the samples • Two measures: • Reaction Time: time to find the object correctly vs. number of objects in the set • Accuracy: frequency of correct response vs. display time of the stimulus • Efficient search: reaction time is independent of set size
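The reaction-time measure is usually summarised as a search slope in ms per item; a near-zero slope marks an efficient search. A sketch with hypothetical data:

```python
import numpy as np

def search_slope(set_sizes, reaction_times_ms):
    """Least-squares slope of reaction time vs. set size (ms/item).
    Near-zero slopes indicate efficient ('pop-out') search; steep
    slopes indicate serial, attention-demanding search."""
    slope, _intercept = np.polyfit(set_sizes, reaction_times_ms, 1)
    return slope

# hypothetical data: a pop-out feature search vs. a conjunction search
efficient = search_slope([4, 8, 16, 32], [510, 505, 512, 508])
inefficient = search_slope([4, 8, 16, 32], [520, 640, 880, 1360])
```

The slides that follow (contrast, size, location, shape, ...) list stimulus properties that push a search toward the efficient end of this scale.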

  21. Contrast EOS increases with increasing contrast relative to background

  22. Size EOS increases as size difference increases

  23. Location EOS increases when desired objects are located near center

  24. Even when image content is not centrally located, natural tendency is to focus on center of image

  25. Shape EOS increases as shape-difference “increases”

  26. Spatial Depth EOS increases as spatial depth increases

  27. Motivation/Context

  28. Who is this guy? Where was this photo taken?

  29. People Attention is more sensitive to human shapes than to inanimate objects

  30. Complexity EOS increases as complexity of background decreases

  31. Other features • Color: • EOS will increase as color-difference increases • Eg. Levi’s patch on jeans • Edges: • Edges attended more than textured regions • Predictability: • Attention directed towards familiar objects • Motion: • EOS will increase as motion-difference increases

  32. Region-of-Interest Importance Map (ROI) • Visual attention is directed to objects, rather than regions • Treats the image as a collection of objects • Weights error within objects according to various types of attentive processes • Results in an Importance Map • weights correspond to the probability that a location will be attended directly

  33. ROI Design Model

  34. Image Segmentation

  35. Contrast

  36. Size

  37. Shape

  38. Location

  39. Background/Foreground

  40. W. Osberger
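One way the per-feature maps above might be fused, loosely following Osberger's squared-sum combination; the equal feature weighting and the toy scores below are simplifying assumptions of this sketch:

```python
import numpy as np

def importance_map(feature_maps):
    """Fuse per-object feature scores (contrast, size, shape, location,
    background/foreground, ...) into one importance map by squaring
    and summing, then normalising to [0, 1]."""
    combined = np.sum([np.asarray(f, dtype=float) ** 2
                       for f in feature_maps], axis=0)
    return combined / combined.max()

# toy 'image' of three segmented objects, scored 0..1 on two features
contrast = np.array([0.9, 0.2, 0.4])
location = np.array([0.5, 0.9, 0.1])
imp = importance_map([contrast, location])
```

Squaring before summing lets an object that scores very highly on a single feature dominate objects that are merely average on several.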

  41. Notes on ROI • VDP Detection Map: probability that existing pixel differences will be detected • ROI Importance Map: probability that existing visible pixel differences will be attended • Overall probability of detection should be a combination of both factors • Open question: single number for either model?
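One plausible combination of the two maps, under an assumed independence of visibility and attention; the slide above leaves the actual combination rule open:

```python
import numpy as np

def attended_detection(p_detect, p_attend):
    """A pixel difference contributes only if it is both visible
    (VDP Detection Map) and attended (ROI Importance Map); treating
    the two events as independent gives a simple product."""
    return np.asarray(p_detect, dtype=float) * np.asarray(p_attend, dtype=float)

def single_number(p_map):
    """One candidate scalar summary for the open question: the
    worst-case (maximum) probability that a difference is noticed."""
    return float(np.max(p_map))

score = single_number(attended_detection([0.8, 0.3], [0.5, 0.9]))
```

Other reductions (mean, Minkowski pooling) are equally defensible, which is precisely why the single-number question remains open.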
