Manipulating Scale-Dependent Perception of Images

The purpose of most images is to effectively convey information. Implicit in this purpose is that the recipient of the information is a human observer, whose visual system converts raw sensory inputs into perceived appearance. The appearance of an image depends not only on the image itself, but also on the conditions under which it is viewed and on the response of the human visual system to those inputs. This thesis examines the scale-dependent nature of image appearance, where the same stimulus can appear different when viewed at varying scales, a phenomenon arising from the mechanisms responsible for processing spatial vision in the brain. In particular, this work investigates changes in the perception of blur and contrast that result from the image being represented by different portions of the viewer's visual system as image scale changes. The methods take inspiration from the fundamental organization of spatial image perception into multiple parallel channels for processing visual information, and employ models of human spatial vision to more accurately control the appearance of images under changing viewing conditions. The result is a series of methods for understanding the blur and contrast present in images and for manipulating the appearance of those qualities in a perceptually meaningful way.

Matthew Trentacoste

November 04, 2011

Transcript

  1. Spatial vision • HVS processes information with multiple, parallel, band-limited channels • Attributes of each channel differ • The angular resolution of an image feature determines which channel represents it • Operations that move content from one channel to another can alter image perception
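The multi-channel idea can be illustrated with a band-pass decomposition: splitting an image into band-limited layers is a common stand-in for the HVS's parallel spatial-frequency channels. This is an illustrative sketch only; `bandpass_channels`, its parameters, and the octave spacing are assumptions, not the thesis's model.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge replication (numpy only)."""
    radius = int(3 * sigma + 0.5)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def bandpass_channels(img, n=4, sigma0=1.0):
    """Split an image into n band-pass layers plus a low-pass residual.

    Each layer holds a limited range of spatial frequencies, a rough
    stand-in for the parallel band-limited channels of the HVS; the
    layers sum exactly back to the input image."""
    img = np.asarray(img, dtype=float)
    blurs = [img] + [gaussian_blur(img, sigma0 * 2**j) for j in range(n)]
    bands = [blurs[j] - blurs[j + 1] for j in range(n)]
    bands.append(blurs[-1])  # low-pass residual
    return bands
```

Because an image feature lands in the layer matching its angular frequency, rescaling the image moves content between layers, which is the mechanism the slides describe.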
  2. Overview • Spatially-variant blur estimation • Synthetic DOF for mobile devices • Blur-aware image downsampling • Scale-dependent perception of countershading • Defocus dynamic range expansion
  4. Blur estimation • Calibrate the method of Samadani et al. to provide blur estimates in absolute units • Relate the width of a Gaussian profile to the peak value of its derivative (example edges: 0 px vs. 15 px blur)
  5. Blur estimation g(x, σ) = (1 / √(2πσ²)) e^(−x² / (2σ²)) (figure: edge, gradient magnitude with width σ, downsampled scale space)
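The stated relation can be checked numerically: a unit step edge blurred by a Gaussian of width σ has a gradient whose peak magnitude is g(0, σ) = 1/√(2πσ²), so σ = 1/(√(2π) · peak). A small numpy sketch on a synthetic edge (illustrative, not the calibrated Samadani et al. pipeline):

```python
import numpy as np

sigma_true = 4.0                               # known blur width, in pixels

# Unit step edge blurred by a Gaussian of width sigma_true.
x = np.arange(-50, 51, dtype=float)
edge = (x >= 0).astype(float)
t = np.arange(-25, 26, dtype=float)
kernel = np.exp(-t**2 / (2 * sigma_true**2))
kernel /= kernel.sum()
blurred = np.convolve(np.pad(edge, 25, mode="edge"), kernel, mode="valid")

# For a unit edge, the gradient profile is g(x, sigma), whose peak is
# 1/sqrt(2*pi*sigma^2); invert that to read the blur width off the peak.
peak = np.abs(np.gradient(blurred)).max()
sigma_est = 1.0 / (np.sqrt(2.0 * np.pi) * peak)
```

With σ = 4 px the estimate lands within a few percent of the true width; the slides apply the same relation per edge across a downsampled scale space.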
  10. Noise sensitivity (panels: noise-free image + gradients, noisy image + gradients, actual vs. estimated blur by noise level, minimum reliable scale map)
  12. Blur synthesis • Obtain a map of the estimated blur in the image • Modify it to represent the pattern of the desired f-number • Solve for the blur to add given the blur already present (panels: narrower DOF result, original, blur estimate, lens blur model)
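Assuming both the existing and target blurs are modeled as Gaussians (the slides note Gaussian blur is used for efficiency), sequential blurs combine in quadrature, so the "blur to add" step reduces to a per-pixel closed form. A minimal sketch (function name hypothetical):

```python
import math

def blur_to_add(sigma_target, sigma_present):
    """Gaussian blurs applied in sequence combine in quadrature:
    sigma_total^2 = sigma_present^2 + sigma_added^2.
    Solve for the blur that must be added to reach the target width."""
    if sigma_target <= sigma_present:
        return 0.0  # adding blur can only widen the profile
    return math.sqrt(sigma_target**2 - sigma_present**2)
```

For example, a pixel estimated at 3 px of blur that should show 5 px under the desired f-number needs 4 px of added Gaussian blur; pixels already blurrier than the target are left untouched.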
  13. Benefits and limitations • Narrower DOF images than optics alone • Reasonable quality, efficient enough to be implemented on mobile devices • Gaussian blur used when synthesizing the new DOF, trading accuracy for efficiency
  14. Motivation • Sensors (3-22 Mp) have higher resolution than displays (about 2 Mp) • Displaying an image implies downsizing it • Conventional downsizing doesn't accurately represent image appearance, so perception of the image changes
  17. Perceptual study • Blur-matching experiment • Given a large image with a reference amount of blur σr present • Adjust the blur in smaller images to match the appearance of the large one • Repeated for reference blurs between 0 and 0.26 visual degrees and downsamples of 2x, 4x, 8x
  19. Matching results • Matching blur is larger than the reference blur: smaller images appear sharper • Curves level off at larger blurs and downsample factors, where less blur is sufficient to convey the appearance • Viewing setup had a Nyquist limit of 30 cpd, so the results are not due to limited resolution in pixels but to visual angle (x-axis: full-size image blur radius σr [vis deg])
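The conversions between pixels, visual degrees, and cycles per degree behind these results are easy to sketch. The pixel pitch and viewing distance below are illustrative assumptions (not necessarily the study's hardware), chosen so the display's Nyquist limit comes out near the quoted 30 cpd:

```python
import math

def pixels_to_visual_degrees(n_pixels, pixel_pitch_mm, distance_mm):
    """Visual angle subtended by n_pixels at the given viewing distance."""
    size_mm = n_pixels * pixel_pitch_mm
    return math.degrees(2.0 * math.atan(size_mm / (2.0 * distance_mm)))

def nyquist_cpd(pixel_pitch_mm, distance_mm):
    """Display Nyquist limit: 0.5 cycles per pixel, in cycles per degree."""
    return 0.5 / pixels_to_visual_degrees(1, pixel_pitch_mm, distance_mm)

# Assumed setup: 0.25 mm pixel pitch viewed from 86 cm.
limit = nyquist_cpd(0.25, 860.0)   # roughly 30 cpd
```

The same per-pixel angle converts a blur radius measured in pixels into the visual degrees used on the x-axis, which is why the results transfer across viewing distances.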
  21. Evaluation (comparison panels: original; 2x naive vs. 2x blur-aware; 4x naive vs. 4x blur-aware)
  22. Evaluation (second image, same comparison: original; 2x naive vs. 2x blur-aware; 4x naive vs. 4x blur-aware)
  23. Benefits and limitations • Fully automatic image resizing operator that uses a perceptual metric to preserve image appearance • Because the effect is due to the HVS, the same metric can account for changes in appearance due to viewing distance • The same limitations of the blur estimate apply
  24. Results (plots: profile magnitude h vs. profile width m [log10 deg], for edge contrasts high/med/low and scenes Coast, Palm beach, Building, with model fit and scallop threshold) • U-shaped curve • Subjects tolerated significantly more contrast for narrow/wide countershading • Trough in the middle where almost any countershading was deemed objectionable • Does not correspond to any known aspect of visual perception
  25. Scale-aware displays • Determine the distance of the viewer using head-tracking • Present images for the specific viewing conditions • Needs a headset; only works for one viewer
  27. Benefits and limitations • Model of perceived countershading • Several applications for introducing countershading when displaying content at different sizes • Limited by the small set of study conditions • Findings not explainable by existing perceptual models; best used as a heuristic until a larger study is conducted (plot: countershading magnitude vs. spatial frequency, with regions of indistinguishable and objectionable (halo) countershading)
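Countershading can be sketched in its classic unsharp-masking form: subtract a Gaussian low-pass whose width plays the role of the profile width m, and add the difference back scaled by the profile magnitude h. This is a minimal 1-D illustration with hypothetical names, not the thesis's stimulus-generation code:

```python
import numpy as np

def countershade(signal, width_sigma, magnitude_h):
    """Add a countershading profile via unsharp masking: the Gaussian
    width sets the profile width (m) and magnitude_h scales the added
    difference (h)."""
    radius = int(3 * width_sigma + 0.5)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * width_sigma**2))
    k /= k.sum()
    low = np.convolve(np.pad(signal, radius, mode="edge"), k, mode="valid")
    return signal + magnitude_h * (signal - low)
```

Applied to a step edge this produces the familiar bright and dark gradients on either side of the edge; per the U-shaped results above, the same h can be invisible at narrow or wide widths yet read as an objectionable halo at intermediate ones.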
  28. Experiment • Filters: normal aperture, Gaussian, Veeraraghavan, Levin, Zhou • Deconvolution: Wiener filtering, Richardson-Lucy, Bando, Levin (scenes: Morning, Night)
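Of the deconvolution methods listed, Wiener filtering is the simplest to sketch. A minimal frequency-domain version, assuming a known PSF applied circularly and a scalar noise-to-signal constant K (names and the 1-D test setup are illustrative):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, K=1e-3):
    """Classic Wiener deconvolution: conj(H) / (|H|^2 + K) in frequency.
    K trades noise amplification against residual blur and ringing."""
    H = np.fft.fft(kernel, n=len(blurred))
    Y = np.fft.fft(blurred)
    G = np.conj(H) / (np.abs(H)**2 + K)   # Wiener restoration filter
    return np.real(np.fft.ifft(G * Y))

# Demo: blur a sharp 1-D signal with a Gaussian PSF, then restore it.
n = 64
x = np.zeros(n); x[20:30] = 1.0
t = np.arange(n) - n // 2
psf = np.exp(-t**2 / (2 * 1.0**2)); psf /= psf.sum()
psf = np.fft.ifftshift(psf)                     # peak at index 0 (circular)
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf, K=1e-8)
```

With no noise a tiny K behaves like an inverse filter and recovery is nearly exact; with real sensor noise K must grow, which is one reason the more complex methods in the list outperform it at a higher computational cost.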
  29. Results, filter size (filter = Zhou, noise = 0, radius = 1) Wiener, Richardson-Lucy, Bando, Levin
  30. Results, filter size (filter = Zhou, noise = 0, radius = 5) Wiener, Richardson-Lucy, Bando, Levin
  31. Results, filter size (filter = Zhou, noise = 0, radius = 16) Wiener, Richardson-Lucy, Bando, Levin
  32. Evaluation • Effectiveness is scene-dependent: can work for small bright regions, but not large ones • More complex algorithms work better, at higher computational cost • No filter + deconvolution combination was able to reduce the dynamic range without degrading image quality • 15 px limit to blur radius
  33. Contributions • Framework of scale-dependent image perception • Mapping image features to visual channels via display size, resolution and viewing conditions • Applications of this perspective to manipulation of blur and contrast in images
  34. Future work • Improved separation of texture/noise • Improved blur estimation using other sensors in mobile devices • Improved blur synthesis using autofocus sweep
  35. Future work • Disparity between the weighting of deconvolution algorithms and visual perception • Non-linear forms of deconvolution that distribute error closer to the luminance quantization of the HVS
  36. Future work • Preserve the appearance of other attributes when resizing • More comprehensive study of objectionable countershading, accounting for more conditions and contrast masking • Better means of displaying scale-dependent content
  37. Future work • Comprehensive model of the related perception of blur and contrast • Explore lateral inhibition between visual channels • Develop a framework, similar to color management, for addressing perception of images at different sizes