Blur-Aware Image Downsizing

Resizing to a lower resolution can alter the appearance of an image. In particular, downsampling an image causes blurred regions to appear sharper. It is useful at times to create a downsampled version of the image that gives the same impression as the original, such as for digital camera viewfinders. To understand the effect of blur on image appearance at different image sizes, we conduct a perceptual study examining how much blur must be present in a downsampled image to be perceived the same as the original. We find a complex, but mostly image-independent relationship between matching blur levels in images at different resolutions. The relationship can be explained by a model of the blur magnitude analyzed as a function of spatial frequency. We incorporate this model in a new appearance-preserving downsampling algorithm, which alters blur magnitude locally to create a smaller image that gives the best reproduction of the original image appearance.


Matthew Trentacoste

April 12, 2011

Transcript

  1. Blur-Aware Image Downsampling Matthew Trentacoste Rafał Mantiuk Wolfgang Heidrich University

    of British Columbia Bangor University
  2. Is the photograph blurry? 2

  3. Is the photograph blurry? 3

  8. Motivation • Sensors have higher resolution than displays • Displaying an image

    implies downsizing it • Conventional downsizing doesn't accurately represent image appearance, so perception of the image changes • Users can make inaccurate quality assessments when not viewing image pixels 1-to-1 with display pixels • 2 Mp displays vs. 3-22 Mp sensors
  9. Motivation • Want to preserve the appearance of blur when downsampling

    • Perceptual experiment: the relation between the blur present and its perception at different sizes • New resizing operator that amplifies the blur present to ensure the result is perceived the same as the original
  10. Organization • Related work • Experiment design + results •

    Model of perceived blur • Blur estimation • Accurate blur synthesis • Evaluation + conclusion 6
  11. Related work • Blur perception [Cufflin 2007][Chen 2009] [Mather 2002][Held

    2010] • Intelligent upsampling [Fattal 2007][Kopf 2007][Shan 2008] • Seam carving [Avidan 2007][Rubinstein 2009,2010]
  12. Related work • Blind deconvolution [Lam 2000][Fergus 2006] • Spatially-variant

    blur estimation [Elder 1998][Liu 2008] • Blur magnification [Bae 2007][Samadani 2007] 8
  13. Perceptual study • Blur-matching experiment • Given a large image

    with a reference amount of blur σr present • Adjust the blur σm in smaller images to match the appearance of the large one • Repeated for σr between 0 and 0.26 visual degrees and downsampling factors of 2x, 4x, 8x
  15. Perceptual study • Add uniform synthetic blur to full-size images

    with no noticeable blur present • Same process for thumbnails, using nearest-neighbor sampling • 5 images selected from a pre-study of 20 -- 150 conditions, trial subset of 30, repeated 3x each • 24 observers participated in over 2100 trials
  16. Matching results • Matching blur is larger than reference blur: smaller

    images appear sharper • Curves level off at larger blurs -- the downsampling blur is sufficient to convey the appearance • Reported values include any blur needed to remove aliasing artifacts • Viewing setup had a Nyquist limit of 30 cpd, so results are not due to limited resolution in pixels but to visual angle • Plot: matching blur vs. full-size image blur radius σr [vis deg]
  20. Blur appearance model • Measured data is well predicted by the

    anti-aliasing filter ςd and the model term S, analyzed in 1/ς spatial frequencies • After removing ςd, we model S as a linear function in 1/ς • The full model provides an accurate and plausible fit of the measured data in the spatial domain • Model: ςm = sqrt(ςd^2 + S^2), with S(ςr, d) = 1/2^(-0.893·log2(d)) + 0.197·(1/ςr - 1.64) + 1.89 • Plots: downsampled vs. full-size image blur cut-off frequency [cycles per degree] and blur radius [vis deg], data and model for 2x, 4x, 8x
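The blur appearance model can be evaluated numerically. This is a minimal sketch assuming the formulas as reconstructed from the slide, ςm = sqrt(ςd² + S²) and S(ςr, d) = 1/2^(-0.893·log2(d)) + 0.197·(1/ςr - 1.64) + 1.89; the function names are mine, not the paper's:

```python
import math

def slope_S(cutoff_r_cpd, d):
    # S(ς_r, d) = 1 / 2^(-0.893·log2(d)) + 0.197·(1/ς_r - 1.64) + 1.89
    # Linear in 1/ς_r, and growing with the downsample factor d.
    return (1.0 / 2.0 ** (-0.893 * math.log2(d))
            + 0.197 * (1.0 / cutoff_r_cpd - 1.64) + 1.89)

def matching_cutoff(cutoff_d_cpd, S):
    # ς_m = sqrt(ς_d^2 + S^2): combine the anti-aliasing cut-off ς_d
    # with the model term S.
    return math.sqrt(cutoff_d_cpd ** 2 + S ** 2)

# Stronger downsampling demands a larger model term:
S2 = slope_S(10.0, 2)   # 2x downsample, reference cut-off 10 cpd
S8 = slope_S(10.0, 8)   # 8x downsample
```

This reflects the qualitative finding of the study: the larger the downsampling factor, the more blur must be present in the thumbnail to match the original's appearance.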
  21. Blur estimation • Spatially-variant estimate of the blur present

    at each pixel of the image (e.g. 0px vs. 15px blur) • Calibrate the method of Samadani et al. to provide an estimate of blur in absolute units • Downsampling approximates a blur-free image • Relation between the width of a Gaussian profile and the peak value of its derivative
  25. Blur estimation • Figure: an edge and its derivative profile at 2x, 4x, 8x downsampled scales

  26. Blur estimation • Gaussian g(x, σ) = 1/sqrt(2πσ^2) · e^(-x^2/(2σ^2))

    • Edge and its gradient-magnitude profile of width σ
  28. Blur estimation • Gaussian g(x, σ) = 1/sqrt(2πσ^2) · e^(-x^2/(2σ^2))

    • Edge and its gradient-magnitude profile of width σ • Downsampled scale space
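The estimator relies on the fact above: the peak gradient magnitude of a blurred edge encodes its blur width, since the gradient of a Gaussian-blurred step is the Gaussian itself with peak 1/sqrt(2πσ²). A quick numerical check (a sketch; the 1-D unit step edge and pixel-unit sigma are my assumptions):

```python
import numpy as np

def peak_gradient_of_blurred_edge(sigma, n=512):
    """Blur a unit step edge with a Gaussian of width sigma (pixels)
    and return the peak gradient magnitude."""
    x = np.arange(n)
    edge = (x >= n // 2).astype(float)                 # unit step edge
    kernel = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
    kernel /= kernel.sum()                             # normalized Gaussian
    blurred = np.convolve(edge, kernel, mode="same")
    return np.abs(np.diff(blurred)).max()

# Predicted peak is 1/sqrt(2*pi*sigma^2); inverting this relation maps a
# measured peak gradient back to the blur width sigma.
sigma = 4.0
measured = peak_gradient_of_blurred_edge(sigma)
predicted = 1.0 / np.sqrt(2 * np.pi * sigma ** 2)
```

The measured and predicted peaks agree closely, which is what lets the method read blur magnitude off gradient peaks across the scale space.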
  35. Blur estimation • Original gradient peak: 1/(2πj^2) • Thumbnail gradient

    peak: 1/(2π((j/d)^2 + (βj)^2)) • Setting 1/(2π((j/d)^2 + (βj)^2)) = γ · 1/(2πj^2) gives γ = 1/(1/d^2 + β^2) • j: scale-space level, β: quantization term, d: downsample factor
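The ratio of thumbnail to original gradient peaks can be checked directly. A small sketch (function names are illustrative) showing that γ is independent of the scale-space level j, which is what allows a single calibration factor:

```python
import math

def peak_original(j):
    # Peak gradient for a blur of j pixels in the original: 1/(2*pi*j^2)
    return 1.0 / (2.0 * math.pi * j ** 2)

def peak_thumbnail(j, d, beta):
    # Peak gradient in the thumbnail: 1/(2*pi*((j/d)^2 + (beta*j)^2))
    return 1.0 / (2.0 * math.pi * ((j / d) ** 2 + (beta * j) ** 2))

def gamma(d, beta):
    # Closed form of the ratio: 1/(1/d^2 + beta^2). Note j cancels out.
    return 1.0 / (1.0 / d ** 2 + beta ** 2)

# The peak ratio matches gamma at any level j:
r3 = peak_thumbnail(3, 4, 0.1) / peak_original(3)
r7 = peak_thumbnail(7, 4, 0.1) / peak_original(7)
```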
  36. Blur estimation • Scale original image gradients by γ to

    align with the scale space • If the jth level is the closest match to the original gradients, this implies a blur of j pixels in the original image • This ensures the estimated blur corresponds to an absolute measure in pixels
  42. Blur synthesis • The model specifies the desired blur; given the blur

    present, determine how much to add • The thumbnail created by standard downsampling already includes anti-aliasing, so use the model S instead of ςm • Given existing blur σo, the blur to add is σa = sqrt((S(σo·p^-1, d)·p/d)^2 - σo^2)
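The subtraction in quadrature works because Gaussian blurs compose in quadrature: blurring with σ1 and then σ2 equals a single blur of sqrt(σ1² + σ2²). A quick numerical check (sigma values in pixels, chosen arbitrarily):

```python
import numpy as np

def gauss(sigma, n=257):
    # Normalized 1-D Gaussian kernel of width sigma (pixels).
    x = np.arange(n) - n // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

# Blurring with sigma=2 then sigma=3: the composed kernel's variance
# equals 2**2 + 3**2 = 13, i.e. an effective sigma of sqrt(13).
k = np.convolve(gauss(2.0), gauss(3.0))
x = np.arange(k.size) - k.size // 2
variance = (k * x ** 2).sum()

# Hence the blur to add to reach a target sigma_t from an existing sigma_o:
def blur_to_add(sigma_t, sigma_o):
    return np.sqrt(max(sigma_t ** 2 - sigma_o ** 2, 0.0))
```

This is the identity behind the σa formula: the existing blur σo is subtracted in quadrature from the target so that the combined result lands on the model's desired blur.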
  43. Blur synthesis • To produce the final image, blur each level of the

    scale space by the corresponding σa, linearly blending levels for non-integer σa • Blur map + scale space = final result
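That blend step can be sketched as follows, assuming `levels[j]` holds the thumbnail blurred by j pixels and `blur_map` holds the per-pixel target blur (all names hypothetical, a minimal sketch rather than the paper's implementation):

```python
import numpy as np

def blend_scale_space(levels, blur_map):
    """Per-pixel selection from a blur scale space, linearly blending the
    two nearest integer levels for non-integer target blurs."""
    stack = np.stack(levels)                          # (J, H, W)
    target = np.clip(blur_map, 0, len(levels) - 1)
    lo = np.floor(target).astype(int)                 # lower level index
    hi = np.minimum(lo + 1, len(levels) - 1)          # upper level index
    t = target - lo                                   # blend weight in [0, 1)
    low = np.take_along_axis(stack, lo[None], axis=0)[0]
    high = np.take_along_axis(stack, hi[None], axis=0)[0]
    return (1 - t) * low + t * high

# Tiny example: three constant "levels" with values 0, 1, 2; the output
# reproduces the fractional blur map exactly.
levels = [np.full((2, 2), float(v)) for v in range(3)]
blur_map = np.array([[0.5, 1.0], [1.5, 2.0]])
out = blend_scale_space(levels, blur_map)
```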
  44. Evaluation • Comparison: Naive, Samadani γ=4, Samadani γ=0.5, Blur-Aware

  45. Evaluation Naive Blur-Aware Naive Blur-Aware 21

  46. Evaluation • Side-by-side comparisons: original, 2x naive, 2x

    blur-aware, 4x naive, 4x blur-aware
  47. Conclusion • Fully automatic image resizing operator that uses a

    perceptual metric to preserve image appearance • The effect is due to the HVS: the same metric can account for changes in appearance due to viewing distance • Future work: other blur models, such as camera optics, to enhance blur; extending the principle to other attributes such as noise or contrast
  48. Thanks! ( you and our sponsors ) Research Chair