Image dehazing deals with removing the undesired loss of visibility in outdoor images caused by the presence of fog. Retinex is a color vision model mimicking the ability of the Human Visual System to robustly discount varying illuminations when observing a scene under different spectral lighting conditions. Retinex has been widely explored in the computer vision literature for image enhancement and other related tasks. While these two problems are apparently unrelated, the goal of this work is to show that they can be connected by a simple linear relationship. Specifically, most Retinex-based algorithms have the characteristic feature of always increasing image brightness, which makes them ideal candidates for effective image dehazing: Retinex can be applied directly to a hazy image whose intensities have been inverted. In this paper, we provide a theoretical proof that Retinex on inverted intensities is a solution to the image dehazing problem. Comprehensive qualitative and quantitative results indicate that several classical and modern implementations of Retinex can be transformed into competitive image dehazing algorithms that perform on par with more complex fog removal methods, and can overcome some of the main challenges associated with this problem.
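The inversion trick described in the abstract can be sketched in a few lines. The single-scale Retinex below is a minimal illustrative stand-in, not any of the specific classical or modern Retinex implementations the paper evaluates:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=30.0):
    """Minimal single-scale Retinex: log of the image minus log of its
    Gaussian-smoothed surround, rescaled to [0, 1]."""
    eps = 1e-6
    log_r = np.log(img + eps) - np.log(gaussian_filter(img, sigma) + eps)
    log_r -= log_r.min()
    return log_r / (log_r.max() + eps)

def dehaze_via_retinex(hazy, retinex_fn=single_scale_retinex):
    """Dehazing via the duality in the abstract: invert intensities,
    apply a Retinex algorithm, invert back. `hazy` is a float image in [0, 1]."""
    return 1.0 - retinex_fn(1.0 - hazy)
```

Any Retinex implementation with the same float-in/float-out convention can be passed as `retinex_fn`.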

Galdran, Adrian; Alvarez-Gila, Aitor; Bria, Alessandro; Vazquez-Corral, Javier; Bertalmío, Marcelo
https://arxiv.org/abs/1712.02754

Derivatives and Inverse of Cascaded Linear+Nonlinear Neural Models (2017)
In vision science, cascades of Linear+Nonlinear transforms are very successful in modeling a number of perceptual experiences [1]. However, the conventional literature usually focuses only on describing the forward input-output transform. In this work we instead present the mathematics of such cascades beyond the forward transform, namely the Jacobian matrices and the inverse. These analytic results are important for three reasons: (a) they are strictly necessary in new experimental methods based on the synthesis of visual stimuli with interesting geometrical properties, (b) they are convenient for learning the model from classical experiments or alternative goal optimization, and (c) they are a promising model-based alternative to blind machine-learning methods for neural decoding. Moreover, the statistical properties of the neural model become more intuitive when expressed in this kind of vector formulation. The theory is checked by building and testing a vision model that actually follows the modular program suggested in [1]. Our differentiable and invertible model consists of a cascade of modules that account for brightness, contrast, energy masking, and wavelet masking. To stress the generality of this modular setting we show examples where some of the canonical Divisive Normalization modules are substituted by equivalent modules such as the Wilson-Cowan interaction model [2, 3] (at the V1 cortex) or a tone-mapping model [4] (at the retina). In the Discussion we address three illustrative applications. First, we show how the Jacobian (w.r.t. the input) plays a major role in setting up the model by allowing novel psychophysics based on the geometry of the neural representation (as in [5]). Second, we show how the Jacobian (w.r.t. the parameters) can be used to find the model that best reproduces classical psychophysics of image distortion. In fact, thanks to the presented derivatives, this cascade of isomorphic canonical modules has been psychophysically tuned to work together for the first time. Third, we show how the analytic inverse may improve regression-based visual brain decoding.
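The chain-rule structure behind the Jacobians and the analytic inverse can be sketched with generic Linear+Nonlinear stages. The signed power-law nonlinearity below is an illustrative assumption, not the divisive-normalization, Wilson-Cowan, or tone-mapping modules of the paper:

```python
import numpy as np

class LNStage:
    """One Linear+Nonlinear stage: y = f(W x), with elementwise f.
    Here f(u) = sign(u) |u|^g, an illustrative invertible nonlinearity."""
    def __init__(self, W, gamma):
        self.W, self.g = np.asarray(W, float), gamma
    def forward(self, x):
        u = self.W @ x
        return np.sign(u) * np.abs(u) ** self.g
    def jacobian(self, x):
        u = self.W @ x
        df = self.g * np.abs(u) ** (self.g - 1.0)  # f'(u), elementwise (u != 0)
        return df[:, None] * self.W                # diag(f'(u)) @ W
    def inverse(self, y):
        u = np.sign(y) * np.abs(y) ** (1.0 / self.g)
        return np.linalg.solve(self.W, u)          # undo the linear part

def cascade_forward(stages, x):
    for s in stages:
        x = s.forward(x)
    return x

def cascade_jacobian(stages, x):
    """Chain rule: J = J_n(x_{n-1}) @ ... @ J_1(x_0)."""
    J = np.eye(x.size)
    for s in stages:
        J = s.jacobian(x) @ J
        x = s.forward(x)
    return J

def cascade_inverse(stages, y):
    for s in reversed(stages):
        y = s.inverse(y)
    return y
```

Swapping in other invertible modules only requires each stage to expose the same `forward`/`jacobian`/`inverse` interface.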

In this paper we consider an image decomposition model that provides a novel framework for image denoising. The model computes the components of the image to be processed in a moving frame that encodes its local geometry (directions of gradients and level lines). The strategy we develop is then to denoise the components of the image in the moving frame in order to preserve its local geometry, which would be more affected if the image were processed directly. Experiments on a whole image database, tested with several denoising methods, show that this framework can provide better results than denoising the image directly, in terms of both PSNR and SSIM [27] metrics.
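As a rough illustration of the decomposition idea (a simplified sketch, not the authors' exact model), one can build a per-pixel orthonormal frame from a smoothed gradient field and express the image gradient in that frame; the paper's full framework also specifies how to reconstruct the image from denoised components:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def moving_frame_components(img, sigma=2.0):
    """Express the image gradient in a per-pixel orthonormal frame (n, t),
    with n along the smoothed gradient (across edges) and t along the
    level lines. Returns the two gradient components in that frame."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    gx_s = gaussian_filter(gx, sigma)   # smooth to get a stable frame
    gy_s = gaussian_filter(gy, sigma)
    norm = np.sqrt(gx_s ** 2 + gy_s ** 2) + 1e-8
    nx, ny = gx_s / norm, gy_s / norm   # unit gradient direction n
    c_n = gx * nx + gy * ny             # derivative across edges
    c_t = -gx * ny + gy * nx            # derivative along level lines
    return c_n, c_t
```

On a pure intensity ramp, all the signal ends up in the across-edge component and the level-line component is zero, which is what makes denoising the components geometry-preserving.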

Ghimpeteanu, Gabriela; Batard, Thomas; Bertalmío, Marcelo; Levine, Stacey
http://ip4ec.upf.edu/ImageDenoising

Duality Principle for Image Regularization and Perceptual Color Correction Models (2015)
ISBN 978-3-319-18461-6
Batard, Thomas; Bertalmío, Marcelo
http://ip4ec.upf.edu/DualityPrinciple

Dynamic range, light scatter in the eye and perceived image quality (2015)

Light scatter in the eye can substantially reduce the dynamic range (DR) of the retinal signal. We quantify this effect by convolving high-DR images with a point spread function that models eye scattering. We find that the resulting retinal DR can be described as the original DR raised to a power p. For images viewed against a dark background the exponent p is 0.68, while for images viewed against a dim background p is 0.45; this implies that a high-DR image spanning seven orders of magnitude will span five in dark-background conditions, but only three in dim-background conditions. We also investigate the perceived quality of high-DR images presented on an OLED monitor. We find that the highest quality is perceived when a dark background is used, but also note that the difference is most apparent for low-key images. We investigate whether this effect can be explained by current image quality models.
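The power-law summary above can be checked directly: with the dynamic range expressed in linear units, the retinal DR is the scene DR raised to p, so the number of orders of magnitude simply multiplies by p:

```python
def retinal_orders(orders, p):
    """Orders of magnitude after scatter: log10(DR**p) = p * log10(DR)."""
    return p * orders

# A scene spanning seven orders of magnitude, as in the abstract:
dark = retinal_orders(7, 0.68)  # ~4.8 -> about five orders on the retina
dim = retinal_orders(7, 0.45)   # ~3.2 -> about three orders on the retina
```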

Kane, David; Bertalmío, Marcelo
http://ip4ec.upf.edu/ECVP_Poster

Denoising an Image by Denoising its Components in a Moving Frame (2014)

In this paper, we provide a new non-local method for image denoising. The key idea we develop is to denoise the components of the image in a well-chosen moving frame instead of the image itself. We prove the relevance of our approach by showing that the PSNR of a grayscale noisy image is lower than the PSNR of its components. Experiments show that applying the Non-Local Means algorithm of Buades et al. [5] to the components provides better results than applying it directly to the image.

Ghimpeteanu, Gabriela; Batard, Thomas; Bertalmío, Marcelo; Levine, Stacey
http://ip4ec.upf.edu/ImageDenoisingByComponents

Denoising an Image by Denoising its Curvature Image (2013)

In this article we argue that when an image is corrupted by additive noise, its curvature image is less affected by it, i.e. the PSNR of the curvature image is larger. We speculate that, given a denoising method, we may obtain better results by applying it to the curvature image and then reconstructing a clean image from it, rather than denoising the original image directly. Numerical experiments confirm this for several PDE-based and patch-based denoising algorithms.
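As a minimal sketch of the quantities involved, the curvature image kappa = div(grad u / |grad u|) and the PSNR used in the comparison can be computed as follows; the central-difference discretization here is an assumption, not necessarily the authors' numerical scheme:

```python
import numpy as np

def curvature(u, eps=1e-8):
    """Curvature image kappa = div(grad u / |grad u|), via np.gradient
    central differences."""
    uy, ux = np.gradient(u)                 # gradients along axis 0, axis 1
    mag = np.sqrt(ux ** 2 + uy ** 2) + eps
    nx, ny = ux / mag, uy / mag             # unit gradient field
    dnyy, _ = np.gradient(ny)               # d(ny)/dy
    _, dnxx = np.gradient(nx)               # d(nx)/dx
    return dnxx + dnyy                      # divergence of the unit gradient

def psnr(clean, noisy):
    """PSNR in dB, with the peak taken as the clean signal's range."""
    mse = np.mean((clean - noisy) ** 2)
    peak = np.ptp(clean)
    return 10.0 * np.log10(peak ** 2 / mse)
```

With these two pieces one can reproduce the paper's comparison on any image: add noise, then compare `psnr(u, u_noisy)` against `psnr(curvature(u), curvature(u_noisy))`.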

Bertalmío, Marcelo; Levine, Stacey
http://ip4ec.upf.edu/node/99

"Denoising an image by denoising its curvature image", IMA Preprint (September 2012)

In this article we show that when an image is corrupted by additive noise, its curvature image is less affected by it, i.e. the PSNR of the curvature image is larger. We conjecture that, given a denoising method, we may obtain better results by applying it to the curvature image and then reconstructing a clean image from it, rather than denoising the original image directly. Numerical experiments confirm this for several PDE-based and patch-based denoising algorithms. The improvements in the quality of the results bring us closer to the optimal bounds recently derived by Levin et al. [1, 2].