In vision science, cascades of Linear+Nonlinear transforms are very successful in modeling a number of perceptual experiences [1]. However, the conventional literature usually focuses only on describing the forward input-output transform. In this work we instead present the mathematics of such cascades beyond the forward transform, namely the Jacobian matrices and the inverse. These analytic results are important for three reasons: (a) they are strictly necessary in new experimental methods based on the synthesis of visual stimuli with interesting geometrical properties, (b) they are convenient for learning the model from classical experiments or from alternative goal optimization, and (c) they are a promising model-based alternative to blind machine-learning methods for neural decoding. Moreover, the statistical properties of the neural model become more intuitive with this kind of vector formulation. The theory is checked by building and testing a vision model that actually follows the modular program suggested in [1]. Our differentiable and invertible model consists of a cascade of modules that account for brightness, contrast, energy masking, and wavelet masking. To stress the generality of this modular setting, we show examples where some of the canonical Divisive Normalization modules are substituted by equivalent modules such as the Wilson-Cowan interaction model [2, 3] (at the V1 cortex) or a tone-mapping model [4] (at the retina). In the Discussion we address three illustrative applications. First, we show how the Jacobian (w.r.t. the input) plays a major role in setting the model by enabling novel psychophysics based on the geometry of the neural representation (as in [5]). Second, we show how the Jacobian (w.r.t. the parameters) can be used to find the model that best reproduces the classical psychophysics of image distortion.
In fact, thanks to the presented derivatives, this cascade of isomorphic canonical modules has been psychophysically tuned to work together for the first time. Third, we show how the analytic inverse may improve regression-based visual brain decoding.
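The chain-rule structure described in this abstract can be illustrated with a minimal numerical sketch. This is a hypothetical toy (not the authors' model): each module is `y = f(Wx)` with an invertible pointwise nonlinearity (`tanh` here, standing in for the Divisive Normalization stages of the paper), so the cascade's Jacobian is the product of per-stage Jacobians and the analytic inverse undoes each module in reverse order.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Three well-conditioned linear stages (diagonally dominant, hence invertible).
modules = [rng.normal(size=(d, d)) + 3 * np.eye(d) for _ in range(3)]

f = np.tanh                              # invertible pointwise nonlinearity
fp = lambda u: 1.0 - np.tanh(u) ** 2     # its derivative
finv = np.arctanh                        # its inverse

def forward(x):
    """Cascade of linear+nonlinear modules: x -> f(W3 f(W2 f(W1 x)))."""
    for W in modules:
        x = f(W @ x)
    return x

def jacobian(x):
    """Analytic Jacobian of the cascade via the chain rule."""
    J = np.eye(d)
    for W in modules:
        u = W @ x
        J = np.diag(fp(u)) @ W @ J       # per-stage Jacobian: diag(f'(Wx)) W
        x = f(u)
    return J

def inverse(y):
    """Analytic inverse: undo each module in reverse order."""
    for W in reversed(modules):
        y = np.linalg.solve(W, finv(y))  # x = W^{-1} f^{-1}(y)
    return y

x = 0.1 * rng.normal(size=d)
y = forward(x)
assert np.allclose(inverse(y), x)        # analytic inverse recovers the input

# Validate the analytic Jacobian against central finite differences.
eps = 1e-6
J_num = np.stack([(forward(x + eps * e) - forward(x - eps * e)) / (2 * eps)
                  for e in np.eye(d)], axis=1)
assert np.allclose(jacobian(x), J_num, atol=1e-5)
```

With invertible pointwise nonlinearities and invertible linear stages, both the Jacobian and the inverse of the whole cascade follow mechanically from the per-module expressions, which is the structural point the abstract makes.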

In this paper we consider an image decomposition model that provides a novel framework for image denoising. The model computes the components of the image to be processed in a moving frame that encodes its local geometry (the directions of gradients and level lines). The strategy we develop is then to denoise the components of the image in the moving frame in order to preserve its local geometry, which would be more affected if the image were processed directly. Experiments on an entire image database, tested with several denoising methods, show that this framework can provide better results than denoising the image directly, in terms of both PSNR and SSIM [27] metrics.
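Of the two quality metrics named above, PSNR has a simple closed form (SSIM is more involved; implementations exist in libraries such as scikit-image). A minimal reference implementation, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a constant offset of 10 gray levels gives MSE = 100,
# i.e. 10 log10(255^2 / 100) ≈ 28.13 dB.
ref = np.full((8, 8), 100.0)
noisy = ref + 10.0
print(psnr(ref, noisy))
```

Higher PSNR means the denoised result is closer to the clean reference in a mean-squared sense, which is why it is paired with SSIM (a structural metric) in the evaluation described above.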

%B IEEE Transactions on Image Processing
%G eng
%0 Conference Paper
%B Proceedings of International Conference on Scale Space and Variational Methods in Computer Vision (SSVM)
%D 2015
%T Duality Principle for Image Regularization and Perceptual Color Correction Models
%A Thomas Batard
%A Marcelo Bertalmío
%@ 978-3-319-18461-6
%G eng
%R 10.1007/978-3-319-18461-6_36
%0 Conference Paper
%B International Conference on Image and Signal Processing (ICISP). *Best Paper Award*
%D 2014
%T Denoising an Image by Denoising its Components in a Moving Frame
%A Gabriela Ghimpeteanu
%A Thomas Batard
%A Marcelo Bertalmío
%A Stacey Levine
%X In this paper, we provide a new non-local method for image denoising. The key idea we develop is to denoise the components of the image in a well-chosen moving frame instead of the image itself. We demonstrate the relevance of our approach by showing that the PSNR of a noisy grayscale image is lower than the PSNR of its components. Experiments show that applying the Non-Local Means algorithm of Buades et al. [5] to the components provides better results than applying it directly to the image.
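The moving-frame geometry underlying both denoising abstracts can be sketched numerically. This toy (hypothetical, much simpler than the papers' actual construction) builds, at each pixel, an orthonormal frame from the gradient direction and the level-line direction, and expresses the image derivatives in that frame; it is these frame components, rather than the raw pixels, that the method denoises.

```python
import numpy as np

def local_frame(img, eps=1e-8):
    """Per-pixel orthonormal frame: gradient direction n, level-line direction t."""
    gy, gx = np.gradient(img.astype(float))       # np.gradient returns (d/dy, d/dx)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps       # eps avoids division by zero
    n = np.stack([gx / norm, gy / norm], axis=-1)  # along the gradient
    t = np.stack([-gy / norm, gx / norm], axis=-1)  # along level lines (orthogonal)
    return n, t

def components_in_frame(img):
    """Project the image gradient onto the local frame."""
    n, t = local_frame(img)
    gy, gx = np.gradient(img.astype(float))
    g = np.stack([gx, gy], axis=-1)
    c_n = np.sum(g * n, axis=-1)   # derivative along the gradient (= |grad|)
    c_t = np.sum(g * t, axis=-1)   # derivative along level lines (~ 0 by construction)
    return c_n, c_t

# On a linear ramp, the gradient component is constant and the
# level-line component vanishes, as the geometry predicts.
ramp = np.outer(np.arange(10.0), np.ones(10))
c_n, c_t = components_in_frame(ramp)
```

Because the frame follows the local geometry, denoising `c_n` and `c_t` separately (e.g. with Non-Local Means, as in the experiments above) disturbs edges and level lines less than filtering the pixel values directly.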

%G eng