By capturing a more complete rendition of scene light than standard 2D cameras, light-field technology represents an important step towards closing the gap between live-action cinematography and computer graphics. Light-field cameras accomplish this by simultaneously capturing the same scene under different angular configurations, providing directional information that enables a multitude of post-production effects. Among the practical challenges of capturing multiple images simultaneously, a particularly important one is that the different images do not perfectly match in color, which severely complicates all further processing. In this work we adapt and extend to the light-field scenario a color stabilization method previously proposed for standard multi-camera shoots, and demonstrate experimentally that it improves on state-of-the-art techniques for light-field imaging.

%B ICASSP
%G eng
%0 Journal Article
%J Journal of Mathematical Imaging and Vision
%D 2020
%T Cortical-inspired Wilson-Cowan-type equations for orientation-dependent contrast perception modelling
%A Marcelo Bertalmío
%A Luca Calatroni
%A Valentina Franceschi
%A Benedetta Franceschiello
%A Dario Prandi
%X We consider the evolution model proposed in [9, 6] to describe illusory contrast perception phenomena induced by surrounding orientations. Firstly, we highlight its analogies and differences with the widely used Wilson-Cowan equations [48], mainly in terms of efficient representation properties. Then, in order to explicitly encode local directional information, we exploit the model of the primary visual cortex V1 proposed in [20] and widely applied in recent years to several image processing problems [24, 38, 28]. The resulting model is capable of describing assimilation and contrast visual biases at the same time, the main novelty being its explicit dependence on local image orientation. We report several numerical tests showing the ability of the model to explain, in particular, orientation-dependent phenomena such as grating induction and a modified version of the Poggendorff illusion. For this latter example, we empirically show the existence of a set of threshold parameters separating inpainting-type from perception-type reconstructions, describing long-range connectivity between different hypercolumns in the primary visual cortex.
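For reference, the classical Wilson-Cowan activity equation that the paper compares against can be written, in a standard single-population mean-field form (the notation here is generic, not the paper's):

```latex
\tau \,\frac{\partial a(x,t)}{\partial t}
  \;=\; -\,a(x,t)
  \;+\; \sigma\!\left( h(x,t) \;+\; \int_{\Omega} \omega(x,y)\, a(y,t)\, \mathrm{d}y \right)
```

where $a$ is the neural activity, $\sigma$ a sigmoidal nonlinearity, $\omega$ the connectivity kernel, and $h$ the external (stimulus) input; the orientation-dependent variant described in the abstract lifts the spatial coordinate $x$ to cortical coordinates $(x, \theta)$ that include the local orientation $\theta$.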

%B Journal of Mathematical Imaging and Vision
%G eng
%0 Conference Paper
%B Scale Space and Variational Methods in Computer Vision (SSVM2019)
%D 2019
%T A connection between image processing and artificial neural networks layers through a geometric model of visual perception
%A Thomas Batard
%A Eduard Ramon Maldonado
%A Gabriele Steidl
%A Marcelo Bertalmío
%X In this paper, we establish a connection between image processing, visual perception, and deep learning by introducing a mathematical model inspired by visual perception from which neural network layers and image processing models for color correction can be derived. Our model is inspired by the geometry of visual perception and couples a geometric model for the organization of some neurons in the visual cortex with a geometric model of color perception. More precisely, the model is a combination of a Wilson-Cowan equation describing the activity of neurons responding to edges and textures in the area V1 of the visual cortex and a Retinex model of color vision. For some particular activation functions, this yields a color correction model which simultaneously processes edges/textures, encoded in a Riemannian metric, and the color contrast, encoded in a nonlocal covariant derivative. Then, we show that the proposed model can be interpreted as a residual layer provided that the activation function is nonlinear, and as a convolutional layer for a linear activation function. Finally, we demonstrate the accuracy of the model for deep learning by testing it on the MNIST dataset for digit classification.

%B Scale Space and Variational Methods in Computer Vision (SSVM2019)
%G eng
%0 Conference Paper
%B *Accepted* in Computer Vision and Pattern Recognition (CVPR)
%D 2019
%T Convolutional Neural Networks Deceived by Visual Illusions
%A Alexander Gomez-Villa
%A Adrián Martín
%A Javier Vazquez-Corral
%A Marcelo Bertalmío
%X Visual illusions teach us that what we see is not always what is represented in the physical world. Their special nature makes them a fascinating tool to test and validate any new vision model proposed. In general, current vision models are based on the concatenation of linear convolutions and non-linear operations. In this paper we draw inspiration from the similarity of this structure to the operations present in Convolutional Neural Networks (CNNs). This motivated us to study whether CNNs trained for low-level visual tasks are deceived by visual illusions. In particular, we show that CNNs trained for image denoising, image deblurring, and computational color constancy are able to replicate the human response to visual illusions, and that the extent of this replication varies with the architecture and the spatial pattern size. We believe that this behaviour of CNNs appears as a by-product of the training for the low-level vision tasks of denoising, color constancy, or deblurring. Our work opens a new bridge between human perception and CNNs: in order to obtain CNNs that better replicate human behaviour, we may need to start aiming for them to better replicate visual illusions.

%B *Accepted* in Computer Vision and Pattern Recognition (CVPR)
%G eng
%0 Conference Paper
%B Scale Space and Variational Methods in Computer Vision (SSVM2019)
%D 2019
%T A cortical-inspired model for orientation-dependent contrast perception: a link with Wilson-Cowan equations
%A Marcelo Bertalmío
%A Luca Calatroni
%A Valentina Franceschi
%A Benedetta Franceschiello
%A Dario Prandi
%X We consider a differential model describing neuro-physiological contrast perception phenomena induced by surrounding orientations. The mathematical formulation relies on cortical-inspired modelling, widely used in recent years to describe neuron interactions in the primary visual cortex (V1) and applied to several image processing problems. Our model connects to Wilson-Cowan-type equations and is analogous to the one previously used to describe assimilation and contrast phenomena, the main novelty being its explicit dependence on local image orientation. To confirm the validity of the model, we report some numerical tests showing its ability to explain orientation-dependent phenomena (such as grating induction) and geometric-optical illusions classically explained only by filtering-based techniques.

%B Scale Space and Variational Methods in Computer Vision (SSVM2019)
%G eng
%0 Journal Article
%J *Submitted*
%D 2018
%T Color matching images with unknown non-linear encodings
%A Raquel Gil Rodríguez
%A Javier Vazquez-Corral
%A Marcelo Bertalmío
%X We present a color matching method that, given two different views of the same scene taken by two cameras with unknown settings and unknown internal parameter values, and encoded with unknown non-linear curves, is able to correct the colors of one of the images, making it look as if it had been captured under the other camera's settings. Our method is based on treating the in-camera color processing pipeline as a matrix multiplication followed by a non-linearity. This allows us to model a color stabilization transformation between the two shots by estimating several parameters. The method is fast and the results show no spurious colors. It outperforms the state of the art both visually and according to several metrics, and can handle very challenging real-life examples.

In cinema and TV it is quite common to have to work with footage coming from several cameras that show noticeable color differences among them, even if they are all of the same model. In TV broadcasts, technicians operate camera control units so as to ensure color consistency when cutting from one camera to another. In cinema post-production, colorists need to manually color-match images coming from different sources. Aiming to help perform this task automatically, the Academy Color Encoding System (ACES) introduced a color management framework for working within the same color space while using different cameras and displays; however, the ACES pipeline requires the cameras to have been characterized beforehand, and therefore does not allow working ‘in the wild’, a very common situation. We present a color stabilization method that, given two images of the same scene taken by two cameras with unknown settings and unknown internal parameter values, and encoded with unknown non-linear curves (logarithmic or gamma), is able to correct the colors of one of the images, making it look as if it had been captured with the other camera. Our method is based on treating the in-camera color processing pipeline as a 3x3 matrix multiplication followed by a non-linearity, which allows us to model a color stabilization transformation between two shots as a linear-nonlinear function with several parameters. We find corresponding points between the two images, compute the error (color difference) over them, and determine the transformation parameters that minimize this error, all automatically and without any user input. The method is fast and the results show no spurious colors or spatio-temporal artifacts of any kind. It outperforms the state of the art both visually and according to several metrics, and can handle very challenging real-life examples.
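A minimal numerical sketch of the linear-nonlinear model described above: the color stabilization transformation between two shots as a 3x3 matrix followed by a power-law non-linearity, fitted from pixel correspondences. The function names, the grid search over gamma, and the synthetic data are illustrative assumptions, not the authors' implementation (which also handles logarithmic encodings).

```python
import numpy as np

def fit_linear_nonlinear(src, ref, gammas=np.linspace(0.3, 3.0, 28)):
    """Fit ref ~ (src @ H.T) ** gamma from (N, 3) pixel correspondences.

    Grid-search the gamma; for each candidate, linearize the reference
    and solve for the 3x3 matrix H in the least-squares sense.
    """
    best_err, best_g, best_h = np.inf, None, None
    for g in gammas:
        lin_ref = ref ** (1.0 / g)            # undo the candidate non-linearity
        h_t, *_ = np.linalg.lstsq(src, lin_ref, rcond=None)
        pred = np.clip(src @ h_t, 0.0, None) ** g
        err = np.mean((pred - ref) ** 2)      # color difference over the matches
        if err < best_err:
            best_err, best_g, best_h = err, g, h_t.T
    return best_g, best_h

# Synthetic sanity check: recover known parameters from clean correspondences.
rng = np.random.default_rng(0)
true_h = np.array([[1.10, 0.05, 0.00],
                   [0.02, 0.90, 0.03],
                   [0.00, 0.10, 1.20]])
true_g = 2.2                                  # a gamma lying on the search grid
src = rng.random((500, 3))
ref = np.clip(src @ true_h.T, 0.0, None) ** true_g
g_hat, h_hat = fit_linear_nonlinear(src, ref)
```

In practice the correspondences would come from feature matching between the two shots rather than synthetic data, and the error would be minimized jointly over the matrix and the curve parameters instead of by grid search.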

%B SMPTE Annual Technical Conference & Exhibition
%G eng
%0 Journal Article
%J *Accepted* in Geometry, Imaging and Computing
%D 2016
%T A Class of Nonlocal Variational Problems on a Vector Bundle for Color Image Local Contrast Reduction/Enhancement
%A Thomas Batard
%A Marcelo Bertalmío
%X We extend two existing variational models from the Euclidean space to a vector bundle over a Riemannian manifold. The Euclidean models, dedicated to regularizing or enhancing some color image features, are based on the concept of a nonlocal gradient operator acting on a function of the Euclidean space. We then extend these models by generalizing this operator to a vector bundle over a Riemannian manifold with the help of the parallel transport map associated with some class of covariant derivatives. Through the dual formulations of the proposed models, we obtain the expressions of their solutions, which exhibit the functional spaces that describe the image features. Finally, for a well-chosen covariant derivative and its nonlocal extension, the proposed models perform local contrast modification (reduction or enhancement), and experiments show that they preserve the appearance of the original image better than the Euclidean models do while modifying its contrast to the same extent.

%B *Accepted* in Geometry, Imaging and Computing
%G eng
%0 Conference Paper
%B IS&T Electronic Imaging Conference
%D 2016
%T Connections between Retinex, neural models and variational methods
%A Marcelo Bertalmío
%X This paper explores the interrelations between Retinex, neural models and variational methods by overviewing relevant related work of the past few years. Taking all the essential elements of the Retinex theory as postulated by Land and McCann (channel independence, the ratio reset mechanism, local averages, non-linear correction), it has been shown that we can obtain a Retinex algorithm implementation that is intrinsically 2D, whose results comply with all the expected properties of the original, one-dimensional path-based Retinex algorithm (such as approximating color constancy while being unable to deal with overexposed images) but do not suffer from common shortcomings such as sensitivity to noise, the appearance of halos, etc. An iterative application of this 2D Retinex algorithm takes the form of a partial differential equation (PDE) that is proven not to minimize any energy functional, a fact linked to its limitations regarding over-exposed pictures. A modification of the iterative method was therefore proposed so that it can handle both under- and over-exposed images; the resulting PDE has a number of very relevant properties which allow connecting Retinex with variational models, histogram equalization and efficient coding, perceptual color correction algorithms, and computational neuroscience models of cortical activity and of the retina.

%B IS&T Electronic Imaging Conference
%G eng
%0 Chart or Table
%D 2016
%T Correcting for Induction Phenomena on Displays of Different Size
%A Marcelo Bertalmío
%A Thomas Batard
%A Jihyun Kim
%B Vision Sciences Society Annual Meeting
%G eng
%U http://f1000research.com/posters/5-1215
%0 Journal Article
%J IEEE Transactions on Image Processing
%D 2014
%T Color Stabilization Along Time and Across Shots of the Same Scene, for One or Several Cameras of Unknown Specifications
%A Javier Vazquez-Corral
%A Marcelo Bertalmío
%X We propose a method for color stabilization of shots of the same scene, taken under the same illumination, where one image is chosen as reference and one or several other images are modified so that their colors match those of the reference. We make use of two crucial but often overlooked observations: firstly, that the core of the color correction chain in a digital camera is simply a multiplication by a 3x3 matrix; secondly, that to color-match a source image to a reference image we do not need to compute their two color correction matrices; it is enough to compute the operation that transforms one matrix into the other. This operation is a 3x3 matrix as well, which we call H. Once we have H, we simply multiply each pixel value of the source by it and obtain an image which matches the reference in color. To compute H we only require a set of pixel correspondences; we need no information about the cameras used, neither models, specifications, nor parameter values. We propose an implementation of our framework which is very simple and fast, and show how it can be successfully employed in a number of situations, comparing favourably with the state of the art. There is a wide range of applications of our technique, both for amateur and professional photography and video: color matching for multi-camera TV broadcasts, color matching for 3D cinema, color stabilization for amateur video, etc.
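A minimal sketch of the core computation described above: estimating the 3x3 matrix H from pixel correspondences alone, with no camera information, then applying it per pixel. The function names and the synthetic check are illustrative; the paper's actual implementation details differ.

```python
import numpy as np

def estimate_h(src, ref):
    """Least-squares estimate of the 3x3 matrix H with ref ~ src @ H.T,
    given (N, 3) arrays of corresponding RGB values from the two shots."""
    h_t, *_ = np.linalg.lstsq(src, ref, rcond=None)
    return h_t.T

def color_match(image, h):
    """Multiply every pixel of the source (..., 3) array by H so that
    its colors match those of the reference shot."""
    return image @ h.T

# Synthetic check: a known H is recovered from clean correspondences.
rng = np.random.default_rng(1)
true_h = np.array([[0.95, 0.08, 0.00],
                   [0.03, 1.05, 0.02],
                   [0.01, 0.04, 0.98]])
src = rng.random((200, 3))
ref = src @ true_h.T
h = estimate_h(src, ref)
matched = color_match(src, h)
```

With noisy real correspondences the least-squares fit would typically be combined with outlier rejection (e.g. a robust estimator) before applying H to the full frame.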

%B IEEE Transactions on Image Processing
%G eng
%R 10.1109/TIP.2014.2344312
%0 Conference Paper
%B Color and Imaging Conference
%D 2014
%T Considering saliency in a perception inspired gamut reduction algorithm
%A Javier Vazquez-Corral
%A Syed Waqas Zamir
%A Marcelo Bertalmío
%X Gamut reduction transforms the colors of an input image to fit within the range of a target device. A good gamut reduction algorithm will preserve the experience felt by the viewer of the original image. Saliency algorithms predict the image regions on which an observer first focuses. There is therefore a connection between the two concepts, since modifying the saliency of the image will modify the viewer's experience. However, very little attention has been given to relating saliency and gamut mapping. In this paper we propose to modify a recent gamut reduction algorithm proposed by Zamir et al. [32] in order to better respect the saliency of the original image in the reproduced one. Our results show that the proposed approach produces a gamut-mapped image whose saliency map is closer to that of the original image, with a minor loss in the accuracy of perceptual reproduction.

%B Color and Imaging Conference
%G eng
%U http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00006
%0 Journal Article
%J SIAM Journal on Imaging Sciences (SIIMS)
%D 2014
%T On Covariant Derivatives and Their Applications to Image Regularization
%A Thomas Batard
%A Marcelo Bertalmío
%X We present a generalization of the Euclidean and Riemannian gradient operators to a vector bundle, a geometric structure generalizing the concept of a manifold. One of the key ideas is to replace the standard differentiation of a function by the covariant differentiation of a section. Dealing with covariant derivatives satisfying the property of compatibility with vector bundle metrics, we construct generalizations of existing mathematical models for image regularization that involve the Euclidean gradient operator, namely the linear scale-space and the Rudin-Osher-Fatemi denoising model. For well-chosen covariant derivatives, we show that our denoising model outperforms state-of-the-art variational denoising methods of the same type, both in terms of PSNR and the Q-index [45].
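Schematically, the covariant generalization of the Rudin-Osher-Fatemi model described in this abstract replaces the Euclidean gradient by a covariant derivative $\nabla^E$ acting on sections of a vector bundle $E$ over the base manifold $M$ (the notation below is illustrative, following the abstract rather than the paper's exact formulation):

```latex
\varphi^\ast \;=\; \underset{\varphi \in \Gamma(E)}{\arg\min}\;
  \int_M \lVert \nabla^E \varphi \rVert \, dM
  \;+\; \frac{\lambda}{2} \int_M \lVert \varphi - \varphi_0 \rVert^2 \, dM
```

where $\varphi_0$ is the section encoding the noisy image and $\lambda$ the fidelity weight; the classical Euclidean ROF model is recovered for a trivial bundle equipped with the flat connection.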

%B SIAM Journal on Imaging Sciences (SIIMS)
%G eng
%R 10.1137/140954039
%0 Journal Article
%J SIAM Journal on Imaging Sciences (SIIMS)
%D 2013
%T A Contrario Selection of Optimal Partitions for Image Segmentation
%A Juan Cardelino
%A Vicent Caselles
%A Marcelo Bertalmío
%A Gregory Randall
%X We present a novel segmentation algorithm based on a hierarchical representation of images. The main contribution of this work is to explore the capabilities of a contrario reasoning when applied to the segmentation problem, and to overcome the limitations of current algorithms within that framework. This exploratory approach has three main goals.

Our first goal is to extend the search space of greedy merging algorithms to the set of all partitions spanned by a certain hierarchy, and to cast segmentation as a selection problem within this space. In this way we increase the number of tested partitions, thus potentially improving the segmentation results. In addition, this space is considerably smaller than the space of all possible partitions, so the complexity remains controlled.

Our second goal is to improve the locality of region merging algorithms, which usually merge pairs of neighboring regions. In this work, we overcome this limitation by introducing a validation procedure for complete partitions, rather than for pairs of regions. The third goal is to carry out an exhaustive experimental evaluation in order to provide reproducible results.

Finally, we embed the selection process in a statistical a contrario framework which allows us to have only one free parameter, related to the desired scale.
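The a contrario validation referred to above is conventionally formalized through the Number of False Alarms (NFA) of Desolneux, Moisan and Morel; a candidate (here, a partition) is declared meaningful when (generic form, not the paper's exact statistic):

```latex
\mathrm{NFA}(e) \;=\; N_{\text{tests}} \cdot \mathbb{P}_{H_0}\!\left[\, E \ge e \,\right] \;\le\; \varepsilon
```

where $H_0$ is the background (noise) model, $E$ the measured event statistic with observed value $e$, $N_{\text{tests}}$ the number of tested configurations, and $\varepsilon$ the allowed expected number of false detections, which plays the role of the single free parameter.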