By capturing a more complete rendition of scene light than standard 2D cameras, light-field technology represents an important step towards closing the gap between live action cinematography and computer graphics. Light-field cameras accomplish this by simultaneously capturing the same scene under different angular configurations, providing directional information that allows for a multitude of post-production effects. Among the practical challenges of capturing multiple images simultaneously, a particularly important one is that the different images do not perfectly match in color, which severely complicates all further processing. In this work we adapt and extend to the light-field scenario a color stabilization method previously proposed for standard multi-camera shoots, and demonstrate experimentally that it improves over state-of-the-art techniques for light-field imaging.

Authors: Thanh, Olivier Vu; Canham, Trevor; Vazquez-Corral, Javier; Gil Rodríguez, Raquel; Bertalmío, Marcelo
URL: http://ip4ec.upf.edu/colorStabilizationMultiCamera

Cortical-inspired Wilson-Cowan-type equations for orientation-dependent contrast perception modelling (2020)

We consider the evolution model proposed in [9, 6] to describe illusory contrast perception phenomena induced by surrounding orientations. Firstly, we highlight its analogies and differences with the widely used Wilson-Cowan equations [48], mainly in terms of efficient representation properties. Then, in order to explicitly encode local directional information, we exploit the model of the primary visual cortex (V1) proposed in [20] and widely used in recent years for several image processing problems [24, 38, 28]. The resulting model is capable of describing assimilation and contrast visual biases at the same time, the main novelty being its explicit dependence on local image orientation. We report several numerical tests showing the ability of the model to explain, in particular, orientation-dependent phenomena such as grating induction and a modified version of the Poggendorff illusion. For this latter example, we empirically show the existence of a set of threshold parameters separating inpainting-type from perception-type reconstructions, describing long-range connectivity between different hypercolumns in the primary visual cortex.
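As a rough illustration of the Wilson-Cowan-type dynamics discussed in this abstract, the sketch below integrates a generic one-dimensional equation of this family with an explicit Euler step. The Gaussian kernel, sigmoid choice, and all parameter values are illustrative assumptions, not the paper's cortical-inspired, orientation-dependent model.

```python
import numpy as np

def wilson_cowan_step(a, stimulus, kernel, dt=0.1, sigma=np.tanh):
    """One explicit Euler step of a generic Wilson-Cowan-type equation:
    da/dt = -a + sigma(kernel * a + stimulus), with * denoting convolution."""
    interaction = np.convolve(a, kernel, mode="same")
    return a + dt * (-a + sigma(interaction + stimulus))

# Toy 1D example: a localized stimulus relaxed under lateral excitation.
a = np.zeros(64)
stimulus = np.zeros(64)
stimulus[28:36] = 1.0
kernel = np.exp(-np.linspace(-3, 3, 15) ** 2)  # excitatory Gaussian kernel
kernel /= kernel.sum()                          # normalize to unit mass
for _ in range(200):
    a = wilson_cowan_step(a, stimulus, kernel)
```

With a sigmoidal non-linearity the activity stays bounded, and it concentrates around the stimulated region, which is the qualitative behaviour these equations are used to capture.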

Authors: Bertalmío, Marcelo; Calatroni, Luca; Franceschi, Valentina; Franceschiello, Benedetta; Prandi, Dario
URL: http://ip4ec.upf.edu/wcContrastPerception

A connection between image processing and artificial neural networks layers through a geometric model of visual perception (2019)

In this paper, we establish a connection between image processing, visual perception, and deep learning by introducing a mathematical model inspired by visual perception from which neural network layers and image processing models for color correction can be derived. Our model is inspired by the geometry of visual perception and couples a geometric model for the organization of some neurons in the visual cortex with a geometric model of color perception. More precisely, the model is a combination of a Wilson-Cowan equation describing the activity of neurons responding to edges and textures in the area V1 of the visual cortex and a Retinex model of color vision. For some particular activation functions, this yields a color correction model which processes simultaneously edges/textures, encoded into a Riemannian metric, and the color contrast, encoded into a nonlocal covariant derivative. Then, we show that the proposed model can be assimilated to a residual layer provided that the activation function is nonlinear, and to a convolutional layer for a linear activation function. Finally, we show the accuracy of the model for deep learning by testing it on the MNIST dataset for digit classification.

Authors: Batard, Thomas; Maldonado, Eduard Ramon; Steidl, Gabriele; Bertalmío, Marcelo
URL: http://ip4ec.upf.edu/geometricModelPerception

Convolutional Neural Networks Deceived by Visual Illusions (2019)

Visual illusions teach us that what we see is not always what is represented in the physical world. Their special nature makes them a fascinating tool for testing and validating any new vision model proposed. In general, current vision models are based on the concatenation of linear convolutions and non-linear operations. In this paper we draw inspiration from the similarity of this structure to the operations present in Convolutional Neural Networks (CNNs). This motivated us to study whether CNNs trained for low-level visual tasks are deceived by visual illusions. In particular, we show that CNNs trained for image denoising, image deblurring, and computational color constancy are able to replicate the human response to visual illusions, and that the extent of this replication varies with respect to variation in architecture and spatial pattern size. We believe that this CNN behaviour appears as a by-product of the training for the low-level vision tasks of denoising, color constancy, and deblurring. Our work opens a new bridge between human perception and CNNs: in order to obtain CNNs that better replicate human behaviour, we may need to start aiming for them to better replicate visual illusions.

Authors: Gomez-Villa, Alexander; Martín, Adrián; Vazquez-Corral, Javier; Bertalmío, Marcelo
URL: http://ip4ec.upf.edu/CNNillusions

A cortical-inspired model for orientation-dependent contrast perception: a link with Wilson-Cowan equations (2019)

We consider a differential model describing neuro-physiological contrast perception phenomena induced by surrounding orientations. The mathematical formulation relies on cortical-inspired modelling widely used in recent years to describe neuron interactions in the primary visual cortex (V1) and applied to several image processing problems. Our model connects to Wilson-Cowan-type equations and is analogous to the one previously used to describe assimilation and contrast phenomena, the main novelty being its explicit dependence on local image orientation. To confirm the validity of the model, we report some numerical tests showing its ability to explain orientation-dependent phenomena (such as grating induction) and geometric-optical illusions classically explained only by filtering-based techniques.

Authors: Bertalmío, Marcelo; Calatroni, Luca; Franceschi, Valentina; Franceschiello, Benedetta; Prandi, Dario
URL: http://ip4ec.upf.edu/ContrastPerceptionWilsonCowan

Color matching images with unknown non-linear encodings (2018)

We present a color matching method that, given two different views of the same scene taken by two cameras with unknown settings and unknown internal parameter values, and encoded with unknown non-linear curves, is able to correct the colors of one of the images, making it look as if it had been captured under the other camera's settings. Our method is based on treating the in-camera color processing pipeline as a matrix multiplication followed by a non-linearity. This allows us to model a color stabilization transformation between the two shots by estimating several parameters. The method is fast and the results have no spurious colors. It outperforms the state of the art both visually and according to several metrics, and can handle very challenging real-life examples.

In cinema and TV it is quite usual to have to work with footage coming from several cameras, which show noticeable color differences among them even when they are all the same model. In TV broadcasts, technicians work in camera control units to ensure color consistency when cutting from one camera to another. In cinema post-production, colorists need to manually color-match images coming from different sources. Aiming to help perform this task automatically, the Academy Color Encoding System (ACES) introduced a color management framework to work within the same color space and be able to use different cameras and displays; however, the ACES pipeline requires the cameras to have been characterized beforehand, and therefore does not allow working 'in the wild', a very common situation. We present a color stabilization method that, given two images of the same scene taken by two cameras with unknown settings and unknown internal parameter values, and encoded with unknown non-linear curves (logarithmic or gamma), is able to correct the colors of one of the images, making it look as if it had been captured with the other camera. Our method is based on treating the in-camera color processing pipeline as a combination of a 3x3 matrix followed by a non-linearity, which allows us to model a color stabilization transformation between two shots as a linear-nonlinear function with several parameters. We find corresponding points between the two images, compute the error (color difference) over them, and determine the transformation parameters that minimize this error, all automatically and without any user input. The method is fast and the results have no spurious colors or spatio-temporal artifacts of any kind. It outperforms the state of the art both visually and according to several metrics, and can handle very challenging real-life examples.
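The linear-nonlinear pipeline model described in this abstract can be sketched as follows. The fixed gamma exponents and the plain least-squares fit over correspondences are simplifying assumptions for illustration; the actual method also estimates the non-linearities rather than assuming them known.

```python
import numpy as np

def fit_color_matrix(src, ref, gamma_src=2.2, gamma_ref=2.2):
    """Fit the 3x3 matrix of a linear-nonlinear color model.

    src, ref: (N, 3) arrays of corresponding non-linear RGB values.
    The encodings are assumed to be pure gamma curves with known
    exponents (an illustrative simplification)."""
    s = np.power(src, gamma_src)   # undo the assumed source encoding
    r = np.power(ref, gamma_ref)   # undo the assumed reference encoding
    M, *_ = np.linalg.lstsq(s, r, rcond=None)  # solve s @ M ~= r
    return M

def match_colors(img, M, gamma_src=2.2, gamma_ref=2.2):
    """Map (N, 3) source pixels into the reference camera's colors."""
    lin = np.power(img, gamma_src) @ M
    return np.power(np.clip(lin, 0.0, None), 1.0 / gamma_ref)
```

On synthetic data generated by such a linear-nonlinear pipeline, the fit recovers the matrix exactly; in practice the correspondences are noisy and the minimization runs over the encoding parameters as well.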

Authors: Gil Rodríguez, Raquel; Vazquez-Corral, Javier; Bertalmío, Marcelo
URL: http://ip4ec.upf.edu/ColorConsistency

A Class of Nonlocal Variational Problems on a Vector Bundle for Color Image Local Contrast Reduction/Enhancement (2016)

We extend two existing variational models from the Euclidean space to a vector bundle over a Riemannian manifold. The Euclidean models, dedicated to regularizing or enhancing some color image features, are based on the concept of a nonlocal gradient operator acting on a function of the Euclidean space. We extend these models by generalizing this operator to a vector bundle over a Riemannian manifold with the help of the parallel transport map associated to some class of covariant derivatives. Through the dual formulations of the proposed models, we obtain the expressions of their solutions, which exhibit the functional spaces that describe the image features. Finally, for a well-chosen covariant derivative and its nonlocal extension, the proposed models perform local contrast modification (reduction or enhancement), and experiments show that they preserve the appearance of the original image better than the Euclidean models do while modifying its contrast to the same extent.

Authors: Batard, Thomas; Bertalmío, Marcelo
URL: http://ip4ec.upf.edu/VariationalProblemsGIC2016

Connections between Retinex, neural models and variational methods (2016)

This paper explores the interrelations between Retinex, neural models and variational methods by overviewing relevant related work from the past few years. Taking all the essential elements of the Retinex theory as postulated by Land and McCann (channel independence, the ratio-reset mechanism, local averages, non-linear correction), it has been shown that we can obtain a Retinex implementation that is intrinsically 2D, whose results comply with all the expected properties of the original, one-dimensional path-based Retinex algorithm (such as approximating color constancy while being unable to deal with overexposed images) but that does not suffer from common shortcomings such as sensitivity to noise or the appearance of halos. An iterative application of this 2D Retinex algorithm takes the form of a partial differential equation (PDE) which is proven not to minimize any energy functional, a fact linked to its limitations regarding over-exposed pictures. A modification of the iterative method was proposed so that it can handle both under- and over-exposed images; the resulting PDE has a number of very relevant properties which allow Retinex to be connected with variational models, histogram equalization and efficient coding, perceptual color correction algorithms, and computational neuroscience models of cortical activity and retinal models.
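For context, the variational models this abstract connects Retinex to are typically energies combining a dispersion term, a fidelity term, and a contrast term. The functional below is a representative example of that family (in the spirit of the perceptual color correction energies of Bertalmío, Caselles, Provenzi and Rizzi), written as a hedged reconstruction rather than a formula taken from the paper:

```latex
E(I) \;=\; \frac{\alpha}{2}\sum_{x}\Big(I(x)-\tfrac{1}{2}\Big)^{2}
\;+\; \frac{\beta}{2}\sum_{x}\big(I(x)-I_{0}(x)\big)^{2}
\;-\; \frac{\gamma}{2}\sum_{x,y} w(x,y)\,\big|I(x)-I(y)\big|
```

Here $\alpha$ and $\beta$ penalize departure from middle gray and from the original image $I_{0}$, the kernel $w(x,y)$ localizes the interaction, and the negative sign of the $\gamma$ term makes it contrast-enhancing; gradient descent on such an energy yields evolution equations of the PDE type discussed above.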

Authors: Bertalmío, Marcelo
URL: http://ip4ec.upf.edu/RetinexNeuralModelsAndVariationalMethods

Correcting for Induction Phenomena on Displays of Different Size (2016)
Authors: Bertalmío, Marcelo; Batard, Thomas; Kim, Jihyun
URL: http://f1000research.com/posters/5-1215

Color Stabilization Along Time and Across Shots of the Same Scene, for One or Several Cameras of Unknown Specifications (2014)

We propose a method for color stabilization of shots of the same scene, taken under the same illumination, where one image is chosen as reference and one or several other images are modified so that their colors match those of the reference. We make use of two crucial but often overlooked observations: firstly, that the core of the color correction chain in a digital camera is simply a multiplication by a 3x3 matrix; secondly, that to color-match a source image to a reference image we do not need to compute their two color correction matrices: it is enough to compute the operation that transforms one matrix into the other. This operation is a 3x3 matrix as well, which we call H. Once we have H, we just multiply each pixel value of the source by it and obtain an image which matches the reference in color. To compute H we only require a set of pixel correspondences; we do not need any information about the cameras used, neither models nor specifications or parameter values. We propose an implementation of our framework which is very simple and fast, and show how it can be successfully employed in a number of situations, comparing favourably with the state of the art.
There is a wide range of applications of our technique, both for amateur and professional photography and video: color matching for multi-camera TV broadcasts, color matching for 3D cinema, color stabilization for amateur video, etc.
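In its simplest form, the H-matrix computation this abstract describes reduces to a least-squares fit over corresponding pixels. The sketch below assumes the correspondences are already given and that the pixel values are linear (no gamma), which is an illustrative simplification:

```python
import numpy as np

def estimate_H(src_pixels, ref_pixels):
    """Least-squares 3x3 matrix H with src_pixels @ H ~= ref_pixels.

    src_pixels, ref_pixels: (N, 3) arrays of corresponding linear RGB
    values from the source and reference shots."""
    H, *_ = np.linalg.lstsq(src_pixels, ref_pixels, rcond=None)
    return H

def stabilize(src_image, H):
    """Apply H to every pixel of an (h, w, 3) source image."""
    h, w, _ = src_image.shape
    return (src_image.reshape(-1, 3) @ H).reshape(h, w, 3)
```

Because applying H is a single matrix multiplication per pixel, the correction is fast and introduces no spurious colors, which matches the properties claimed in the abstract.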

Authors: Vazquez-Corral, Javier; Bertalmío, Marcelo
URL: http://ip4ec.upf.edu/ColorStabilizationTIP

Considering saliency in a perception inspired gamut reduction algorithm (2014)

Gamut reduction transforms the colors of an input image to fit within the range of a target device. A good gamut reduction algorithm will preserve the experience felt by the viewer of the original image. Saliency algorithms predict the image regions where an observer first focuses. There is therefore a connection between the two concepts, since modifying the saliency of the image will modify the viewer's experience. However, very little attention has been paid to relating saliency and gamut mapping. In this paper we propose to modify a recent gamut reduction algorithm by Zamir et al. [32] in order to better respect the saliency of the original image in the reproduced one. Our results show that the proposed approach produces a gamut-mapped image whose saliency map is closer to that of the original image, with a minor loss in the accuracy of perceptual reproduction.

Authors: Vazquez-Corral, Javier; Zamir, Syed Waqas; Bertalmío, Marcelo
URL: http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00006

On Covariant Derivatives and Their Applications to Image Regularization (2014)

We present a generalization of the Euclidean and Riemannian gradient operators to a vector bundle, a geometric structure generalizing the concept of a manifold. One of the key ideas is to replace the standard differentiation of a function by the covariant differentiation of a section. Dealing with covariant derivatives satisfying the property of compatibility with vector bundle metrics, we construct generalizations of existing mathematical models for image regularization that involve the Euclidean gradient operator, namely the linear scale-space and the Rudin-Osher-Fatemi denoising model. For well-chosen covariant derivatives, we show that our denoising model outperforms state-of-the-art variational denoising methods of the same type both in terms of PSNR and Q-index [45].
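A schematic way to write the generalized denoising model this abstract describes, with $\nabla^{E}$ denoting a covariant-derivative-based gradient on the bundle and $f$ the noisy image (a hedged reconstruction of the form, not the paper's exact formulation):

```latex
\min_{u}\; \int_{\Omega} \big\| \nabla^{E} u \big\| \, dx
\;+\; \frac{\lambda}{2} \int_{\Omega} \| u - f \|^{2} \, dx
```

When the bundle is trivial and $\nabla^{E}$ reduces to the Euclidean gradient, this is the classical Rudin-Osher-Fatemi model; other choices of covariant derivative, compatible with the bundle metric, yield the generalizations the abstract refers to.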

Authors: Batard, Thomas; Bertalmío, Marcelo
URL: http://ip4ec.upf.edu/ImageRegularization

A Contrario Selection of Optimal Partitions for Image Segmentation (2013)

We present a novel segmentation algorithm based on a hierarchical representation of images. The main contribution of this work is to explore the capabilities of a contrario reasoning when applied to the segmentation problem, and to overcome the limitations of current algorithms within that framework. This exploratory approach has three main goals.

Our first goal is to extend the search space of greedy merging algorithms to the set of all partitions spanned by a certain hierarchy, and to cast the segmentation as a selection problem within this space. In this way we increase the number of tested partitions and thus we potentially improve the segmentation results. In addition, this space is considerably smaller than the space of all possible partitions, thus we still keep the complexity controlled.

Our second goal is to improve the locality of region merging algorithms, which usually merge pairs of neighboring regions. In this work, we overcome this limitation by introducing a validation procedure for complete partitions rather than for pairs of regions. Our third goal is to establish an exhaustive experimental evaluation methodology in order to provide reproducible results.

Finally, we embed the selection process in a statistical a contrario framework, which allows us to have only one free parameter, related to the desired scale.
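The validation step in a contrario frameworks is classically expressed through a Number of False Alarms (NFA). The binomial-tail form below is the standard generic formulation of that test (in the style of Desolneux, Moisan and Morel), given as a sketch rather than the paper's exact validation procedure for partitions:

```python
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

def nfa(n_tests, k, n, p):
    """Number of False Alarms: expected count, among n_tests candidates,
    of events at least as extreme as k successes out of n under the
    background (noise) model. An event is declared meaningful when
    its NFA is below 1."""
    return n_tests * binomial_tail(k, n, p)
```

Thresholding the NFA at 1 is what yields a single effective parameter: raising or lowering the threshold trades detection scale against false detections, which is the role of the scale parameter mentioned above.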