In this paper, we establish a connection between image processing, visual perception, and deep learning by introducing a mathematical model inspired by visual perception, from which neural network layers and image processing models for color correction can be derived. Our model is inspired by the geometry of visual perception and couples a geometric model for the organization of some neurons in the visual cortex with a geometric model of color perception. More precisely, the model combines a Wilson-Cowan equation describing the activity of neurons responding to edges and textures in area V1 of the visual cortex with a Retinex model of color vision. For some particular activation functions, this yields a color correction model which simultaneously processes edges/textures, encoded into a Riemannian metric, and color contrast, encoded into a nonlocal covariant derivative. We then show that the proposed model can be assimilated to a residual layer when the activation function is nonlinear, and to a convolutional layer when the activation function is linear. Finally, we demonstrate the accuracy of the model for deep learning by testing it on the MNIST dataset for digit classification.

JF - Scale Space and Variational Methods in Computer Vision (SSVM2019)
ER -
TY - JOUR
T1 - A Class of Nonlocal Variational Problems on a Vector Bundle for Color Image Local Contrast Reduction/Enhancement
JF - *Accepted* in Geometry, Imaging and Computing
Y1 - 2016
A1 - Thomas Batard
A1 - Marcelo Bertalmío
AB - We extend two existing variational models from the Euclidean space to a vector bundle over a Riemannian manifold. The Euclidean models, dedicated to regularizing or enhancing some color image features, are based on the concept of a nonlocal gradient operator acting on a function of the Euclidean space. We extend these models by generalizing this operator to a vector bundle over a Riemannian manifold with the help of the parallel transport map associated to some class of covariant derivatives. Through the dual formulations of the proposed models, we obtain the expressions of their solutions, which exhibit the functional spaces that describe the image features. Finally, for a well-chosen covariant derivative and its nonlocal extension, the proposed models perform local contrast modification (reduction or enhancement), and experiments show that they preserve the appearance of the original image better than the Euclidean models do while modifying its contrast to the same extent.

ER -
TY - Generic
T1 - Correcting for Induction Phenomena on Displays of Different Size
Y1 - 2016
A1 - Marcelo Bertalmío
A1 - Thomas Batard
A1 - Jihyun Kim
JF - Vision Sciences Society Annual Meeting
UR - http://f1000research.com/posters/5-1215
ER -
TY - JOUR
T1 - On Covariant Derivatives and Their Applications to Image Regularization
JF - SIAM Journal on Imaging Sciences (SIIMS)
Y1 - 2014
A1 - Thomas Batard
A1 - Marcelo Bertalmío
AB - We present a generalization of the Euclidean and Riemannian gradient operators to a vector bundle, a geometric structure generalizing the concept of a manifold. One of the key ideas is to replace the standard differentiation of a function by the covariant differentiation of a section. Dealing with covariant derivatives satisfying the property of compatibility with vector bundle metrics, we construct generalizations of existing mathematical models for image regularization that involve the Euclidean gradient operator, namely the linear scale-space and the Rudin-Osher-Fatemi denoising model. For well-chosen covariant derivatives, we show that our denoising model outperforms state-of-the-art variational denoising methods of the same type in terms of both PSNR and the Q-index [45].

ER -