Derivatives and Inverse of Cascaded Linear+Nonlinear Neural Models

Title: Derivatives and Inverse of Cascaded Linear+Nonlinear Neural Models
Publication Type: Journal Article
Year of Publication: 2017
Authors: Martinez-García M, Cyriac P, Batard T, Bertalmío M, Malo J
Journal: PLOS ONE
Abstract
In vision science, cascades of Linear+Nonlinear transforms have been very successful in modeling a number of perceptual experiences [1]. However, the conventional literature usually focuses only on describing the forward input-output transform. In this work we present the mathematics of such cascades beyond the forward transform, namely the Jacobian matrices and the inverse. These analytic results are important for three reasons: (a) they are strictly necessary in new experimental methods based on the synthesis of visual stimuli with interesting geometrical properties, (b) they simplify learning the model from classical experiments or from alternative goal-optimization schemes, and (c) they are a promising model-based alternative to blind machine-learning methods for neural decoding. Moreover, this vector formulation makes the statistical properties of the neural model more intuitive. The theory is checked by building and testing a vision model that actually follows the modular program suggested in [1]. Our differentiable and invertible model consists of a cascade of modules that account for brightness, contrast, energy masking, and wavelet masking. To stress the generality of this modular setting, we show examples where some of the canonical Divisive Normalization modules are substituted by equivalent modules such as the Wilson-Cowan interaction model [2, 3] (at the V1 cortex) or a tone-mapping model [4] (at the retina). In the Discussion we address three illustrative applications. First, we show how the Jacobian (w.r.t. the input) plays a major role in fitting the model by enabling novel psychophysics based on the geometry of the neural representation (as in [5]). Second, we show how the Jacobian (w.r.t. the parameters) can be used to find the model that best reproduces classical psychophysics of image distortion. In fact, thanks to the presented derivatives, this cascade of isomorphic canonical modules has been psychophysically tuned to work together for the first time. Third, we show how the analytic inverse may improve regression-based visual brain decoding.
 
URL: https://arxiv.org/abs/1711.00526
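
As a rough illustration of the machinery the abstract refers to, the sketch below builds a toy two-stage Linear+Nonlinear cascade with a generic divisive-normalization nonlinearity (exponent 1, arbitrary small weights), computes its input Jacobian via the chain rule, and inverts it module by module. All matrices, constants, and function names here are illustrative assumptions, not the parameterization used in the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's actual model):
# a two-stage Linear+Nonlinear cascade with generic divisive normalization,
# its input Jacobian via the chain rule, and its module-wise analytic inverse.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # toy signal dimension

# Linear stages: identity plus small positive couplings (placeholders for,
# e.g., brightness/contrast or wavelet transforms in a real model).
L1 = np.eye(n) + 0.1 * rng.random((n, n))
L2 = np.eye(n) + 0.1 * rng.random((n, n))

# Generic divisive normalization (exponent 1, positive signals assumed):
#   y_i = e_i / (b + sum_j H_ij e_j)
b = 0.5
H = 0.05 * np.ones((n, n))

def dn_forward(e):
    return e / (b + H @ e)

def dn_jacobian(e):
    d = b + H @ e
    # dy_i/de_j = delta_ij / d_i - e_i H_ij / d_i^2
    return np.diag(1.0 / d) - ((e / d**2)[:, None]) * H

def dn_inverse(y):
    # y = e / (b + H e)  =>  (I - diag(y) H) e = b y
    return np.linalg.solve(np.eye(n) - y[:, None] * H, b * y)

def forward(x):
    return dn_forward(L2 @ dn_forward(L1 @ x))

def jacobian(x):
    e1 = L1 @ x
    e2 = L2 @ dn_forward(e1)
    # Chain rule over the cascade: J = J_DN(e2) L2 J_DN(e1) L1
    return dn_jacobian(e2) @ L2 @ dn_jacobian(e1) @ L1

def inverse(r):
    # Undo the cascade module by module: DN^-1, L2^-1, DN^-1, L1^-1
    y1 = np.linalg.solve(L2, dn_inverse(r))
    return np.linalg.solve(L1, dn_inverse(y1))

x = rng.uniform(0.1, 0.5, n)            # positive toy stimulus
r = forward(x)
print("reconstruction error:", np.max(np.abs(inverse(r) - x)))

# Finite-difference check of the analytic Jacobian
eps = 1e-6
J_num = np.column_stack([(forward(x + eps * np.eye(n)[:, k]) - r) / eps
                         for k in range(n)])
print("Jacobian error:", np.max(np.abs(J_num - jacobian(x))))
```

The actual model in the paper includes exponents, signs, and wavelet-domain interactions, so both the Jacobian and the inverse are more involved; this sketch only shows the structural role of the chain rule across modules and of module-wise inversion.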