Gamut mapping transforms the colors of an input image so that they fit within the range of a target device. A large amount of research has been devoted to two subproblems that arise from this general one: gamut reduction and gamut extension. Gamut reduction algorithms convert the input image to a new gamut that fits inside the one of the target device, i.e. the gamuts’ intersection is equal to the target gamut, while gamut extension algorithms convert the input image to a gamut that encloses the original image gamut, i.e. the gamuts’ intersection is equal to the source gamut. In contrast to these two cases, very little attention has been paid to the most general problem, where the intersection of the source and target gamuts is not equal to either of the two. In this paper we address this most general problem of gamut mapping between any two gamuts with any possible intersection. To deal with this problem we unify the gamut extension and gamut reduction algorithms presented by Zamir et al. (Zamir 2014), which are based on the perceptually inspired variational framework of Bertalmío et al. (Bertalmío 2007). This framework comprises three competing terms: an attachment to the original data, a term that preserves the per-channel image mean (i.e. does not modify the white point), and a contrast enhancement term. In particular, we show how, by defining a smooth transition of the contrast enhancement parameter over the chromaticity diagram, we can simultaneously reduce the input gamut in some chromatic areas while extending it in others, without introducing color artifacts or halos.
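The key idea of the smooth transition can be illustrated with a minimal sketch. The paper gives no code; the function below, its logistic blend, and all parameter values (`gamma_reduce`, `gamma_extend`, `width`) are illustrative assumptions, not the authors' actual formulation. It assumes the contrast enhancement coefficient takes one sign where the gamut must be reduced and the opposite sign where it must be extended, and varies smoothly with signed distance from the target-gamut boundary in the chromaticity diagram:

```python
import numpy as np

def smooth_gamma(dist, gamma_reduce=-0.5, gamma_extend=0.5, width=0.05):
    """Smoothly blend between a reduction and an extension contrast
    coefficient as a function of the signed distance `dist` from a
    chromaticity point to the target-gamut boundary.

    Convention assumed here (hypothetical, for illustration only):
      dist > 0 -> chromaticity lies outside the target gamut -> reduce
      dist < 0 -> chromaticity lies inside the target gamut  -> extend
    `width` controls how gradual the transition is; a smooth (rather
    than abrupt) change is what avoids artifacts and halos.
    """
    dist = np.asarray(dist, dtype=float)
    t = 1.0 / (1.0 + np.exp(-dist / width))  # logistic ramp in [0, 1]
    return (1.0 - t) * gamma_extend + t * gamma_reduce
```

Far outside the boundary this returns the reduction coefficient, far inside the extension coefficient, and it passes continuously through zero at the boundary itself, so neighboring chromaticities always receive nearly identical treatment.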

Presented at the AIC Midterm Meeting.