Date
18 May 2016
[Postponed due to industrial action (transport strike)] Seminar by Olivier Strauss, Maître de Conférences at Université Montpellier II and member of LIRMM, Thursday 19 May at 2 pm in room 207. Digital image processing refers to the set of algorithms used to transform, filter, enhance, modify, analyze, distort, fuse, etc., digital images. Most of these algorithms are designed to mimic an underlying physical operation defined in the continuous illumination domain and formerly achieved via optical or electronic filters, or through manipulations such as painting, cutting, moving or pasting image patches. Digital processing also allows more sophisticated transformations (associated with more or less complex algorithms) that would be impossible to achieve by analog means. It can, however, be quite hard to completely transpose an operation from the continuous to the discrete domain.
Such a transposition usually relies on methods that ensure a kind of interplay between the continuous and discrete domains. The interplay between the continuous and the discrete domain usually involves a convolution with a point spread function, when the measurement model is supposed to be linear, while the interplay between the discrete and the continuous domain is ensured by interpolation or, more generally, approximation methods, which also involve a convolution with a reconstruction kernel. A point spread function can model the imperfections and characteristics of the measurement process itself, but also undesirable effects degrading the quality of the image, including blur, motion, defocus and dust (astrophotography, underwater images, ...). Sometimes this modeling is perfectly relevant. Moreover, when the measurement device (the camera) is available, the point spread function can be identified by procedures that involve a dedicated calibration pattern. But most of the time it is unknown and cannot be precisely identified. Moreover, modeling the measurement process by a convolution should often be considered an approximation of a more complex (and not shift-invariant) phenomenon; this is the case for radial distortion or chromatic aberration. On the other hand, the choice of the method for reconstructing a continuous image is usually imposed by computational, noise-reduction or practical considerations. Thus, adapting an operation defined in the continuous domain to the discrete domain usually involves many approximations and arbitrary choices that can have a high impact on the result, and this impact is usually unmeasured. One simple example is the affine transformation of an image: choosing an interpolation kernel (nearest neighbor, bilinear, bicubic or others) leads to different information losses in the transformed image, which make the discrete transformation irreversible while the continuous transformation is reversible. Another example is deblurring an image.
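The irreversibility of the discrete affine transformation can be checked directly: rotating an image by an angle and then by its opposite, each time resampling with bilinear interpolation, does not return the original image, even away from the borders. Below is a minimal sketch in plain NumPy; the `rotate_bilinear` helper is hypothetical, written only for this illustration.

```python
import numpy as np

def rotate_bilinear(img, angle):
    """Rotate a 2-D image about its center using bilinear interpolation.
    Output pixels whose source falls outside the image are set to 0."""
    h, w = img.shape
    c, s = np.cos(angle), np.sin(angle)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find its source coordinate.
    y = c * (ys - cy) + s * (xs - cx) + cy
    x = -s * (ys - cy) + c * (xs - cx) + cx
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    dy, dx = y - y0, x - x0
    valid = (y0 >= 0) & (y0 < h - 1) & (x0 >= 0) & (x0 < w - 1)
    out = np.zeros_like(img, dtype=float)
    yv, xv, dyv, dxv = y0[valid], x0[valid], dy[valid], dx[valid]
    out[valid] = ((1 - dyv) * (1 - dxv) * img[yv, xv]
                  + (1 - dyv) * dxv * img[yv, xv + 1]
                  + dyv * (1 - dxv) * img[yv + 1, xv]
                  + dyv * dxv * img[yv + 1, xv + 1])
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))
back = rotate_bilinear(rotate_bilinear(img, 0.3), -0.3)
inner = (slice(16, 48), slice(16, 48))  # ignore pixels lost at the borders
err = np.abs(back[inner] - img[inner]).max()
print(err)  # noticeably greater than 0: the discrete round trip is not the identity
```

The continuous rotation composed with its inverse is exactly the identity; the residual error here comes entirely from the two interpolation steps, and its size depends on the kernel chosen (nearest neighbor, bilinear, bicubic, ...).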
Such deblurring requires precise knowledge of the blurring kernel. One of the main approaches to this problem is blind deconvolution, a challenging image processing problem since many combinations of blur and image can produce the same observed image. A more rational position is to address myopic deconvolution. Traditional myopic deconvolution assumes the shape of the blurring kernel to be partially known. While this approach is more suitable, it can produce images with artifacts due to the deviation between the "true" kernel and the kernel actually used. In our work, we propose a completely different approach. Instead of proposing THE perfect method for ensuring this continuous-to-discrete and discrete-to-continuous interplay, we propose a model of imprecise knowledge of a kernel function. Our model can be perceived as a "box of kernels", i.e. a convex set of kernels. How can this imprecise knowledge be represented? How can such a box be built? How can a convolution be performed with this model? These are the subjects of this presentation. The talk will be illustrated by three applications:
- image super-resolution, i.e. building a high-resolution image from a set of low-resolution images,
- quantification of noise in emission tomography,
- reversible (in a certain sense) affine transformations.
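The actual construction of the kernel box is the subject of the talk. As a loose, hypothetical illustration of the idea (not the speaker's method), one can convolve a signal with each member of a finite family of plausible kernels and keep pointwise lower/upper bounds, turning a precise output into an interval-valued one:

```python
import numpy as np

def box_convolve(signal, kernels):
    """Convolve a 1-D signal with each kernel in a finite family and
    return pointwise lower/upper bounds of the results -- a crude
    stand-in for convolving with a convex set ("box") of kernels."""
    results = np.stack([np.convolve(signal, k, mode="same") for k in kernels])
    return results.min(axis=0), results.max(axis=0)

# Hypothetical family of normalized blurring kernels of slightly
# different shapes and widths, standing in for imprecise PSF knowledge.
kernels = [
    np.array([1.0, 1.0, 1.0]) / 3.0,
    np.array([1.0, 2.0, 1.0]) / 4.0,
    np.array([0.5, 1.0, 2.0, 1.0, 0.5]) / 5.0,
]
signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
lo, hi = box_convolve(signal, kernels)
# The gap hi - lo reflects how much the output depends on which
# kernel in the box is the "true" one.
```

Where the family members agree, the bounds coincide; where the output is sensitive to the unknown kernel, the interval widens, which is the kind of imprecision the talk proposes to model and propagate.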