Paper title
Divergences. Scale invariant Divergences. Applications to linear inverse problems. N.M.F. Blind deconvolution
Paper author
Paper abstract
This book deals with functions that express the dissimilarity (discrepancy) between two data fields, so-called divergence functions, with a view to applications to linear inverse problems. Most of the divergences found in the literature are used in the field of information theory to quantify the difference between two probability density functions, that is, between positive data whose sums are equal to one. In that context, they take a simplified form that is not suited to the problems considered here, in which the data fields are non-negative but their sums are not necessarily equal to one. We systematically reconsider the classical divergences and give the forms adapted to inverse problems. To this end, we recall the methods that allow such divergences to be built, and we propose some generalizations.

The resolution of an inverse problem systematically implies the minimization of a divergence between the physical measurements and a model depending on the unknown parameters. In the context of image reconstruction, the model is generally linear, and the constraints that must be taken into account are non-negativity and, if necessary, a sum constraint on the unknown parameters. To handle the sum constraint in a simple way, we introduce the class of scale-invariant (or affine-invariant) divergences. Such divergences remain unchanged when the model parameters are multiplied by a constant positive factor. We establish general properties of the invariance factors and give some interesting characteristics of such divergences. An extension of these divergences yields invariance with respect to both arguments of the divergence; this characteristic can be used to introduce smoothness regularization of inverse problems, that is, regularization in the sense of Tikhonov.

In a last step, we develop minimization methods for the divergences subject to non-negativity and sum constraints on the solution components. These methods are founded on the Karush-Kuhn-Tucker conditions that must be fulfilled at the optimum, and the Tikhonov regularization is taken into account in these methods. Chapter 11, together with Appendix 9, deals with the application to NMF, while Chapter 12 is dedicated to the blind deconvolution problem. In these two chapters, the interest of scale-invariant divergences is highlighted.
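To illustrate the scale-invariance construction mentioned in the abstract (a sketch in our own notation, not necessarily the author's): given an ordinary divergence D(p || q), a scale-invariant version can be obtained by minimizing over a positive scale factor applied to the model argument,

    D_inv(p \| q) = \min_{\lambda > 0} D(p \| \lambda q) = D\bigl(p \| \lambda_0(p,q)\, q\bigr),
    \qquad D_inv(p \| c\,q) = D_inv(p \| q) \quad \forall c > 0,

where \lambda_0(p,q) is the optimal invariance factor. For example, for the Kullback-Leibler divergence \mathrm{KL}(p \| \lambda q) = \sum_i \bigl[ p_i \log\tfrac{p_i}{\lambda q_i} - p_i + \lambda q_i \bigr], setting the derivative in \lambda to zero gives \lambda_0 = \sum_i p_i / \sum_i q_i, so the sum of the scaled model automatically matches the sum of the data, which is precisely what makes the sum constraint easy to handle.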
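As a second illustration, here is a minimal numerical sketch (our own code with hypothetical names; the book's actual algorithms cover a whole family of divergences) of the kind of multiplicative, KKT-based update evoked in the abstract, for the Kullback-Leibler divergence between data y and a linear model Hx under a non-negativity constraint:

    import numpy as np

    def kl_multiplicative_update(H, y, n_iter=500, eps=1e-12):
        """Minimize KL(y || Hx) subject to x >= 0 with the classical
        multiplicative (Richardson-Lucy type) update derived from the
        Karush-Kuhn-Tucker conditions. Illustrative sketch only."""
        m, n = H.shape
        x = np.full(n, y.sum() / n)        # strictly positive initial guess
        col_sums = H.sum(axis=0)           # H^T 1: constant part of the KL gradient
        for _ in range(n_iter):
            ratio = y / (H @ x + eps)      # elementwise y_i / (Hx)_i
            x *= (H.T @ ratio) / (col_sums + eps)  # multiplicative KKT update
        return x

    # Hypothetical usage on simulated non-negative data
    rng = np.random.default_rng(0)
    H = rng.random((50, 10))
    x_true = rng.random(10)
    y = H @ x_true
    x_hat = kl_multiplicative_update(H, y)

Multiplicative schemes of this type keep non-negative iterates non-negative by construction, which is why they are natural for the constraints considered in the book; a sum constraint can additionally be enforced, for instance, by renormalizing x at each iteration.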