Title

Hierarchical Vectorization for Portrait Images

Authors

Qian Fu, Linlin Liu, Fei Hou, Ying He

Abstract

Aiming at developing intuitive and easy-to-use portrait editing tools, we propose a novel vectorization method that can automatically convert raster images into a 3-tier hierarchical representation. The base layer consists of a set of sparse diffusion curves (DCs), which characterize salient geometric features and low-frequency colors and provide means for semantic color transfer and facial expression editing. The middle level encodes specular highlights and shadows into large, editable Poisson regions (PRs) and allows the user to directly adjust illumination by tuning the strength and/or changing the shape of the PRs. The top level contains two types of pixel-sized PRs for high-frequency residuals and fine details such as pimples and pigmentation. We also train a deep generative model that can produce high-frequency residuals automatically. Thanks to the meaningful organization of vector primitives, editing portraits becomes easy and intuitive. In particular, our method supports color transfer, facial expression editing, highlight and shadow editing, and automatic retouching. Thanks to the linearity of the Laplace operator, we introduce alpha blending, linear dodge, and linear burn to vector editing and show that they are effective for editing highlights and shadows. To quantitatively evaluate the results, we extend the commonly used FLIP metric (which measures differences between two images) by taking illumination into account. The new metric, called illumination-sensitive FLIP or IS-FLIP, can effectively capture the salient changes in color transfer results, and is more consistent with human perception than FLIP and other quality measures on portrait images. We evaluate our method on the FFHQR dataset and show that it is effective for common portrait editing tasks such as retouching, light editing, color transfer, and expression editing. We will make the code and trained models publicly available.
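The abstract attributes the usefulness of linear dodge (addition) and linear burn (a + b − 1) for vector editing to the linearity of the Laplace operator: blending the source terms of Poisson regions and solving once gives the same image as solving each region and blending the results. The sketch below, which is not the paper's implementation, illustrates this on a 1D Poisson problem; the helper `solve_poisson_1d` is a hypothetical name introduced here for illustration.

```python
import numpy as np

def solve_poisson_1d(f, left=0.0, right=0.0):
    # Hypothetical helper (not from the paper): solve the discrete
    # Poisson equation u'' = f on a 1D grid with Dirichlet boundary
    # values `left` and `right`, using a dense tridiagonal system.
    n = len(f)
    A = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    b = np.asarray(f, dtype=float).copy()
    b[0] -= left
    b[-1] -= right
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=32), rng.normal(size=32)  # two "region" sources

# Linear dodge: solving the summed sources once ...
dodge_once = solve_poisson_1d(f1 + f2)
# ... equals summing the two separately solved images (linearity).
dodge_blend = solve_poisson_1d(f1) + solve_poisson_1d(f2)
assert np.allclose(dodge_once, dodge_blend)

# Linear burn (a + b - 1): the constant -1 has zero Laplacian, so it
# only shifts the boundary values; the source terms still just add.
burn_once = solve_poisson_1d(f1 + f2, left=-1.0, right=-1.0)
burn_blend = solve_poisson_1d(f1) + solve_poisson_1d(f2) - 1.0
assert np.allclose(burn_once, burn_blend)
```

Because both blends commute with the Poisson solve, they can be applied directly to the vector primitives (the PR source terms) rather than to rasterized pixels.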
