Paper Title
Physically Plausible Animation of Human Upper Body from a Single Image
Paper Authors
Paper Abstract
We present a new method for generating controllable, dynamically responsive, and photorealistic human animations. Given an image of a person, our system allows the user to generate physically plausible upper body animation (PUBA) through interactions in image space, such as dragging the person's hand to various locations. We formulate a reinforcement learning problem to train a dynamics model that predicts the person's next 2D state (i.e., keypoints on the image) conditioned on a 3D action (i.e., joint torques), and a policy that outputs optimal actions to control the person to achieve desired goals. The dynamics model leverages the expressiveness of 3D simulation and the visual realism of 2D videos. PUBA generates 2D keypoint sequences that achieve task goals while remaining responsive to forceful perturbations. The keypoint sequences are then translated by a pose-to-image generator to produce the final photorealistic video.
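
To make the formulation concrete, below is a minimal sketch (not the authors' released code) of the two learned components the abstract describes: a dynamics model mapping a 2D keypoint state and a 3D torque action to the next 2D state, and a policy mapping the current state and a user goal (e.g., a dragged hand target) to an action. All network sizes, keypoint counts, and names (NUM_KEYPOINTS, ACTION_DIM, DynamicsModel, Policy) are illustrative assumptions, not details from the paper.

# Hypothetical sketch of PUBA's two learned components; sizes are assumptions.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 15   # assumed number of 2D keypoints on the image
ACTION_DIM = 20      # assumed dimensionality of the 3D joint-torque action

class DynamicsModel(nn.Module):
    """Predicts the next 2D keypoint state s_{t+1} from (s_t, a_t)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_KEYPOINTS * 2 + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, NUM_KEYPOINTS * 2),
        )

    def forward(self, state, action):
        # state:  (batch, NUM_KEYPOINTS * 2) flattened (x, y) keypoints
        # action: (batch, ACTION_DIM) joint torques from the 3D model
        return self.net(torch.cat([state, action], dim=-1))

class Policy(nn.Module):
    """Maps the current 2D state and a goal (e.g., target hand position) to an action."""
    def __init__(self, goal_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_KEYPOINTS * 2 + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM),
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))

# Rollout: the policy drives the dynamics model toward a user-specified goal,
# producing the 2D keypoint sequence a pose-to-image generator would render.
dynamics, policy = DynamicsModel(), Policy()
state = torch.zeros(1, NUM_KEYPOINTS * 2)  # initial keypoints from the input image
goal = torch.tensor([[0.7, 0.3]])          # e.g., dragged hand target in image space
keypoint_sequence = [state]
with torch.no_grad():
    for _ in range(30):
        action = policy(state, goal)       # 3D action (joint torques)
        state = dynamics(state, action)    # next 2D keypoint state
        keypoint_sequence.append(state)

The design point this sketch illustrates is the hybrid state-action space: the state lives in 2D image space (so the output is directly renderable and visually grounded in video), while the action lives in 3D torque space (so the motion inherits the physical expressiveness of simulation).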