Paper Title

Where am I looking at? Joint Location and Orientation Estimation by Cross-View Matching

Authors

Shi, Yujiao, Yu, Xin, Campbell, Dylan, Li, Hongdong

Abstract

Cross-view geo-localization is the problem of estimating the position and orientation (latitude, longitude and azimuth angle) of a camera at ground level given a large-scale database of geo-tagged aerial (e.g., satellite) images. Existing approaches treat the task as a pure location estimation problem by learning discriminative feature descriptors, but neglect orientation alignment. It is well-recognized that knowing the orientation between ground and aerial images can significantly reduce matching ambiguity between these two views, especially when the ground-level images have a limited Field of View (FoV) instead of a full field-of-view panorama. Therefore, we design a Dynamic Similarity Matching network to estimate cross-view orientation alignment during localization. In particular, we address the cross-view domain gap by applying a polar transform to the aerial images to approximately align the images up to an unknown azimuth angle. Then, a two-stream convolutional network is used to learn deep features from the ground and polar-transformed aerial images. Finally, we obtain the orientation by computing the correlation between cross-view features, which also provides a more accurate measure of feature similarity, improving location recall. Experiments on standard datasets demonstrate that our method significantly improves state-of-the-art performance. Remarkably, we improve the top-1 location recall rate on the CVUSA dataset by a factor of 1.5x for panoramas with known orientation, by a factor of 3.3x for panoramas with unknown orientation, and by a factor of 6x for 180-degree FoV images with unknown orientation.
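The two key steps in the abstract — polar-transforming the aerial image so it roughly matches the panorama layout, and then circularly correlating features along the azimuth axis to recover orientation — can be sketched as follows. This is a minimal NumPy illustration under simplified assumptions (nearest-neighbour sampling, raw feature maps), not the authors' implementation; the function names are invented for this sketch:

```python
import numpy as np

def polar_transform(aerial, out_h, out_w):
    """Resample a square overhead image so columns index azimuth and rows
    index radial distance from the image centre, roughly matching the
    layout of a ground-level panorama (nearest-neighbour sampling)."""
    s = aerial.shape[0]                      # assume a square aerial image
    cx = cy = (s - 1) / 2.0                  # image centre (camera location)
    out = np.zeros((out_h, out_w) + aerial.shape[2:], dtype=aerial.dtype)
    for i in range(out_h):
        # bottom row = image centre, top row = outer edge of the aerial view
        r = (out_h - 1 - i) / max(out_h - 1, 1) * (s / 2.0 - 1)
        for j in range(out_w):
            theta = 2.0 * np.pi * j / out_w  # azimuth angle for this column
            x = int(round(cx + r * np.sin(theta)))
            y = int(round(cy - r * np.cos(theta)))
            out[i, j] = aerial[np.clip(y, 0, s - 1), np.clip(x, 0, s - 1)]
    return out

def estimate_azimuth(ground_feat, aerial_feat):
    """Circular cross-correlation along the horizontal (azimuth) axis,
    computed with the FFT. Returns the column shift with the highest
    correlation and that peak score, which doubles as the similarity
    measure used to rank candidate locations."""
    fg = np.fft.fft(ground_feat, axis=1)
    fa = np.fft.fft(aerial_feat, axis=1)
    corr = np.fft.ifft(fa * np.conj(fg), axis=1).real.sum(axis=0)
    shift = int(np.argmax(corr))
    return shift, float(corr[shift])
```

As a sanity check, correlating a feature map against a horizontally rolled copy of itself recovers the roll amount, which is exactly the unknown azimuth offset the network must resolve for limited-FoV queries.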
