Paper Title
Dual Space Coupling Model Guided Overlap-Free Scatterplot
Paper Authors
Paper Abstract
The overdraw problem of scatterplots seriously interferes with visual tasks. Existing methods, such as data sampling, node dispersion, subspace mapping, and visual abstraction, cannot guarantee the correspondence and consistency between the data points that reflect the intrinsic original data distribution and the corresponding visual units that reveal the presented data distribution, and thus fail to produce an overlap-free scatterplot with an unbiased and lossless data distribution. This paper proposes a dual space coupling model to represent the complex bilateral relationship between data space and visual space theoretically and analytically. Under the guidance of the model, an overlap-free scatterplot method is developed by integrating the following: a geometry-based data transformation algorithm, namely DistributionTranscriptor; an efficient spatial-mutual-exclusion-guided view transformation algorithm, namely PolarPacking; and an overlap-free-oriented visual encoding configuration model with a radius adjustment tool, namely $f_{r_{draw}}$. Our method ensures complete and accurate information transfer between the two spaces, maintaining consistency between the newly created scatterplot and the original data distribution in both global and local features. Quantitative evaluation demonstrates remarkable progress in computational efficiency compared with state-of-the-art methods. Three applications involving pattern enhancement, interaction improvement, and overdraw mitigation of trajectory visualization demonstrate the broad prospects of our method.
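To make the spatial-mutual-exclusion idea behind the view transformation concrete, the sketch below shows a generic iterative overlap-removal routine for equal-radius circular marks: any two marks closer than their combined radii are pushed apart along their connecting line until no overlaps remain. This is only a minimal illustration of mutual exclusion between visual units, not the paper's PolarPacking algorithm; the function name and parameters are hypothetical.

```python
import math

def remove_overlaps(points, r, iters=50):
    """Naive pairwise mutual-exclusion sketch (NOT PolarPacking):
    repeatedly push apart any two circles of radius r whose centers
    are closer than 2*r, until no overlaps remain or iters is hit."""
    pts = [list(p) for p in points]
    for _ in range(iters):
        moved = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx = pts[j][0] - pts[i][0]
                dy = pts[j][1] - pts[i][1]
                d = math.hypot(dx, dy)
                if d < 2 * r:
                    if d == 0:  # coincident centers: pick an arbitrary direction
                        dx, dy, d = 1e-6, 0.0, 1e-6
                    push = (2 * r - d) / 2  # split the needed separation equally
                    ux, uy = dx / d, dy / d
                    pts[i][0] -= ux * push
                    pts[i][1] -= uy * push
                    pts[j][0] += ux * push
                    pts[j][1] += uy * push
                    moved = True
        if not moved:  # converged: every pair is at least 2*r apart
            break
    return pts
```

Such pairwise repulsion preserves each point's local neighborhood but, unlike the distribution-aware approach described in the abstract, offers no guarantee that global density patterns of the original data survive the displacement.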