Title
New bounds on the condition number of the Hessian of the preconditioned variational data assimilation problem
Authors
Abstract
Data assimilation algorithms combine prior and observational information, weighted by their respective uncertainties, to obtain the most likely posterior of a dynamical system. In variational data assimilation the posterior is computed by solving a nonlinear least squares problem. Many numerical weather prediction (NWP) centres use full observation error covariance (OEC) weighting matrices, which can slow convergence of the data assimilation procedure. Previous work revealed the importance of the minimum eigenvalue of the OEC matrix for the conditioning and convergence of the unpreconditioned data assimilation problem. In this paper we examine the use of correlated OEC matrices in the preconditioned data assimilation problem for the first time. We consider the case where there are more state variables than observations, which is typical for applications with sparse measurements, e.g. NWP and remote sensing. We find that, similarly to the unpreconditioned problem, the minimum eigenvalue of the OEC matrix appears in new bounds on the condition number of the Hessian of the preconditioned objective function. Numerical experiments reveal that the condition number of the Hessian is minimised when the background and observation error lengthscales are equal. This contrasts with the unpreconditioned case, where decreasing the observation error lengthscale always improves conditioning. Conjugate gradient experiments show that in this framework the condition number of the Hessian is a good proxy for convergence. Eigenvalue clustering explains cases where convergence is faster than expected.
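To make the objects in the abstract concrete, the Python/NumPy sketch below builds a toy one-dimensional problem with correlated error covariances. It uses the standard variational DA Hessians: B^{-1} + H^T R^{-1} H for the unpreconditioned problem and, after first-level preconditioning by the control variable transform, I + B^{1/2} H^T R^{-1} H B^{1/2}. All specifics here (grid size n = 100, p = 20 observations, SOAR correlation functions with lengthscale 0.1, a selection observation operator H) are illustrative assumptions, not the paper's experimental configuration.

import numpy as np
from scipy.linalg import toeplitz, sqrtm

def soar_corr(size, L):
    # SOAR correlation matrix on a periodic unit-length 1-D grid
    # (an assumed, common choice in the DA literature).
    d = np.arange(size) / size
    d = np.minimum(d, 1.0 - d)            # wrap-around distance on the circle
    return toeplitz((1.0 + d / L) * np.exp(-d / L))

n, p = 100, 20                            # more state variables than observations
B = soar_corr(n, 0.1)                     # background error covariance (unit variances)
R = soar_corr(p, 0.1)                     # observation error covariance (unit variances)

H = np.zeros((p, n))                      # observe every (n // p)-th grid point
H[np.arange(p), np.arange(p) * (n // p)] = 1.0

Bh = np.real(sqrtm(B))                    # symmetric square root of B
A = np.linalg.inv(B) + H.T @ np.linalg.solve(R, H)      # unpreconditioned Hessian
S = np.eye(n) + Bh @ H.T @ np.linalg.solve(R, H @ Bh)   # preconditioned Hessian

print("lambda_min(R):", np.linalg.eigvalsh(R).min())    # enters the paper's bounds
print("cond(unpreconditioned Hessian):", np.linalg.cond(A))
print("cond(preconditioned Hessian):  ", np.linalg.cond(S))

def cg_iterations(M, b, tol=1e-8, maxiter=500):
    # Plain conjugate gradient; returns the iteration count at convergence.
    x = np.zeros_like(b)
    r = b.copy()
    pdir = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for k in range(1, maxiter + 1):
        Mp = M @ pdir
        alpha = rs / (pdir @ Mp)
        x += alpha * pdir
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * bnorm:
            return k
        pdir = r + (rs_new / rs) * pdir
        rs = rs_new
    return maxiter

b = np.random.default_rng(0).standard_normal(n)
print("CG iterations (unpreconditioned):", cg_iterations(A, b))
print("CG iterations (preconditioned): ", cg_iterations(S, b))

Note that the second term of the preconditioned Hessian S has rank at most p, so at least n - p of its eigenvalues equal one and conjugate gradients converges in at most p + 1 iterations in exact arithmetic; this kind of eigenvalue clustering is one reason convergence can be faster than the condition number alone would suggest.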