Paper title
Global convergence of Negative Correlation Extreme Learning Machine
Paper authors
Paper abstract
Ensemble approaches introduced in the Extreme Learning Machine (ELM) literature mainly come from methods that rely on data sampling procedures, under the assumption that the training data are heterogeneous enough to set up diverse base learners. To overcome this assumption, an ELM ensemble method based on the Negative Correlation Learning (NCL) framework, called Negative Correlation Extreme Learning Machine (NCELM), was proposed. This model works in two stages: i) different ELMs are generated as base learners with random weights in the hidden layer, and ii) an NCL penalty term carrying the information of the ensemble prediction is introduced into each ELM minimization problem, updating the base learners; this second stage is iterated until the ensemble converges. Although this NCL ensemble method was validated by an experimental study with multiple benchmark datasets, no information was given about the conditions under which this convergence is guaranteed. This paper mathematically presents sufficient conditions to guarantee the global convergence of NCELM. The update of the ensemble in each iteration is defined as a contraction mapping, and global convergence of the ensemble is proved via the Banach fixed-point theorem.
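As a reading aid, the following is a minimal, hypothetical sketch of the two-stage scheme described above, assuming ridge-regularized ELM base learners and an NCL-style penalty of the form -lam * ||H beta - f_ens||^2; the function names, the hyperparameters (S, h, C, lam), and the closed-form update are illustrative assumptions, not the paper's actual formulation.

import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    # Random-feature hidden layer of an ELM (sigmoid activation).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def ncelm_sketch(X, y, S=5, h=50, C=1.0, lam=0.1, iters=100, tol=1e-8):
    # Stage i): draw random hidden weights for S base ELMs (kept fixed).
    # Stage ii): refit each output-weight vector against an NCL penalty that
    # couples it to the current ensemble prediction; iterate to a fixed point.
    _, d = X.shape
    Ws = [rng.standard_normal((d, h)) for _ in range(S)]
    bs = [rng.standard_normal(h) for _ in range(S)]
    Hs = [elm_hidden(X, W, b) for W, b in zip(Ws, bs)]
    # Plain ridge-ELM solutions serve as the starting point.
    betas = [np.linalg.solve(H.T @ H + C * np.eye(h), H.T @ y) for H in Hs]
    for _ in range(iters):
        f_ens = np.mean([H @ beta for H, beta in zip(Hs, betas)], axis=0)
        new_betas = []
        for H in Hs:
            # Closed-form minimizer of
            #   ||H b - y||^2 + C ||b||^2 - lam * ||H b - f_ens||^2,
            # which is well-posed for 0 <= lam < 1 and C > 0.
            A = (1.0 - lam) * (H.T @ H) + C * np.eye(h)
            new_betas.append(np.linalg.solve(A, H.T @ (y - lam * f_ens)))
        delta = max(np.linalg.norm(nb - b) for nb, b in zip(new_betas, betas))
        betas = new_betas
        if delta < tol:  # the update has (numerically) reached its fixed point
            break
    return Hs, betas

# Usage on synthetic regression data.
X = rng.standard_normal((200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Hs, betas = ncelm_sketch(X, y)
prediction = np.mean([H @ beta for H, beta in zip(Hs, betas)], axis=0)

The convergence claim rests on the Banach fixed-point theorem. Stated with generic notation (the update map \Phi, the complete normed space \mathcal{B} of stacked output weights, and the contraction constant q are notation of this sketch, not symbols from the paper): if \Phi : \mathcal{B} \to \mathcal{B} satisfies

\[
\|\Phi(\beta) - \Phi(\beta')\| \le q\,\|\beta - \beta'\| \qquad \forall\, \beta, \beta' \in \mathcal{B}, \quad q \in [0, 1),
\]

then \Phi has a unique fixed point \beta^{\ast}, and the iteration \beta^{(t+1)} = \Phi(\beta^{(t)}) converges to \beta^{\ast} from any initialization \beta^{(0)}; this is the sense in which the convergence is "global".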