Paper Title


Gradient Boosting Neural Networks: GrowNet

Authors

Sarkhan Badirli, Xuanqing Liu, Zhengming Xing, Avradeep Bhowmik, Khoa Doan, Sathiya S. Keerthi

Abstract


A novel gradient boosting framework is proposed in which shallow neural networks are employed as "weak learners". General loss functions are considered under this unified framework, with specific examples presented for classification, regression, and learning to rank. A fully corrective step is incorporated to remedy the pitfalls of the greedy function approximation used in classic gradient boosting decision trees. The proposed model outperforms state-of-the-art boosting methods on all three tasks across multiple datasets. An ablation study sheds light on the effect of each model component and of the model hyperparameters.
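To make the boosting-with-neural-networks idea concrete, below is a minimal sketch of the core loop: each stage fits a shallow one-hidden-layer network to the pseudo-residuals of the current ensemble (for squared loss, the negative gradient is simply the residual) and adds its prediction with a shrinkage factor. This is an illustration only, not the authors' implementation: GrowNet additionally feeds the penultimate-layer features of earlier learners into later ones and performs the fully corrective step that jointly refines all learners, both of which are omitted here. All class and function names (`ShallowNet`, `grownet_regress`) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class ShallowNet:
    """One-hidden-layer tanh MLP trained by full-batch gradient descent (illustrative weak learner)."""
    def __init__(self, d_in, hidden=16, lr=0.1, steps=200):
        self.W1 = rng.normal(0.0, 0.5, (d_in, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.5, hidden)
        self.b2 = 0.0
        self.lr, self.steps = lr, steps

    def forward(self, X):
        self.H = np.tanh(X @ self.W1 + self.b1)   # hidden activations, cached for backprop
        return self.H @ self.w2 + self.b2

    def fit(self, X, y):
        n = len(X)
        for _ in range(self.steps):
            pred = self.forward(X)
            g = 2.0 * (pred - y) / n               # d(MSE)/d(pred)
            gH = np.outer(g, self.w2) * (1.0 - self.H ** 2)  # backprop through tanh
            self.W1 -= self.lr * (X.T @ gH)
            self.b1 -= self.lr * gH.sum(axis=0)
            self.w2 -= self.lr * (self.H.T @ g)
            self.b2 -= self.lr * g.sum()
        return self

def grownet_regress(X, y, n_learners=10, shrinkage=0.3):
    """Boost shallow nets on residuals; for squared loss the negative gradient is y - pred."""
    learners, pred = [], np.zeros(len(y))
    for _ in range(n_learners):
        net = ShallowNet(X.shape[1]).fit(X, y - pred)  # fit pseudo-residuals
        learners.append(net)
        pred += shrinkage * net.forward(X)             # shrunken additive update
    return learners, pred

# Toy regression: successive learners should drive the training error
# well below the variance of the targets.
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
learners, pred = grownet_regress(X, y)
mse = float(np.mean((y - pred) ** 2))
```

Swapping the squared loss for a logistic or pairwise ranking loss only changes how the pseudo-residuals `g` are computed, which is the sense in which the framework handles general loss functions.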
