Paper Reading [TPAMI-2022] Average Top-k Aggregate Loss for Supervised Learning

Paper Search (studyai.com)

Search for this paper: http://www.studyai.com/search/whole-site/?q=Average+Top-k+Aggregate+Loss+for+Supervised+Learning

Keywords

Aggregates; Training; Training data; Supervised learning; Data models; Loss measurement; Task analysis; Aggregate loss; average top-$k$ loss; supervised learning; learning theory

Machine Learning; Machine Vision

Supervised Learning; Image Classification; SVM

Abstract

In this work, we introduce the average top-$k$ ($\mathrm{AT}_k$) loss, which is the average of the $k$ largest individual losses over a training dataset, as a new aggregate loss for supervised learning.

We show that the $\mathrm{AT}_k$ loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss.

Yet, the $\mathrm{AT}_k$ loss can better adapt to different data distributions because of the extra flexibility provided by the different choices of $k$.
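To make this concrete, here is a minimal NumPy sketch (illustrative code, not from the paper) that computes the $\mathrm{AT}_k$ aggregate from a vector of individual losses; setting $k=1$ recovers the maximum loss and $k=n$ recovers the average loss:

```python
import numpy as np

def atk_loss(individual_losses, k):
    """Average top-k (AT_k) aggregate: mean of the k largest individual losses."""
    losses = np.asarray(individual_losses, dtype=float)
    top_k = np.sort(losses)[::-1][:k]   # the k largest individual losses
    return top_k.mean()

losses = [0.1, 2.0, 0.5, 1.2]
print(atk_loss(losses, k=1))   # 2.0  -> maximum loss
print(atk_loss(losses, k=4))   # 0.95 -> average loss
print(atk_loss(losses, k=2))   # 1.6  -> interpolates via the choice of k
```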

Furthermore, it remains a convex function of the individual losses and can be combined with different types of individual loss without a significant increase in computation.
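The convexity claim can be seen through a standard variational identity for the sum of the $k$ largest entries (the reformulation this line of work builds on; stated here as a general fact):

$$\mathrm{AT}_k(\ell_1,\dots,\ell_n) = \frac{1}{k}\sum_{i=1}^{k}\ell_{[i]} = \min_{\lambda\in\mathbb{R}}\Big\{\lambda + \frac{1}{k}\sum_{i=1}^{n}\big[\ell_i-\lambda\big]_+\Big\},$$

where $\ell_{[i]}$ is the $i$-th largest individual loss and $[a]_+=\max(a,0)$. The expression inside the minimum is jointly convex in $(\ell,\lambda)$, and partial minimization over $\lambda$ preserves convexity in $\ell$; so when each individual loss is convex in the model parameters, one can minimize over the parameters and $\lambda$ jointly with standard convex optimization.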

We then provide interpretations of the $\mathrm{AT}_k$ loss from the perspectives of modifying the individual losses and of robustness to the training data distribution.

We further study the classification calibration of the $\mathrm{AT}_k$ loss and the error bounds of the $\mathrm{AT}_k$-SVM model.

We demonstrate the applicability of minimum average top-$k$ learning for supervised learning problems, including binary/multi-class classification and regression, using experiments on both synthetic and real datasets…
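As a rough illustration of how the $\mathrm{AT}_k$ aggregate plugs into a concrete learner such as the $\mathrm{AT}_k$-SVM, the sketch below trains a linear hinge-loss classifier by minimizing the variational form above jointly over the weights and $\lambda$ with subgradient descent. This is a minimal sketch under those assumptions, not the paper's solver; the function name and hyperparameters are illustrative.

```python
import numpy as np

def train_atk_hinge(X, y, k, lr=0.01, reg=1e-3, epochs=500, seed=0):
    """Linear classifier trained with the AT_k aggregate of hinge losses.

    Minimizes  lam + (1/k) * sum_i [hinge_i - lam]_+ + reg * ||w||^2
    jointly over (w, b, lam) with full-batch subgradient descent.
    Labels y must be in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, lam = rng.normal(scale=0.01, size=d), 0.0, 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        hinge = np.maximum(0.0, 1.0 - margins)        # individual losses
        active = (hinge > lam) & (hinge > 0.0)        # examples with a nonzero subgradient
        coef = np.where(active, -1.0 / k, 0.0)        # d objective / d margin_i
        w -= lr * (X.T @ (coef * y) + 2.0 * reg * w)  # d margin_i / d w = y_i * x_i
        b -= lr * (coef * y).sum()                    # d margin_i / d b = y_i
        lam -= lr * (1.0 - (hinge > lam).sum() / k)   # d objective / d lam
    return w, b, lam

# Toy usage: two Gaussian blobs; k trades off average-case against worst-case fit.
X = np.vstack([np.random.randn(50, 2) + 2.0, np.random.randn(50, 2) - 2.0])
y = np.hstack([np.ones(50), -np.ones(50)])
w, b, lam = train_atk_hinge(X, y, k=10)
```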

Authors

Siwei Lyu, Yanbo Fan, Yiming Ying, Bao-Gang Hu
