Journal article in Advances in Contemporary Statistics and Econometrics: Festschrift in Honor of Christine Thomas-Agnan, ed. Daouia, A. and Ruiz-Gazen, A. Year: 2021

Optimization by gradient boosting

Abstract

Gradient boosting is a state-of-the-art prediction technique that sequentially produces a model in the form of linear combinations of simple predictors---typically decision trees---by solving an infinite-dimensional convex optimization problem. We provide in the present paper a thorough analysis of two widespread versions of gradient boosting, and introduce a general framework for studying these algorithms from the point of view of functional optimization. We prove their convergence as the number of iterations tends to infinity and highlight the importance of having a strongly convex risk functional to minimize. We also present a reasonable statistical context ensuring consistency properties of the boosting predictors as the sample size grows. In our approach, the optimization procedures are run forever (that is, without resorting to an early stopping strategy), and statistical regularization is basically achieved via an appropriate $L^2$ penalization of the loss and strong convexity arguments.
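To make the functional-optimization view of the abstract concrete, here is a minimal sketch of gradient boosting on an $L^2$-penalized risk, run for a fixed number of iterations rather than with early stopping. It assumes squared-error loss and depth-one regression trees (stumps) as the simple predictors; the function names `boost` and `predict` and the hyperparameter values `n_iter`, `step`, and `gamma` are illustrative choices, not taken from the paper, which treats a general class of convex losses.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost(X, y, n_iter=500, step=0.1, gamma=0.01):
    """Functional gradient descent on the penalized risk
    C(F) = mean((y - F(X))^2) + gamma * mean(F(X)^2),
    producing a linear combination of regression stumps.
    Hyperparameter values are illustrative only."""
    F = np.zeros(len(y))   # current predictor evaluated at the training points
    learners = []
    for _ in range(n_iter):
        # Negative functional gradient of C at F (up to a factor of 2):
        # (y - F) comes from the squared loss, -gamma * F from the L2 penalty.
        pseudo_residuals = (y - F) - gamma * F
        h = DecisionTreeRegressor(max_depth=1).fit(X, pseudo_residuals)
        F += step * h.predict(X)
        learners.append(h)
    return learners

def predict(learners, X, step=0.1):
    # The boosted predictor is a linear combination of the weak learners.
    return step * sum(h.predict(X) for h in learners)
```

For instance, `predict(boost(X, y), X)` returns the fitted values on the training sample; the `gamma` term plays the regularizing role that the abstract attributes to the $L^2$ penalization, so the iterations can be run indefinitely without early stopping.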
Main file: biau-cadre.pdf (270.51 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01562618, version 1 (16-07-2017)

Identifiers

Cite

Gérard Biau, Benoît Cadre. Optimization by gradient boosting. Advances in Contemporary Statistics and Econometrics: Festschrift in Honor of Christine Thomas-Agnan, ed. Daouia, A. and Ruiz-Gazen, A., 2021, pp.23-44. ⟨hal-01562618⟩

