<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ren, Yinuo</style></author><author><style face="normal" font="default" size="100%">Li, Feng</style></author><author><style face="normal" font="default" size="100%">Kang, Yanfei</style></author><author><style face="normal" font="default" size="100%">Wang, Jue</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Infinite Forecast Combinations Based on Dirichlet Process</style></title><secondary-title><style face="normal" font="default" size="100%">2023 IEEE International Conference on Data Mining Workshops (ICDMW)</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Data models</style></keyword><keyword><style face="normal" font="default" size="100%">Dirichlet process</style></keyword><keyword><style face="normal" font="default" size="100%">Diversity reception</style></keyword><keyword><style face="normal" font="default" size="100%">Ensemble learning</style></keyword><keyword><style face="normal" font="default" size="100%">Forecast combinations</style></keyword><keyword><style face="normal" font="default" size="100%">Forecasting</style></keyword><keyword><style face="normal" font="default" size="100%">Predictive models</style></keyword><keyword><style face="normal" font="default" size="100%">Stochastic processes</style></keyword><keyword><style face="normal" font="default" size="100%">Time series analysis</style></keyword><keyword><style face="normal" font="default" size="100%">Training</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2023</style></year></dates><urls><web-urls><url><style face="normal" font="default"
size="100%">https://ieeexplore.ieee.org/abstract/document/10411613</style></url></web-urls></urls><pages><style face="normal" font="default" size="100%">579–587</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Forecast combination integrates information from multiple sources by consolidating several forecasts of the target time series. Rather than requiring the selection of a single optimal forecasting model, this paper introduces a deep learning ensemble forecasting model based on the Dirichlet process. First, the learning rate is sampled with three basis distributions as hyperparameters to convert the infinite mixture into a finite one. All checkpoints are then collected to establish a pool of deep learning sub-models, and weight adjustment and diversity strategies are developed during the combination process. The main advantage of this method is its ability to generate the required base learners through a single training process, using a decaying strategy to address the difficulty that the stochastic nature of gradient descent poses for determining the optimal learning rate. To demonstrate the method’s generalizability and competitiveness, this paper conducts an empirical analysis on the weekly dataset from the M4 competition and explores sensitivity to the number of models combined. The results show that the proposed ensemble model offers substantial improvements in prediction accuracy and stability over a single benchmark model.</style></abstract></record></records></xml>