Puwasala Gamakumara, George Athanasopoulos, Rob Hyndman
Linear reconciliation generally takes the form $\tilde{b} = d + G\hat{y}$, where $d$ is an $m$-vector and $G$ is an $m \times n$ matrix mapping the $n$-vector of base forecasts $\hat{y}$ to the bottom level.

The full hierarchy is $\tilde{y} = S\tilde{b}$, where $S$ is an $n \times m$ matrix that encodes the aggregation constraints. For the 7-variable hierarchy used throughout,
$$S = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}.$$
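As a concrete illustration (a sketch, not the authors' code), the snippet below applies $\tilde{y} = S(d + G\hat{y})$ to a vector of incoherent base point forecasts; the particular $d$ and $G$ shown recover bottom-up reconciliation as a special case.

```python
# A minimal sketch: point-forecast reconciliation y_tilde = S(d + G y_hat)
# for the 7-variable hierarchy Tot, A, B, AA, AB, BA, BB.
import numpy as np

S = np.array([
    [1, 1, 1, 1],   # Tot = AA + AB + BA + BB
    [1, 1, 0, 0],   # A   = AA + AB
    [0, 0, 1, 1],   # B   = BA + BB
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
], dtype=float)

# Illustrative choice: d = 0 and G picking out the bottom rows gives bottom-up.
d = np.zeros(4)
G = np.hstack([np.zeros((4, 3)), np.eye(4)])

y_hat = np.array([10.0, 6.0, 5.0, 3.0, 2.0, 2.5, 2.5])  # incoherent base forecasts
y_tilde = S @ (d + G @ y_hat)                            # coherent by construction
print(y_tilde)
```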
A coherent probabilistic forecast on the coherent subspace $\mathfrak{s}$ is a measure $\nu$ satisfying $\nu(s(\mathcal{B})) = \mu(\mathcal{B})\ \forall \mathcal{B} \in \mathcal{F}_{\mathbb{R}^m}$, where $s(\mathcal{B})$ is the image of $\mathcal{B}$ under $s(\cdot)$.

Let $(\mathbb{R}^n, \mathcal{F}_{\mathbb{R}^n}, \hat{\nu})$ be a probability triple corresponding to a base forecast. The reconciled forecast is characterised by $\tilde{\nu}(\mathcal{A}) = \hat{\nu}(\psi^{-1}(\mathcal{A}))\ \forall \mathcal{A} \in \mathcal{F}_{\mathfrak{s}}$, where $\psi^{-1}(\mathcal{A})$ is the pre-image of $\mathcal{A}$ under $\psi(\cdot)$.
If $\hat{y}^{[1]}, \dots, \hat{y}^{[L]}$ is a sample from some base probabilistic forecast, then $\tilde{y}^{[1]}, \dots, \tilde{y}^{[L]}$ is a sample from the reconciled forecast, where $\tilde{y}^{[l]} = \psi(\hat{y}^{[l]})\ \forall l = 1, \dots, L$. Reconciling a sample from the base distribution gives a sample from the reconciled distribution.
Proof in paper.
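A minimal sketch of this result, assuming a hypothetical Gaussian base forecast and the bottom-up $d$ and $G$ from the earlier snippet: each base draw is pushed through $\psi(\hat{y}) = S(d + G\hat{y})$, and the resulting draws are coherent by construction.

```python
# A minimal sketch: reconciling a sample from a hypothetical Gaussian base forecast.
# Each draw is pushed through psi(y_hat) = S(d + G y_hat), so the reconciled draws
# form a sample from the reconciled (coherent) distribution.
import numpy as np

S = np.array([[1, 1, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1],
              [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
d = np.zeros(4)                                       # bottom-up choice, as before
G = np.hstack([np.zeros((4, 3)), np.eye(4)])

rng = np.random.default_rng(0)
L = 1000
mu_hat = np.array([10.0, 6.0, 5.0, 3.0, 2.0, 2.5, 2.5])    # illustrative base mean
Sigma_hat = np.diag([1.0, 0.5, 0.5, 0.3, 0.3, 0.3, 0.3])   # illustrative base covariance

y_hat_draws = rng.multivariate_normal(mu_hat, Sigma_hat, size=L)   # L x 7 base sample
y_tilde_draws = (S @ (d[:, None] + G @ y_hat_draws.T)).T           # L x 7 reconciled sample

# Every reconciled draw satisfies the aggregation constraints, e.g. Tot = A + B.
assert np.allclose(y_tilde_draws[:, 0], y_tilde_draws[:, 1] + y_tilde_draws[:, 2])
```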
A scoring rule $K$ is proper if $\mathbb{E}_p[K(p, \omega)] \le \mathbb{E}_p[K(q, \omega)]$ for all $p \ne q$, where $\omega \sim p$.
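The energy score is one such proper scoring rule for multivariate forecasts; a minimal sample-based estimator is sketched below (assuming the forecast is represented by two independent sets of $Q$ draws, matching the approximation used later).

```python
# A minimal sketch of a sample-based energy-score estimator.  y_draws and
# y_draws_star are two independent Q x n samples from the forecast
# distribution; y_obs is the realised n-vector.
import numpy as np

def energy_score(y_draws, y_draws_star, y_obs):
    term1 = np.mean(np.linalg.norm(y_draws - y_obs, axis=1))
    term2 = 0.5 * np.mean(np.linalg.norm(y_draws - y_draws_star, axis=1))
    return term1 - term2
```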
Use training data $t = 1, \dots, T$ to train models and make base forecasts. Then reconcile.
Optimise
$$\mathcal{E}(\gamma) = \sum_{t=T}^{T+R-1} K\big(\tilde{f}^{\gamma}_{t+h|t},\ y_{t+h}\big),$$
where $\tilde{f}^{\gamma}_{t+h|t}$ is reconciled with respect to $\gamma := (d, \mathrm{vec}(G))$.

Approximating with $Q$ draws from the base forecast,
$$\mathcal{E}(\gamma) \approx \sum_{t=T}^{T+R-1} \left[ \frac{1}{Q} \sum_{q=1}^{Q} \left( \left\| \tilde{y}^{[q]}_{t+h|t} - y_{t+h} \right\| - \frac{1}{2} \left\| \tilde{y}^{[q]}_{t+h|t} - \tilde{y}^{*[q]}_{t+h|t} \right\| \right) \right],$$
where $\tilde{y}^{[q]}_{t+h|t} = S\big(d + G\hat{y}^{[q]}_{t+h|t}\big)$, $\tilde{y}^{*[q]}_{t+h|t} = S\big(d + G\hat{y}^{*[q]}_{t+h|t}\big)$, and $\hat{y}^{[q]}_{t+h|t}, \hat{y}^{*[q]}_{t+h|t} \overset{iid}{\sim} \hat{f}_{t+h|t}$ for $q = 1, \dots, Q$.
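A minimal sketch of this optimisation (assumptions: `base_draws`, `base_draws_star` and `actuals` are hypothetical pre-computed inputs, one entry per forecast origin $t = T, \dots, T+R-1$; it reuses `energy_score` and `S` from the earlier snippets, and scipy's Nelder-Mead stands in for whatever optimiser is actually used in the talk):

```python
# A minimal sketch of score optimisation over gamma = (d, vec(G)).
# base_draws[i] and base_draws_star[i] are hypothetical Q x n arrays of draws
# from the base forecast at origin t = T + i, and actuals[i] is the realised
# n-vector y_{t+h}; energy_score and S come from the earlier snippets.
import numpy as np
from scipy.optimize import minimize

def unpack(gamma, m, n):
    d = gamma[:m]
    G = gamma[m:].reshape(m, n)
    return d, G

def objective(gamma, S, base_draws, base_draws_star, actuals):
    n, m = S.shape
    d, G = unpack(gamma, m, n)
    total = 0.0
    for y_hat, y_hat_star, y_obs in zip(base_draws, base_draws_star, actuals):
        y_til = (S @ (d[:, None] + G @ y_hat.T)).T            # Q x n reconciled draws
        y_til_star = (S @ (d[:, None] + G @ y_hat_star.T)).T
        total += energy_score(y_til, y_til_star, y_obs)
    return total

# Start from bottom-up reconciliation and let a generic optimiser refine gamma:
gamma0 = np.concatenate([np.zeros(4), np.hstack([np.zeros((4, 3)), np.eye(4)]).ravel()])
# result = minimize(objective, gamma0,
#                   args=(S, base_draws, base_draws_star, actuals),
#                   method="Nelder-Mead")
```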
Simulate from a 7-variable hierarchy with bottom-level series generated by ARIMA models.
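A minimal sketch of such a simulation, under simplifying assumptions (AR(1) bottom-level series with correlated Gaussian errors rather than the talk's actual ARIMA specifications); the bottom levels are aggregated through the $S$ matrix from the earlier snippet:

```python
# A minimal sketch of the simulation design under simplifying assumptions:
# AR(1) bottom-level series with contemporaneously correlated Gaussian errors,
# aggregated through S to give the full 7-variable hierarchy.
import numpy as np

rng = np.random.default_rng(1)
T, m = 500, 4
phi = np.array([0.5, -0.3, 0.6, 0.2])                 # illustrative AR(1) coefficients
Sigma_e = 0.5 * np.eye(m) + 0.5 * np.ones((m, m))     # correlated bottom-level errors

b = np.zeros((T, m))
e = rng.multivariate_normal(np.zeros(m), Sigma_e, size=T)
for t in range(1, T):
    b[t] = phi * b[t - 1] + e[t]

y = b @ S.T   # T x 7 hierarchy: columns Tot, A, B, AA, AB, BA, BB
```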
Obtain point forecasts from ARIMA or ETS models. To sample from base probabilistic forecasts, add noise based on the in-sample residuals, collected in the matrix
$$\begin{pmatrix}
e_{Tot,1} & \cdots & e_{Tot,t} & \cdots & e_{Tot,T} \\
e_{A,1} & \cdots & e_{A,t} & \cdots & e_{A,T} \\
e_{B,1} & \cdots & e_{B,t} & \cdots & e_{B,T} \\
e_{AA,1} & \cdots & e_{AA,t} & \cdots & e_{AA,T} \\
e_{AB,1} & \cdots & e_{AB,t} & \cdots & e_{AB,T} \\
e_{BA,1} & \cdots & e_{BA,t} & \cdots & e_{BA,T} \\
e_{BB,1} & \cdots & e_{BB,t} & \cdots & e_{BB,T}
\end{pmatrix}.$$
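One way to generate such noise is a joint bootstrap of the residual matrix; the sketch below resamples whole columns, which (under the assumption that this is the intended scheme) preserves the cross-sectional dependence between series:

```python
# A minimal sketch: joint (column-wise) bootstrap of the residual matrix.
# E is n x T (rows: Tot, A, B, AA, AB, BA, BB); point_fc is the n-vector of
# point forecasts for a given horizon.  Resampling whole columns keeps the
# dependence between series intact.
import numpy as np

def bootstrap_base_sample(point_fc, E, Q, rng):
    cols = rng.integers(0, E.shape[1], size=Q)     # sample time indices with replacement
    return point_fc[None, :] + E[:, cols].T        # Q x n incoherent base draws

rng = np.random.default_rng(2)
E = rng.normal(size=(7, 500))                      # placeholder residuals for illustration
point_fc = np.array([10.0, 6.0, 5.0, 3.0, 2.0, 2.5, 2.5])
base_draws = bootstrap_base_sample(point_fc, E, Q=200, rng=rng)
```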
Use a feed-forward neural network with up to 28 lags of daily data. For probabilistic forecasts, bootstrap the in-sample residuals as above.
Consider day-ahead forecasts.
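A minimal sketch of such a base forecaster (assumptions: a single series, scikit-learn's MLPRegressor with one hidden layer standing in for whatever architecture is actually used; the residuals it produces could feed the bootstrap above):

```python
# A minimal sketch: a feed-forward network on 28 lags of a single daily series,
# giving a day-ahead point forecast plus in-sample residuals for bootstrapping.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lag_matrix(x, p=28):
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])  # lags 1..p
    y = x[p:]
    return X, y

x = np.sin(np.arange(600) * 2 * np.pi / 7) + np.random.default_rng(3).normal(0, 0.3, 600)
X, y = lag_matrix(x, p=28)
nn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

resid = y - nn.predict(X)                              # in-sample residuals
x_next = nn.predict(x[-28:][::-1].reshape(1, -1))      # day-ahead point forecast
```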