Template-Type: ReDIF-Paper 1.0
Author-Name: Rutger Jan Lange
Author-Workplace-Name: Erasmus School of Economics
Title: Bellman filtering for state-space models
Abstract: This article presents a filter for state-space models based on Bellman's dynamic programming principle, applied to the mode estimator. The proposed Bellman filter generalises the Kalman filter, including its extended and iterated versions, while remaining equally inexpensive computationally. Unlike the Kalman filter, the Bellman filter is robust under heavy-tailed observation noise and applicable to a wider range of (nonlinear and non-Gaussian) models, involving e.g. count, intensity, duration, volatility and dependence. The Bellman-filtered states are shown to converge, in quadratic mean, to a small region around the true state. (Hyper)parameters are estimated by numerically maximising a filter-implied log-likelihood decomposition, which is an alternative to the classic prediction-error decomposition for linear Gaussian models. Simulation studies reveal that the Bellman filter performs on par with (or even outperforms) state-of-the-art simulation-based techniques, e.g. particle filters and importance samplers, while requiring a fraction (e.g. 1%) of the computational cost, being straightforward to implement, and offering full scalability to higher-dimensional state spaces.
Creation-Date: 2020-08-27
Revision-Date: 2021-05-19
Series: Tinbergen Institute Discussion Papers
Number: 20-052/III
Classification-JEL: C32, C53, C61
Keywords: dynamic programming, continuous sampling importance resampling, curse of dimensionality, implicit stochastic gradient descent, numerically accelerated importance sampling, Kalman filter, maximum a posteriori (MAP) estimate, particle filter, prediction-error decomposition, posterior mode, stochastic proximal point algorithm, Viterbi algorithm
File-URL: https://papers.tinbergen.nl/20052.pdf
File-Format: application/pdf
File-Size: 1329272 bytes
Handle: RePEc:tin:wpaper:20200052