
Data-Driven Multi-Stage Stochastic Optimization on Time Series

Type: Conference Presentation
Conference Type: AIChE Annual Meeting
Presentation Date: November 9, 2021
Duration: 19 minutes
Skill Level: Intermediate
PDHs: 0.50

Multi-stage stochastic optimization provides a versatile modeling framework for sequential decision-making under uncertainty, with diverse applications in process systems engineering, energy systems, finance, and management science [1-3]. Because even multi-stage stochastic linear programs can be very hard to solve to optimality, most existing solution approaches either resort to tractable sample-based approximations [1,3] or restrict the space of recourse decisions to obtain more tractable approximations [4]. Sampling-based approaches for multi-stage stochastic programs usually assume that the decision-maker can generate so-called conditional samples of the underlying stochastic process as needed [1]. In practice, however, decision-makers often have access only to a finite number of (historical) observations of the stochastic process that drives their stochastic programming model.
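As a point of reference, the conditional expectations that such sampling schemes approximate appear explicitly in the standard dynamic-programming (cost-to-go) form of a T-stage stochastic linear program; the notation in the sketch below is introduced here only to fix ideas and is not taken from the presentation.

```latex
% Dynamic-programming (cost-to-go) form of a T-stage stochastic linear
% program; all notation here is introduced for illustration only.
\[
  Q_t\bigl(x_{t-1}, \xi_{[t]}\bigr)
    \;=\; \min_{x_t \in X_t(x_{t-1},\,\xi_t)} \; c_t(\xi_t)^{\top} x_t
      \;+\; \mathbb{E}\!\left[\, Q_{t+1}\bigl(x_t, \xi_{[t+1]}\bigr) \,\middle|\, \xi_{[t]} \right],
  \qquad Q_{T+1} \equiv 0,
\]
where $\xi_{[t]} = (\xi_1, \dots, \xi_t)$ denotes the history of the stochastic process up to stage $t$.
```

Sampling-based approximations replace each conditional expectation $\mathbb{E}[\,\cdot \mid \xi_{[t]}]$ with an average over conditional samples, which is precisely the step that becomes nontrivial when only a single historical trajectory is available.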

We study data-driven approaches for multi-stage stochastic linear programs assuming only access to a single trajectory of the underlying stochastic process. The goal is to choose a decision that minimizes the expected system cost conditioned on the current state of the stochastic process. Building on residuals-based data-driven approximations for two-stage conditional stochastic programs [5,6], we investigate approaches that integrate a time series prediction model within a sample-based approximation to the multi-stage stochastic program at hand. Our formulations are flexible and accommodate popular parametric and nonparametric time series models, including nonlinear vector autoregressive models, autoregressive models with exogenous variables, and multivariate GARCH models. Unlike previous approaches [7,8], we do not assume access to multiple independent historical sample paths of the underlying stochastic process but only assume access to a single trajectory. We derive conditions on the stochastic process, the time series model, and the stochastic program under which solutions to our data-driven sample-based approximations possess asymptotic and finite-sample guarantees. We validate our theoretical results through numerical experiments on a portfolio optimization model and illustrate the potential benefits of our data-driven formulations over existing approaches even when the time series prediction model is misspecified. We also demonstrate how our data-driven approximation of the portfolio optimization model can be solved efficiently using stochastic dual dynamic programming.
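To make the residuals-based construction concrete, the following minimal Python sketch, assuming a simple AR(1) model and synthetic data (an illustration, not code or notation from the presentation), fits the model to a single observed trajectory, collects its residuals, and resamples them to generate conditional scenario paths of the kind that would feed a sample-based approximation.

```python
import numpy as np

# Illustrative sketch of a residuals-based sample approximation, assuming
# an AR(1) time series model and synthetic data; not the presentation's code.
rng = np.random.default_rng(0)

# Single observed trajectory of the stochastic process (e.g., asset returns).
xi = rng.normal(0.01, 0.05, size=500)

# 1) Fit the AR(1) model xi_t = a + b * xi_{t-1} + eps_t by least squares.
X = np.column_stack([np.ones(len(xi) - 1), xi[:-1]])
a, b = np.linalg.lstsq(X, xi[1:], rcond=None)[0]

# 2) Collect in-sample residuals; they serve as an empirical proxy for the
#    (unknown) error distribution.
residuals = xi[1:] - (a + b * xi[:-1])

# 3) Generate conditional sample paths: start from the current observation
#    and repeatedly add resampled residuals to the model's one-step-ahead
#    prediction.
def sample_paths(xi_now, n_paths=1000, horizon=3):
    paths = np.empty((n_paths, horizon))
    state = np.full(n_paths, xi_now)
    for t in range(horizon):
        eps = rng.choice(residuals, size=n_paths, replace=True)
        state = a + b * state + eps
        paths[:, t] = state
    return paths

scenarios = sample_paths(xi[-1])
# These scenario paths would then drive a sample-based approximation of the
# multi-stage program, e.g., a scenario tree or SDDP forward/backward passes.
```

In an application, the AR(1) model above would be replaced by whichever parametric or nonparametric time series model suits the data (e.g., a nonlinear vector autoregression or a multivariate GARCH model), and the resulting scenario paths would feed a scenario-tree formulation or the forward and backward passes of stochastic dual dynamic programming.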

1. Shapiro, Alexander, Darinka Dentcheva, and Andrzej Ruszczynski. Lectures on stochastic programming: modeling and theory. Society for Industrial and Applied Mathematics, 2014.

2. Grossmann, Ignacio E., Robert M. Apap, Bruno A. Calfa, Pablo Garcia-Herreros, and Qi Zhang. Recent advances in mathematical programming techniques for the optimization of process systems under uncertainty. Computers & Chemical Engineering 91 (2016): 3-14.

3. Pereira, Mario V. F., and Leontina M. V. G. Pinto. Multi-stage stochastic optimization applied to energy planning. Mathematical Programming 52.1 (1991): 359-375.

4. Georghiou, Angelos, Wolfram Wiesemann, and Daniel Kuhn. Generalized decision rule approximations for stochastic programming via liftings. Mathematical Programming 152.1 (2015): 301-338.

5. Kannan, Rohit, Güzin Bayraksan, and James R. Luedtke. Data-driven sample average approximation with covariate information. Available on Optimization Online (July 2020).

6. Kannan, Rohit, Güzin Bayraksan, and James R. Luedtke. Heteroscedasticity-aware residuals-based contextual stochastic optimization. arXiv preprint arXiv:2101.03139 (2021).

7. Bertsimas, Dimitris, and Christopher McCord. From predictions to prescriptions in multistage optimization problems. arXiv preprint arXiv:1904.11637 (2019).

8. Bertsimas, Dimitris, Christopher McCord, and Bradley Sturt. Dynamic optimization with side information. arXiv preprint arXiv:1907.07307 (2019).
