(411g) Integrated Optimization of Design, Storage Sizing, and Maintenance Policy As a Markov Decision Process Considering Varying Failure Rates

Authors 

Ye, Y. - Presenter, The Dow Chemical Company
Grossmann, I., Carnegie Mellon University
Ramaswamy, S., Praxair, Inc.
Pinto, J. M., Linde plc
Process reliability is important to chemical plants, as it directly impacts the availability of the end products, and thus service level and profitability. This work is motivated in particular by the reliability of air separation units (ASUs) that supply gas products to designated customers through pipelines, where the stakes are even higher because an interruption of the pipeline supply stops production at the customer site. Currently, discrete event simulation tools are used to evaluate the reliability/availability of selected redundancy levels by simulating the behavior of every asset in a plant using historical maintenance data and statistical models (Sharda and Bury, 2008). However, unlike an optimization approach, such simulations cannot systematically consider all possible design alternatives, let alone other strategies for increasing availability.

Several optimization studies have addressed the reliability of chemical plants. Pistikopoulos et al. (2001) and Goel et al. (2003b) formulate MILP models for the selection of units with different reliabilities and the corresponding production and maintenance planning for a fixed system configuration. Terrazas-Moreno et al. (2010) formulate an MILP model using Markov chains to optimize the expected stochastic flexibility of an integrated production site through the selection of pre-specified alternative plants and the design of intermediate storage. Kim (2017) presents a reliability model for k-out-of-n systems without repair using a structured continuous-time Markov Chain, which is solved with a parallel genetic algorithm. As an improvement, our recent mixed-integer framework (Ye et al., 2019) models the stochastic failure-repair process of the ASU process superstructure as a continuous-time Markov Chain and simultaneously optimizes the redundancy selection and the maintenance policy.

This work extends our aforementioned recent work (Ye et al., 2019) by incorporating liquid storage as an additional strategy for increasing reliability, alongside redundant equipment and condition-based maintenance. Specifically, three design and maintenance strategies are considered to increase the availability of the system. For design, we can install parallel units for certain processing stages, so that when a primary unit fails, another unit can take its place and reduce system downtime. A second design strategy for reliability is to store liquid products that can be vaporized to meet pipeline demands during plant downtimes. Furthermore, we propose to capture the time-varying equipment failure rate (the bathtub curve) and condition-based maintenance with a discrete-time Markov Decision Process. The bathtub curve is discretized into three "working states" of a unit with different failure rates: 1. Infant, 2. Stable, and 3. Worn-out. When a unit is not working, there are three other possible states: 4. Stand-by, 5. Stopped, and 6. Failed. Exactly one action is assigned to each state; this assignment is the main decision defining the maintenance policy.
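To make this state-action structure concrete, the following Python sketch (not the authors' code; the state names, actions, and all probabilities are hypothetical placeholders) encodes a single unit's discrete-time transition kernel under a bathtub-shaped failure probability, together with a maintenance policy that assigns one action to each state.

```python
# Illustrative sketch only (not the authors' model); all numbers are hypothetical.
STATES = ["infant", "stable", "worn_out", "stand_by", "stopped", "failed"]
ACTIONS = ["operate", "stand_by", "stop", "repair"]

# Bathtub-shaped per-period failure probabilities: elevated when the unit is
# new (infant), low when stable, elevated again when worn out.
FAILURE_PROB = {"infant": 0.05, "stable": 0.01, "worn_out": 0.08}
P_AGE = 0.10  # hypothetical probability of moving to the next working state

def transition(state, action):
    """Return {next_state: probability} for one unit under a given action."""
    if action == "operate" and state in FAILURE_PROB:
        p_fail = FAILURE_PROB[state]
        nxt = {"infant": "stable", "stable": "worn_out", "worn_out": "worn_out"}[state]
        out = {"failed": p_fail}
        out[nxt] = out.get(nxt, 0.0) + (1.0 - p_fail) * P_AGE              # age
        out[state] = out.get(state, 0.0) + (1.0 - p_fail) * (1.0 - P_AGE)  # stay
        return out
    if action == "repair":
        return {"infant": 1.0}       # repair renews the unit to the infant state
    if action == "stand_by":
        return {"stand_by": 1.0}
    return {"stopped": 1.0}

# A maintenance policy assigns exactly one action to every state, e.g.:
policy = {"infant": "operate", "stable": "operate", "worn_out": "repair",
          "stand_by": "stand_by", "stopped": "stop", "failed": "repair"}

for s in STATES:
    print(s, "->", transition(s, policy[s]))
```

In the full model, such per-unit kernels are combined over the selected redundant units and the storage level, and the policy itself becomes a decision variable.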

We embed the optimality conditions of the Markov Decision Process and the stationary probability distribution conditions of the reduced Markov Chain into an MINLP (DMP) model that captures the economic trade-offs among all major decisions. To make the model tractable, we propose a standard linearization of the bilinear terms involving binary and continuous variables, and a reformulation of the objective function that potentially provides a stronger relaxation. An example based on the reliable design of a real-world air separation unit is used to demonstrate how to extract the model parameters from the raw data. We attempted to solve the MINLP (DMP) directly with several global solvers and found that none of them could solve it in a reasonable amount of time. Therefore, we propose a two-phase algorithm consisting of an Enumeration and Bounding phase and a Rewards Iteration phase; the validity of the bounding rests on the aforementioned reformulation of the objective function. Re-solving the example shows that the two-phase algorithm greatly reduces the required computational effort. The algorithm also performs consistently on 20 randomly generated problems based on the original example with 4 processing stages, and a second group of 20 random problems with 6 processing stages also shows good computational results.
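For reference, the standard linearization alluded to above, for a bilinear product z = y·x of a binary variable y and a continuous variable x, is usually written as follows (a minimal sketch assuming known bounds on x; the specific bounds and variable names in the DMP model are not given here):

```latex
% Standard exact linearization of z = y\,x with y \in \{0,1\} and x^{L} \le x \le x^{U}:
\begin{align*}
  x^{L}\, y \;\le\; z \;\le\; x^{U}\, y, \\
  x - x^{U}\,(1-y) \;\le\; z \;\le\; x - x^{L}\,(1-y).
\end{align*}
% If y = 0 these constraints force z = 0; if y = 1 they force z = x.
```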