(496a) Strategic Capacity Decisions in Manufacturing Using Stochastic Dynamic Programming

Authors 

Pratikakis, N. - Presenter, Georgia Institute of Technology
Hegazy, T. A., Georgia Institute of Technology
Realff, M., Georgia Institute of Technology
Lee, J. H., Korea Advanced Institute of Science and Technology (KAIST)


Optimal resource planning and allocation under uncertainty is one of the fundamental challenges in industrial and chemical scheduling applications. In this work, we consider a multistage capacity planning and allocation problem that arises in a wide variety of manufacturing systems. Specifically, the manufacturing process is divided into three interdependent stages, with physical queues representing the inventory. The system is subject to two sources of uncertainty: the demand rate and the failure rate at a given manufacturing stage, the latter causing recirculation of material through the system. We use Markov chains to model the uncertainty associated with the demand rate and the product quality, and we formulate the problem as a Markov decision process (MDP).

We then introduce an adaptive dynamic programming approach that constructs a solution improving upon any heuristic or random initial policy. The main contribution of this work is a generic algorithmic framework for multistage optimization problems under uncertainty. The novelty of the proposed approach lies in combining an adaptive action set with a real-time dynamic programming algorithm. The approach evolves an initial heuristic policy toward an optimal policy by perturbing the actions the heuristic takes in a given state. The Bellman iteration is solved simultaneously with the action selection, which significantly lowers the storage requirement compared to our previous approach (Choi et al. [1-3]). The approach has been demonstrated on a prototypical manufacturing problem, where it improves over an a priori heuristic.

In complex systems, the curse of dimensionality translates into two major computational bottlenecks: the exponential growth of the state space and the large number of actions available at each state. The proposed approach circumvents the first by iterating only over the states the system actually encounters, and alleviates the second by considering an adaptive action set at each iteration. The approach is evaluated through several sets of simulation experiments. Experimental results show that the adaptive dynamic programming approach consistently improves over an existing parameterized policy by 7.5-17.5% in a three-stage manufacturing system with Markovian demand dynamics and an intermediate production step subject to a two-mode failure rate.
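
As a concrete illustration of the uncertainty model, the following is a minimal Python sketch (not part of the original abstract): the demand rate and the two-mode failure rate are each represented as a small Markov chain. All mode labels, rates, and transition probabilities here are invented placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Demand-rate chain: two hypothetical demand modes, "low" and "high".
DEMAND_RATES = np.array([5.0, 12.0])    # units demanded per period in each mode
P_DEMAND = np.array([[0.9, 0.1],        # row i: P(next mode | current mode i)
                     [0.2, 0.8]])

# Two-mode failure chain at the intermediate stage: "normal" vs. "degraded".
# The failure probability is the fraction of output recirculated for rework.
FAIL_PROBS = np.array([0.05, 0.30])
P_FAIL = np.array([[0.95, 0.05],
                   [0.10, 0.90]])

def step_chain(mode, P):
    """Sample the next mode of a Markov chain with transition matrix P."""
    return rng.choice(len(P), p=P[mode])

# Example: simulate five periods of the demand and failure modes.
demand_mode, fail_mode = 0, 0
for t in range(5):
    demand_mode = step_chain(demand_mode, P_DEMAND)
    fail_mode = step_chain(fail_mode, P_FAIL)
    print(t, DEMAND_RATES[demand_mode], FAIL_PROBS[fail_mode])
```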

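The algorithmic idea can likewise be outlined in a few lines. The sketch below is a hedged Python illustration, not the authors' implementation: the hooks initial_state, heuristic, perturb, and sample_next_states are hypothetical problem-specific callbacks. It shows how storing values only for visited states addresses the state-space bottleneck, while restricting each Bellman backup to an adaptive action set (the heuristic's action plus local perturbations) addresses the action-space bottleneck.

```python
import random
from collections import defaultdict

def rtdp_improve(initial_state, heuristic, perturb, sample_next_states,
                 n_episodes=200, horizon=50, gamma=0.95):
    """Real-time DP sketch with an adaptive action set.

    States and actions are assumed hashable, e.g. tuples of
    (queue levels, demand mode, failure mode) and capacity allocations.
    """
    V = defaultdict(float)                 # cost-to-go for visited states only

    for _ in range(n_episodes):
        s = initial_state()
        for _ in range(horizon):
            # Adaptive action set: the heuristic's action plus perturbations.
            base = heuristic(s)
            actions = {base} | set(perturb(s, base))

            # Sampled Bellman backup restricted to the adaptive action set.
            q, successors = {}, {}
            for a in actions:
                succ = sample_next_states(s, a)   # [(prob, cost, next_state)]
                successors[a] = succ
                q[a] = sum(p * (c + gamma * V[sp]) for p, c, sp in succ)

            best = min(q, key=q.get)              # minimize expected cost
            V[s] = q[best]                        # backup fused with selection

            # Advance the simulation along the greedy action.
            probs, _, nxt = zip(*successors[best])
            s = random.choices(nxt, weights=probs)[0]
    return V
```

Because V is a dictionary keyed by visited states, storage grows with the footprint of the simulated trajectories rather than with the full (exponential) state space.
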
References

1. Jaein Choi, Matthew J. Realff, and Jay H. Lee. An algorithmic framework for improving heuristic solutions: Part I. A deterministic discount coupon traveling salesman problem. Computers & Chemical Engineering, Volume 28, Issue 8, 15 July 2004, Pages 1285-1296.

2. Jaein Choi, Jay H. Lee, and Matthew J. Realff. An algorithmic framework for improving heuristic solutions: Part II. A new version of the stochastic traveling salesman problem. Computers & Chemical Engineering, Volume 28, Issue 8, 15 July 2004, Pages 1297-1307.

3. Jaein Choi, Matthew J. Realff, and Jay H. Lee. Dynamic programming in a heuristically confined state space: a stochastic resource-constrained project scheduling application. Computers & Chemical Engineering, Volume 28, Issues 6-7, 15 June 2004, Pages 1039-1058.