(73b) Three Disruptive Technologies Coming Soon to Manufacturing and Process Control
2023 AIChE Spring Meeting and 19th Global Congress on Process Safety
Topical 16: Petrochemicals
Process Control and Optimization Developments II
Tuesday, March 14, 2023 - 10:15am to 10:45am
Economic MPC (EMPC) is a relatively new technology that combines economic optimization with Model Predictive Control (MPC) [3], two functions that are traditionally implemented in separate layers. While the underlying theory was established roughly a decade ago [4], EMPC applications have only begun to appear recently. Professor Jim Rawlings and co-workers, for example, presented a successful EMPC implementation for the Stanford University campus heating and cooling system [5]. Recent theoretical work has shown that scheduling problems, which are usually approached as static optimization, can also be treated as a special case of closed-loop EMPC [6]. This unification of closed-loop scheduling, economic optimization, and dynamic control sheds new light on problems such as rescheduling in the face of disturbances, and gives academics a completely new framework for viewing and analyzing scheduling problems. Practitioners now have, for the first time, the prospect of combining three disparate levels of the traditional control hierarchy into a single, harmonious layer.
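The distinction can be made concrete with a toy example. The sketch below is purely illustrative (the single-state model, prices, and weights are assumptions, not taken from the talk or from [5]): a tracking-MPC objective penalizes deviation from a setpoint computed by a separate economic layer, while the EMPC objective optimizes the economic cost directly over the prediction horizon.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy model: a storage unit with level x, dynamics
# x[k+1] = a*x[k] + b*u[k], where u >= 0 is purchased energy/material.
a, b = 0.9, 1.0
N = 24                                      # prediction horizon
x_init = 2.0                                # current measured state
x_sp = 5.0                                  # target level
price = 1.0 + 0.5 * np.sin(np.arange(N))    # assumed time-varying price

def simulate(u):
    """Roll the linear model forward over the horizon."""
    x = np.empty(N + 1)
    x[0] = x_init
    for k in range(N):
        x[k + 1] = a * x[k] + b * u[k]
    return x

def tracking_cost(u):
    # Classical tracking MPC: penalize deviation from a setpoint that an
    # upstream economic-optimization layer computed separately.
    x = simulate(u)
    return np.sum((x[1:] - x_sp) ** 2) + 0.1 * np.sum(u ** 2)

def economic_cost(u):
    # EMPC: optimize the economic objective (here, purchase cost) directly,
    # with a quadratic terminal penalty standing in for a terminal constraint.
    x = simulate(u)
    return np.dot(price, u) + 100.0 * (x[-1] - x_sp) ** 2

bounds = [(0.0, 2.0)] * N
u_guess = np.full(N, 0.5)
res_track = minimize(tracking_cost, u_guess, method="SLSQP", bounds=bounds)
res_econ = minimize(economic_cost, u_guess, method="SLSQP", bounds=bounds)
# In closed loop, only the first move of each plan would be applied before
# re-solving at the next sample (receding horizon).
```

The economic solution is free to delay purchases toward cheap periods as long as the terminal requirement is met, which is exactly the degree of freedom a fixed-setpoint tracking formulation gives away.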
Reinforcement Learning (RL) is a Machine Learning (ML) technology in which a computer agent learns, through trial and error, the best way to accomplish a particular task [7]. Deep Learning (DL) is a technology in which neural networks with many intermediate layers are used to model relationships [8]. Using DL to parameterize the policy and value function of an RL agent yields Deep Reinforcement Learning (DRL) technology, which allows an agent to achieve superhuman performance on some tasks. In 2017, for example, a DRL agent named AlphaGo soundly defeated the reigning world champion Go player [9]. Applications of this technology to manufacturing and process control systems are currently under study [10]. DRL is unlikely to replace currently successful control algorithms such as PID and MPC; rather, it will take over some of the mundane tasks that humans perform to manage automation and control systems. For example, it appears that a DRL agent can learn to tune PID loops effectively [11]. Other possibilities include advising operators during transient and upset conditions, mitigating disturbances such as weather events, and detecting and mitigating unsafe operations [10].
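The trial-and-error learning loop at the heart of RL can be illustrated with the simplest non-deep case: tabular Q-learning [7]. The sketch below (a hypothetical toy task, not any application from the cited work) has an agent discover, purely from reward feedback, that moving right along a short track reaches the goal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, goal = 6, 5            # states 0..5; reward is earned only at the goal
Q = np.zeros((n_states, 2))      # action-value table: action 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == goal else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

greedy_policy = np.argmax(Q, axis=1)  # after training: move right in every state
```

DRL replaces the table `Q` with a deep neural network so the same idea scales to large state spaces, such as the board of Go [9] or the operating space of a control loop [11].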
The industrial automation marketplace, comprising Distributed Control System (DCS), Programmable Logic Controller (PLC), and Supervisory Control and Data Acquisition (SCADA) technology offerings, will soon experience a historic, game-changing disruption with the emergence of Open Process Automation (OPA) technology. Manufacturers, whose innovations have been constrained for decades by the limitations of closed, proprietary systems, will soon enjoy the benefits of open, interoperable, resilient, secure-by-design automation systems, made possible by the development of the consensus-based Open Process Automation Standard (O-PAS) by the Open Process Automation Forum (OPAF) [12]. Once O-PAS-certified automation systems become widespread, vendors will see the market for their products and services expand significantly as the visions of I4.0 and the IIoT are realized. Academics and technology developers will find more opportunities to test their solutions as deployment becomes easier. Dr. Don Bartusiak, co-director of OPAF, summarizes the Forum's progress to date in a recent paper in Control Engineering Practice [12].
References
[1] Y Liao, F Deschamps, EdFR Loures, LFP Ramos, “Past, present, and future of Industry 4.0 – a systematic literature review and research agenda proposal”, Intl J Production Research, 55 (12), 3609–3629, (2017).
[2] H Boyes, B Hallaq, J Cunningham, T Watson, “The industrial internet of things (IIoT): An analysis framework”, Computers in Industry, 101, 1–12, (2018).
[3] SJ Qin, TA Badgwell, “A survey of industrial model predictive control technology”, Control Engineering Practice, 11 (7), 733–764, (2003).
[4] JB Rawlings, D Angeli, CN Bates, “Fundamentals of economic model predictive control”, 51st Conference on Decision and Control, 3851–3861, (2012).
[5] JB Rawlings, NR Patel, MJ Risbeck, CT Maravelias, MJ Wenzel, RD Turney, “Economic MPC and real-time decision making with application to large-scale HVAC energy systems”, Computers & Chemical Engineering, 114 (6), 89–98, (2018).
[6] MJ Risbeck, CT Maravelias, JB Rawlings, “Unification of Closed-Loop Scheduling and Control: State-space Formulations, Terminal Constraints, and Nominal Theoretical Properties”, Computers & Chemical Engineering, 129 (10), (2019).
[7] RS Sutton, AG Barto, “Reinforcement Learning: An Introduction”, The MIT Press, (2018).
[8] I Goodfellow, Y Bengio, A Courville, “Deep Learning”, The MIT Press, (2016).
[9] D Silver, A Huang, CJ Maddison, A Guez, L Sifre, G Van Den Driessche, J Schrittwieser, I Antonoglou, V Panneershelvam, M Lanctot, et al., “Mastering the game of Go with deep neural networks and tree search”, Nature, 529, 484–489, (2016).
[10] J Shin, TA Badgwell, KH Liu, JH Lee, “Reinforcement Learning – Overview of recent progress and implications for process control”, Computers & Chemical Engineering, 127, 282–294, (2019).
[11] TA Badgwell, KH Liu, NA Subrahmanya, WD Liu, MH Kovalski, “Adaptive PID Controller Tuning via Deep Reinforcement Learning”, U.S. Patent 1095073, granted February 9, 2021.
[12] RD Bartusiak, S Bitar, DL DeBari, BG Houk, D Stevens, B Fitzpatrick, P Sloan, “Open Process Automation: A Standards-Based, Open, Secure, Interoperable Process Control Architecture”, Control Engineering Practice, 121, 105034, (2022).