(73b) Three Disruptive Technologies Coming Soon to Manufacturing and Process Control

Authors 

Badgwell, T. - Presenter, Collaborative Systems Integration
Bartusiak, R. D., Collaborative Systems Integration
We are in the midst of a fourth industrial revolution (Industry 4.0 [1]), involving the large-scale automation of traditional manufacturing and industrial practices, made possible by recent developments in mathematical algorithms, computer hardware, and internet connectivity, the last exemplified by the Industrial Internet of Things (IIoT) [2]. While much of this work can be considered evolutionary in nature, in this presentation we highlight three emerging technologies that appear to be truly disruptive; that is, they are likely to have such a large impact that they will change the way theoreticians and practitioners think about and accomplish manufacturing and process control. These technologies are Economic Model Predictive Control (EMPC), Deep Reinforcement Learning (DRL), and Open Process Automation (OPA).

Economic MPC (EMPC) is a relatively new technology that combines economic optimization with Model Predictive Control (MPC) [3], two functions that are traditionally implemented separately. While the theory was worked out several years ago [4], EMPC applications have only recently begun to appear. Professor Jim Rawlings and co-workers, for example, presented a successful EMPC implementation for the Stanford University campus heating and cooling system [5]. Recent theoretical work has shown that scheduling problems, which are usually approached from the point of view of static optimization, can also be treated as a special case of closed-loop EMPC [6]. This unification of closed-loop scheduling, economic optimization, and dynamic control has shed new light on problems such as rescheduling in the face of disturbances, and it gives academics a completely new framework for viewing and analyzing scheduling problems. Practitioners now have, for the first time, the prospect of combining three disparate levels of the traditional control hierarchy into a single, harmonious layer.
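
To make the contrast with conventional tracking MPC concrete, here is a minimal sketch of the open-loop problem an EMPC controller solves at each sampling instant, written in generic notation in the spirit of [4]; the symbols (economic stage cost $\ell_e$, model $f$, constraint set $\mathbb{Z}$) are illustrative rather than taken from the cited works:

```latex
% Generic EMPC problem solved at each sampling instant (state x, input u):
\begin{aligned}
\min_{u_0,\ldots,u_{N-1}} \quad & \sum_{k=0}^{N-1} \ell_e(x_k, u_k)
  && \text{economic stage cost, e.g. energy cost minus product value} \\
\text{s.t.} \quad & x_{k+1} = f(x_k, u_k), \quad x_0 = x(t)
  && \text{process model, initialized at the measured state} \\
  & (x_k, u_k) \in \mathbb{Z}
  && \text{operating constraints}
\end{aligned}
```

The structure is identical to that of tracking MPC; the disruption is entirely in the stage cost. Replacing the usual quadratic penalty on setpoint deviations with a direct economic objective lets one controller decide both where to operate and how to get there, which is why real-time optimization, scheduling, and regulatory control can plausibly collapse into a single layer.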

Reinforcement Learning (RL) is a Machine Learning (ML) technology in which a computer agent learns, through trial and error, the best way to accomplish a particular task [7]. Deep Learning (DL) is a technology in which neural networks with many intermediate layers are used to model relationships [8]. Using DL to parametrize the policy and value function of an RL agent yields Deep Reinforcement Learning (DRL), which allows an agent to achieve superhuman performance on some tasks. In 2016, for example, the DRL agent AlphaGo soundly defeated world Go champion Lee Sedol [9]. Applications of this technology to manufacturing and process control systems are currently under study [10]. DRL is unlikely to replace currently successful control algorithms such as PID and MPC; rather, it will take over some of the mundane tasks that humans now perform to manage automation and control systems. For example, it appears that a DRL agent can learn to tune PID loops effectively [11]. Other possibilities include advising operators during transient and upset conditions, mitigating disturbances such as weather events, and detecting and mitigating unsafe operations [10].
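
As a concrete illustration of the trial-and-error idea, the following is a toy sketch, not the method of [11], with a simple Gaussian policy standing in for a deep network so the example stays dependency-free: a REINFORCE-style agent learns PI gains for a simulated first-order process using only a reward signal. The plant model, reward, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_episode(kp, ki, dt=0.1, steps=200, tau=5.0):
    """Simulate a unit setpoint step on tau*x' = -x + u under PI control;
    return the reward (negative integral absolute error)."""
    x, integ, iae = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - x                 # tracking error for a unit setpoint
        integ += err * dt             # running integral of the error
        u = kp * err + ki * integ     # PI control law
        x += dt * (-x + u) / tau      # explicit Euler step of the process
        iae += abs(err) * dt          # accumulate integral absolute error
    return -iae                       # higher reward = better tracking

# Gaussian policy over log-gains, so that sampled gains stay positive.
mean, sigma, lr = np.array([0.0, -1.0]), 0.3, 0.05

for _ in range(300):
    eps = rng.standard_normal((16, 2))        # exploration noise, batch of 16
    gains = np.exp(mean + sigma * eps)        # candidate (kp, ki) pairs
    rewards = np.array([run_episode(kp, ki) for kp, ki in gains])
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)   # baseline
    mean += lr * (adv[:, None] * eps).mean(axis=0) / sigma      # REINFORCE step

kp, ki = np.exp(mean)
print(f"learned gains: kp={kp:.2f}, ki={ki:.2f}, reward={run_episode(kp, ki):.3f}")
```

The agent never sees the process model; it observes only rewards, which is precisely the property that makes DRL attractive for automation chores where accurate models are rarely available.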

The industrial automation marketplace, comprising Distributed Control System (DCS), Programmable Logic Controller (PLC), and Supervisory Control and Data Acquisition (SCADA) technology offerings, will soon experience a historic, game-changing disruption with the emergence of Open Process Automation (OPA) technology. Manufacturers, whose innovations have been constrained for decades by the limitations of closed, proprietary systems, will soon enjoy the benefits of open, interoperable, resilient, secure-by-design automation systems, made possible by the development of the consensus-based Open Process Automation Standard (O-PAS) by the Open Process Automation Forum (OPAF) [12]. Once O-PAS certified automation systems become widespread, vendors will see the market for their products and services expand significantly as the visions of Industry 4.0 and the IIoT are realized. Academics and technology developers will find more opportunities to test their solutions as deployment becomes easier. Dr. Don Bartusiak, co-director of OPAF, summarizes the Forum's progress to date in a paper published recently in Control Engineering Practice [12].

References

[1] Y Liao, F Deschamps, EdFR Loures, LFP Ramos, “Past, present, and future of Industry 4.0 - a systematic literature review and research agenda proposal”, International Journal of Production Research, 55 (12), 3609-3629, (2017).

[2] H Boyes, B Hallaq, J Cunningham, T Watson, “The industrial internet of things (IIoT): An analysis framework”, Computers in Industry, 101, 1–12, (2018).

[3] SJ Qin, TA Badgwell, “A survey of industrial model predictive control technology”, Control Engineering Practice, 11 (7), 733-764, (2003).

[4] JB Rawlings, D Angeli, CN Bates, “Fundamentals of economic model predictive control”, 51st IEEE Conference on Decision and Control, 3851-3861, (2012).

[5] JB Rawlings, NR Patel, MJ Risbeck, CT Maravelias, MJ Wenzel, RD Turney, “Economic MPC and real-time decision making with application to large-scale HVAC energy systems”, Computers & Chemical Engineering, 114 (6), 89-98, (2018).

[6] MJ Risbeck, CT Maravelias, JB Rawlings, “Unification of Closed-Loop Scheduling and Control: State-space Formulations, Terminal Constraints, and Nominal Theoretical Properties”, Computers & Chemical Engineering, 129 (10), (2019).

[7] RS Sutton, AG Barto, “Reinforcement Learning: An Introduction”, 2nd ed., The MIT Press, (2018).

[8] I Goodfellow, Y Bengio, A Courville, “Deep Learning”, The MIT Press, (2016).

[9] D Silver, A Huang, CJ Maddison, A Guez, L Sifre, G van den Driessche, J Schrittwieser, I Antonoglou, V Panneershelvam, M Lanctot, et al., “Mastering the game of Go with deep neural networks and tree search”, Nature, 529, 484-489, (2016).

[10] J Shin, TA Badgwell, KH Liu, JH Lee, “Reinforcement Learning – Overview of recent progress and implications for process control”, Computers & Chemical Engineering, 127, 282-294 (2019).

[11] TA Badgwell, KH Liu, NA Subrahmanya, WD Liu, MH Kovalski, “Adaptive PID Controller Tuning via Deep Reinforcement Learning”, U.S. Patent 10,915,073, granted February 9, 2021.

[12] RD Bartusiak, S Bitar, DL DeBari, BG Houk, D Stevens, B Fitzpatrick, P Sloan, “Open Process Automation: A Standards-Based, Open, Secure, Interoperable Process Control Architecture”, Control Engineering Practice, 121, 105034, (2022).