(184d) Leveraging Open Source, Big Data and the Cloud for Chemical Process Control | AIChE


Authors 

Hartman, R., New York University
Traditionally, process control has been a highly specialized area of engineering, requiring not only expensive hardware but also experts to implement it effectively. Over the last decade, advances in hardware, software, and integration have made highly advanced process control methodologies available to researchers for relatively little upfront investment. Control and automation tasks that once required tens of thousands of dollars of investment can now be accomplished with a hundred-dollar microcontroller and an internet connection. However, both industry and academia have been slow to adapt to these changes and embrace the new functionality afforded to them. Recent developments in electronics and software have greatly reduced the cost and complexity of control systems, from less expensive sensors to the ability to store petabytes of data in the cloud and search for complex multivariate trends automatically. By embracing modern trends in open-source platforms and cloud computing, researchers and organizations can greatly enhance their level of control; their data collection, aggregation, and analytical capabilities; their depth of insight into their systems; and ultimately process safety and efficiency.

The primary capability of a control system is to read data from sensors and act upon it. On the sensor front, cost reductions in silicon manufacturing have significantly lowered the price of high-precision thermocouples, pressure transducers, level sensors, accelerometers, thermal cameras, and other common components. Hardware to interpolate and process data from these sensors has also become not only much cheaper but also much more capable, robust, and easy to implement. Consider, for example, a scenario where you need to collect temperature data from each tray of a distillation column. Previously, you would need to run endless wires, install a large cabinet with a thousand-dollar thermocouple reader, and constantly worry that a loose wire could cripple the entire system. Today you could accomplish the same task with a set of Arduino boards networked over Zigbee, each with a MAX31855 thermocouple reader chip. Now all you need to run is a single low-voltage DC power cable, and the failure of a single node no longer impacts the entire system. It is also much easier to add redundancy, provision for future expansion, and remotely monitor your process, not to mention the significant cost savings. Overall, recent developments in hardware have significantly decreased the cost and complexity of control system installations.
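The fault tolerance of such a networked tray-sensor layout can be sketched in a few lines. The snippet below is a minimal illustration, not real transport code: the node names and the `fake_read` function are hypothetical stand-ins for whatever link (Zigbee, serial, TCP) the actual nodes would use, and one node is deliberately "offline" to show that a single failure no longer takes down the whole scan.

```python
# Sketch of a fault-tolerant poll loop for networked temperature nodes.
# Node IDs and the read function are hypothetical stand-ins for a real
# transport layer (Zigbee, serial, TCP).

def poll_nodes(read_fn, node_ids):
    """Poll every node; a failed node yields None instead of crashing the scan."""
    readings = {}
    for node in node_ids:
        try:
            readings[node] = read_fn(node)
        except IOError:
            readings[node] = None  # mark node offline, keep polling the rest
    return readings

# Simulated transport: tray 7's node is "unreachable" to demonstrate
# graceful degradation.
def fake_read(node):
    if node == "tray-07":
        raise IOError("node unreachable")
    return 78.5 + 0.4 * int(node.split("-")[1])  # made-up tray temperatures, deg C

if __name__ == "__main__":
    nodes = [f"tray-{i:02d}" for i in range(1, 11)]
    data = poll_nodes(fake_read, nodes)
    online = {k: v for k, v in data.items() if v is not None}
    print(f"{len(online)}/{len(nodes)} nodes reporting")
```

The same loop pattern also makes expansion trivial: adding a tray is just adding a node ID to the list.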

Another development that has significantly decreased the cost and increased the capability of controller installations is the single-board computer. Ranging from the Raspberry Pi to the NVIDIA Jetson TK1, the possibilities offered by these boards would have been hard to imagine even a few years ago. On the lower end, these computers have roughly the processing power of a top-of-the-line desktop from the late 90s, for $30 and the size of a pack of chewing gum. On the upper end, these boards can train convolutional neural networks and process data at TFLOP/s levels (that is, one trillion floating-point operations per second). This allows many of the analytical functions of a system to be moved away from centralized cabinets and out to the edge, close to the sensors and actuators. It also enables much quicker troubleshooting and modification without having to shut down the entire system. Most importantly, it opens up the possibility of control algorithms that were previously impractical at laboratory scale: instead of traditional PID loops, engineers can implement much more sophisticated nonlinear schemes thanks to the extra processing power. The scaling down of powerful computers opens many new doors for control and automation.
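One simple nonlinear scheme that the extra processing power makes easy to run at the edge is gain scheduling: switching controller gains depending on how far the process is from setpoint. The sketch below is purely illustrative — the first-order process model, the gains, and the switching threshold are all invented, not tuned for any real unit — but it shows a textbook PI loop and a gain-scheduled variant running on the same toy process.

```python
# Minimal sketch contrasting a textbook PI loop with a gain-scheduled
# nonlinear variant on a toy first-order process. The model, gains, and
# switching threshold are illustrative assumptions, not a tuned design.

def simulate(controller, setpoint=50.0, steps=200, dt=0.1):
    """Toy first-order process: dT/dt = -0.1*T + 0.5*u, starting at T = 20."""
    T, state = 20.0, {}
    for _ in range(steps):
        u = controller(setpoint - T, dt, state)
        T += dt * (-0.1 * T + 0.5 * u)
    return T

def pi_control(err, dt, state, kp=2.0, ki=0.5):
    """Plain proportional-integral law with integral carried in `state`."""
    state["i"] = state.get("i", 0.0) + err * dt
    return kp * err + ki * state["i"]

def gain_scheduled(err, dt, state):
    # Nonlinear twist: aggressive gains far from setpoint, gentle gains near it.
    kp, ki = (4.0, 1.0) if abs(err) > 5.0 else (1.0, 0.2)
    return pi_control(err, dt, state, kp, ki)

if __name__ == "__main__":
    print(round(simulate(pi_control), 2), round(simulate(gain_scheduled), 2))
```

Both controllers settle near the setpoint on this toy model; the point is that the scheduled version is a few extra lines of logic that a $30 board executes without effort, whereas a classic analog loop is locked into one set of gains.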

The final recent development that allows for a paradigm shift in control systems design is big data and cloud computing. Over the last several years it has become very simple to stream large amounts of sensor and process data to an off-site server. It is now entirely feasible to record a thousand sensor values once a second and store that data for years; in fact, storing 30 billion data points would only cost about $4 in cloud services. This enables a fascinating new capability: combing through the data for nonlinear trends between parts of the system that one would never intuitively expect to affect each other. For example, how does ambient humidity affect the temperature distribution in your columns due to condensation, or how do slight frequency changes in the AC power you receive from the grid change the flowrate of your pumps? Discovering such trends will enable engineers to adapt their control algorithms accordingly and reduce the overall number of process deviations.
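The humidity example above can be made concrete with a small trend-mining sketch. The data below is synthetic — the assumed condensation effect (tray temperature dropping slightly with ambient humidity) is invented purely for illustration, and real records would be pulled from cloud storage rather than generated in place — but the same correlation screen, run pairwise across thousands of logged channels, is how such non-obvious couplings surface.

```python
# Sketch of cross-variable trend mining on synthetic data. The assumed
# humidity/temperature coupling is hypothetical; real series would come
# from the cloud-stored process historian.
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
humidity = [random.uniform(20, 90) for _ in range(1000)]   # ambient % RH
noise = [random.gauss(0, 0.5) for _ in range(1000)]
# Hypothetical effect: condensation pulls tray temperature down as humidity rises.
tray_temp = [80.0 - 0.05 * h + e for h, e in zip(humidity, noise)]

if __name__ == "__main__":
    r = pearson(humidity, tray_temp)
    print(f"humidity vs. tray temperature: r = {r:.2f}")
```

A strongly negative coefficient here would flag the pair for closer inspection; in practice one would also look at lagged and nonlinear measures of dependence, since Pearson correlation only captures linear trends.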