(268a) First Steps Toward Developing an Image-Controlled Material Assembly

Authors 

Messina, D. - Presenter, Wayne State University
Durand, H., Wayne State University
In recent years, the rise of digital sensing modalities such as image sensors has opened opportunities for the development of advanced control laws that utilize this data. One such application is image-based predictive control, which refers to using predictions of how a scene will look in the future as indicators of predicted process states that a controller then uses to determine optimal inputs. Image-based predictive control has seen applications in areas such as visual servoing [3] and autonomous vehicles [1], in which either mathematical transformations or artificial intelligence [2] is used to predict future states of the system. Image-based control has potential in a variety of applications; one potential future application is the control of materials. Controlled material assembly is a concept for next-generation materials, referring to the use of control to modify the properties or structure of an assembly. One might imagine images as a key sensing mechanism in the control of materials.

This talk focuses on the development of a framework for creating a "mock-up" of an image-controlled material that can be used as a first step toward designing a controlled material with desired properties that also responds to images. In general, developing a material that responds to images in a desired fashion requires a number of components, including: 1) identifying the desired material behavior (and then how to achieve it); 2) developing the controller's working specifications; and 3) developing the appropriate algorithm for image-based control/processing. We would like to break this problem into pieces, so that the major functionality of the material is assessed and designed first, after which it can be considered whether, and how, a material meeting the resulting requirements could be designed.

The simulation considers the design of a material specified to break when certain visuals appear. We would like it to exhibit this behavior progressively (i.e., to predict that it might soon see the problematic visual that would cause it to break, and to begin preparing for this so that there is not a large lag in breaking once the problematic visual is actually observed). Because the image components are central to this simulation, we develop the control algorithm and image-handling methods in the computer graphics software Blender. The mock-up material assembly consists of cubes that can be ejected from the assembly toward a new position at a rate governed by a first-order dynamic model. A camera placed in the Blender scene represents the image sensor. It is pointed at a separate cube; when that cube's vertices exhibit certain colors at certain positions, this signifies the problematic condition that should cause the material to break.

An image prediction algorithm will be used to predict where the vertices might end up based on their current positions as rendered by the camera in Blender, together with different possible concepts of what the cube might look like that could result in different future visuals. This algorithm builds on our prior preliminary results using OpenGL [4] for coding different visuals of a cube. Its goal is to modify the camera position to reveal the nature of the cube visuals, and we will discuss different potential formulations of the image prediction-based control algorithm (e.g., different objective functions) and their effects on the movement of the camera and on what it sees. The predictions will be made using image transformations related to those that would be used in OpenGL for image rendering, but coded in Blender's Python programming interface to avoid the rendering time. The vertices of the cube in the camera's field of view will be captured using a render in Blender that is then processed to provide the vertex information. The transformations of the vertices in the image prediction algorithm will follow the predicted movement of the cube, yielding predictions of what the images in the camera's field of view will look like at discrete times in the future. In this example, each face of the cube is a different color, and the cube rotates continuously. We will demonstrate how the controller ejects cubes from the material under different algorithm formulations and cube visuals, to better understand the material's responses and the tuning of both its behavior and the control and image prediction algorithms for this application. This initial mock-up will then be used to suggest what types of properties might be desirable in a material with this type of responsiveness.
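
To make the first-order ejection dynamics concrete, a minimal Python sketch of one way such a model could be stepped in time is given below. The time constant, step size, and target position are assumed illustrative values, not parameters from the talk.

```python
# Minimal sketch of the assumed first-order ejection dynamics: an ejected
# cube's position x relaxes toward a target x_t per dx/dt = (x_t - x) / TAU.
# TAU, DT, and the target below are assumed illustrative values.
TAU = 2.0  # time constant of the ejection response, s (assumed)
DT = 0.1   # integration step, s (assumed)

def step_ejection(position, target):
    """One explicit-Euler step of dx/dt = (target - x) / TAU, per coordinate."""
    return [x + DT * (xt - x) / TAU for x, xt in zip(position, target)]

# Usage: march an ejected cube from the assembly toward its new position.
pos, target = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
for _ in range(100):
    pos = step_ejection(pos, target)
```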
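
The vertex projection and prediction step could be sketched with Blender's Python API as follows. The object names, rotation axis, rotation rate, and prediction step are assumptions for illustration; the `world_to_camera_view` utility from `bpy_extras` projects a world-space point into the camera's normalized view coordinates, playing a role analogous to the OpenGL-style transformations mentioned above.

```python
# Minimal sketch using Blender's Python API (bpy). The object names "Cube"
# and "Camera", the rotation axis, OMEGA, and DT are assumptions for
# illustration; run from Blender's scripting workspace.
import bpy
from bpy_extras.object_utils import world_to_camera_view
from mathutils import Matrix

scene = bpy.context.scene
cube = bpy.data.objects["Cube"]      # assumed object name
camera = bpy.data.objects["Camera"]  # assumed camera name

OMEGA = 0.5  # assumed rotation rate about the cube's local z-axis, rad/s
DT = 0.1     # prediction step, s (assumed)

def predicted_vertex_views(horizon_steps):
    """Project the cube's vertices into the camera's normalized view at each
    future step, assuming the cube keeps rotating about its local z-axis."""
    views = []
    for k in range(1, horizon_steps + 1):
        rot = Matrix.Rotation(OMEGA * DT * k, 4, 'Z')
        # rotate in the cube's local frame, then transform to world space
        coords = [cube.matrix_world @ (rot @ v.co) for v in cube.data.vertices]
        # normalized camera coordinates: x, y in [0, 1] when in frame, z = depth
        views.append([world_to_camera_view(scene, camera, c) for c in coords])
    return views
```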
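
As one illustrative objective formulation for repositioning the camera (our own example, not necessarily one of the formulations discussed in the talk), the sketch below scores candidate camera locations by how many predicted vertices remain in frame over the prediction horizon, continuing the sketch above.

```python
# Illustrative camera-repositioning objective: choose the candidate camera
# location whose predicted views keep the most cube vertices in frame.
# Reuses camera and predicted_vertex_views from the sketch above.
def in_frame(v):
    # world_to_camera_view yields x, y in [0, 1] for points inside the frame
    # and z > 0 for points in front of the camera
    return 0.0 <= v.x <= 1.0 and 0.0 <= v.y <= 1.0 and v.z > 0.0

def best_camera_position(candidates, horizon_steps):
    best_loc, best_score = None, -1
    for loc in candidates:
        camera.location = loc
        bpy.context.view_layer.update()  # refresh matrices after the move
        score = sum(in_frame(v)
                    for views in predicted_vertex_views(horizon_steps)
                    for v in views)
        if score > best_score:
            best_loc, best_score = loc, score
    camera.location = best_loc
    return best_loc
```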

[1] Lee, Daewon, Hyon Lim, and H. Jin Kim. "Obstacle avoidance using image-based visual servoing integrated with nonlinear model predictive control." 2011 50th IEEE Conference on Decision and Control and European Control Conference. IEEE, 2011.

[2] Oprea, Sergiu, et al. "A review on deep learning techniques for video prediction." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.6 (2022): 2806-2826.

[3] Sheng, Huaiyuan, Eric Shi, and Kunwu Zhang. "Image-based visual servoing of a quadrotor with improved visibility using model predictive control." 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE). IEEE, 2019.

[4] Oyama, Henrique, et al. "Test Methods for Image-Based Information in Next-Generation Manufacturing." IFAC-PapersOnLine 55.7 (2022): 73-78.