(144j) Machine Learning-Based View Factor Modelling in Polydisperse Particle Beds Including Walls

Authors 

Tausendschön, J. - Presenter, Graz University of Technology
Radl, S., Graz University of Technology

Introduction

Radiative heat transfer is the dominant heat transfer mechanism at temperatures above 700 °C. Such high temperatures appear in a wide range of industrial processes: pebble bed reactors, laser sintering, and high-temperature particle oxidation or reduction processes [1]. If no participating fluid is involved, or the contribution of this fluid is negligible, radiative heat transfer in these processes is typically modelled via surface-to-surface radiation. A prominent approach for surface-to-surface radiation modelling is to calculate the heat flux based on view factors. A view factor (in the literature also named ‘configuration factor’) is the fraction of the radiation leaving an emitting surface that strikes an absorbing surface. Industrial-scale processes feature wide particle size distributions. Due to the large number of particles, it is computationally infeasible to numerically integrate view factors based on first principles, or to use Monte Carlo-based algorithms [2] for their calculation. Also, literature methods that aim at decreasing the computational effort [3] are still too demanding to be generally adopted for polydisperse systems. Moreover, with the exception of a single study [4], wall effects are typically neglected.
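For reference, the classical definition of the view factor between two diffuse surfaces, which the numerical and Monte Carlo approaches above approximate, is the standard double-area integral from the radiative heat transfer literature (e.g., [1]):

```latex
% View factor from surface 1 to surface 2:
% \theta_1, \theta_2 are the angles between the surface normals and the
% connecting line of length S between the area elements dA_1 and dA_2.
F_{1 \to 2} = \frac{1}{A_1} \int_{A_1} \int_{A_2}
              \frac{\cos\theta_1 \, \cos\theta_2}{\pi S^2} \, \mathrm{d}A_2 \, \mathrm{d}A_1
```

The reciprocity relation A_1 F_{1→2} = A_2 F_{2→1} follows directly from the symmetry of the integrand, which is why evaluating this integral for every particle pair in a large bed quickly becomes intractable.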

Methods

The use of Machine Learning methods, especially Deep Neural Networks (DNNs), has grown significantly in simulation science fields such as Computational Fluid Dynamics (CFD) and the Discrete Element Method (DEM). Modelling radiative heat transfer and view factors has also come into focus [5,6]. In our previous work [7], we tackled the fundamental issues described above, i.e., wall effects and the excessive computational demand, by using DNNs. For monodisperse systems, the created DNNs were able to accurately predict view factors among particles, as well as between particles and walls, at comparatively low computational cost.

To solve the remaining issue of polydispersity, we create a dataset of view factors that covers a wide range of particle radii ratios, different total particle volume fractions, as well as different particle-species volume fractions. In a first step, we directly apply our DNNs trained for monodisperse interactions to polydisperse interactions and investigate the effects on the view factor distribution and the input data. Subsequently, we investigate the effect of the following additional input features for DNNs trained on polydisperse packings: (i) variants of the size ratio between the interacting objects, (ii) information about the particle volume fraction, and (iii) statistics of the (size) species distribution. Feature selection is typically based on correlation coefficients such as the Pearson and Spearman coefficients. For the use of pretrained models in, e.g., DEM simulations, additional aspects such as the computational cost of evaluating a feature and its independence of the system size are important. Therefore, the calculated correlation coefficients are weighed against these other aspects, and the most promising input features are selected.
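A minimal sketch of such a correlation-based feature screening is given below; the feature names and synthetic data are illustrative assumptions, not the exact features or data used in the study:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Illustrative candidate features for a particle pair (i, j); the actual
# feature set of the study may differ.
rng = np.random.default_rng(0)
n = 10_000
features = {
    "radius_ratio":       rng.uniform(0.5, 4.0, n),  # r_i / r_j
    "volume_fraction":    rng.uniform(0.1, 0.6, n),  # local solids fraction
    "species_size_stdev": rng.uniform(0.0, 1.0, n),  # spread of the size PSD
}
view_factor = rng.uniform(0.0, 0.2, n)  # placeholder targets

# Rank candidates by linear (Pearson) and monotonic (Spearman) correlation
# with the target view factor.
for name, values in features.items():
    r_p, _ = pearsonr(values, view_factor)
    r_s, _ = spearmanr(values, view_factor)
    print(f"{name:20s}  Pearson={r_p:+.3f}  Spearman={r_s:+.3f}")
```

In practice the ranking would then be weighed against the per-evaluation cost of each feature and whether it can be computed independently of the system size, as described above.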

Besides DNNs, other Machine Learning methods such as (i) decision tree-based Random Forest regression [8], and (ii) gradient boosting machines (XGBoost) for regression [9] appear promising for view factor modelling. Therefore, our DNN-based models are compared to these other regressors. All three regression methods rely on the same selected input features and the same cost function. To achieve the best possible generalization for the DNNs, the Random Forest regressor, and the XGBoost regressor, all model parameters are optimized by a randomized search approach [10]. The overall quality of prediction is measured by the mean squared error (MSE), which also represents the cost function of the regression. Since the MSE is significantly more influenced by large view factors, and hence does not reflect the overall quality of the view factor prediction, the coefficient of determination (R²) between the predictions and the targets is used as an additional metric. Another quantitative measure is the coefficient of determination R²spread(xu, xo), evaluated only for particle distances within an upper boundary xo (set to half the domain size plus the mean particle diameter) and a lower boundary xu (set to half the domain size minus the mean particle diameter).
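A minimal sketch of how such a comparison could be set up with scikit-learn and xgboost follows; the synthetic data, parameter ranges, and distance bounds xu, xo are illustrative assumptions, and the DNN branch is omitted for brevity:

```python
import numpy as np
from scipy.stats import loguniform, randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from xgboost import XGBRegressor

# Placeholder data: X holds the selected input features, y the view factors,
# dist the pair distances (domain size L = 1 assumed here).
rng = np.random.default_rng(0)
X = rng.uniform(size=(5_000, 3))
y = rng.uniform(0.0, 0.2, size=5_000)
dist = rng.uniform(0.0, 1.0, size=5_000)
X_tr, X_te, y_tr, y_te, _, d_te = train_test_split(X, y, dist, random_state=0)

# Randomized hyper-parameter search [10] over illustrative parameter ranges,
# with the MSE as the common cost function.
searches = {
    "RandomForest": RandomizedSearchCV(
        RandomForestRegressor(random_state=0),
        {"n_estimators": randint(100, 500), "max_depth": randint(5, 30)},
        n_iter=10, scoring="neg_mean_squared_error", random_state=0),
    "XGBoost": RandomizedSearchCV(
        XGBRegressor(random_state=0),
        {"n_estimators": randint(100, 500),
         "learning_rate": loguniform(1e-3, 3e-1)},
        n_iter=10, scoring="neg_mean_squared_error", random_state=0),
}

def r2_spread(y_true, y_pred, d, x_u, x_o):
    """R² restricted to pairs whose distance lies in [x_u, x_o]."""
    mask = (d >= x_u) & (d <= x_o)
    return r2_score(y_true[mask], y_pred[mask])

for name, search in searches.items():
    y_hat = search.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "MSE =", mean_squared_error(y_te, y_hat),
          "R² =", r2_score(y_te, y_hat),
          "R²spread =", r2_spread(y_te, y_hat, d_te, 0.45, 0.55))
```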

Results and Conclusion

As expected, the direct application of DNNs trained for monodisperse interactions does not yield satisfying results, since monodisperse and polydisperse view factor distributions show significant differences. However, it is shown that adding particle size information to the monodisperse DNNs significantly improves the performance in polydisperse beds. These adjusted DNNs achieve the same prediction quality for polydisperse systems as for monodisperse systems. The latter holds for particle-particle as well as particle-wall view factors, although particle-wall view factors show a different view factor distribution than particle-particle interactions.

This different data structure also explains the twofold results when comparing the three ML methods. For particle-particle view factors, the DNN yields the best results, with an MSE of 5.299e-7, an R² of 0.9806, and an R²spread of 0.8056. The MSEs of the Random Forest regressor and the XGBoost regressor are 5.254e-7 and 4.160e-7, the R² values are 0.9808 and 0.9848, and the R²spread values are 0.1055 and 0.2004, respectively. Thus, especially the DNN-based prediction of view factors deep in the particle bed, quantified by R²spread, is significantly better than that of the other two regressors. For particle-wall view factors, the Random Forest regressor is clearly superior: its MSE is 1.308e-11, its R² is 0.999, and its R²spread is 0.999. The MSEs of the DNN-based regressor and the XGBoost regressor are 1.106e-7 and 3.139e-7, the R² values are 0.9772 and 0.9353, and the R²spread values are 0.6027 and 0.2373, respectively.

In terms of overall performance, the DNNs show a higher general applicability, while for specialized tasks other Machine Learning methods perform better, e.g., Random Forest regression for particle-wall view factors. The DNNs also benefit from being easier to integrate into simulation frameworks like CFD or DEM. Thus, a combination of two DNN-based models, one for particle-particle interactions and one for particle-wall interactions, is recommended for general heat radiation modelling in DEM simulations.
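To illustrate the recommended coupling, below is a minimal sketch of how two pretrained view factor models could supply per-particle radiative source terms in a DEM energy balance. The scikit-learn-style predict() interface, the function and argument names are assumptions, and black-body surfaces are assumed for simplicity; the actual models and coupling in the study may differ:

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiative_source_terms(model_pp, model_pw, feats_pp, feats_pw,
                           pairs, wall_ids, T_p, T_w, A_p):
    """Per-particle radiative heat sources from predicted view factors.

    Assumes black-body surfaces, i.e. Q_ij = sigma * A_i * F_ij * (T_j^4 - T_i^4).
    pairs    : (n_pp, 2) particle index pairs considered for exchange
    wall_ids : (n_pw,)   particle indices interacting with the wall
    """
    Q = np.zeros_like(T_p)

    # Particle-particle exchange via the first DNN.
    F_pp = model_pp.predict(feats_pp)
    i, j = pairs[:, 0], pairs[:, 1]
    q = SIGMA * A_p[i] * F_pp * (T_p[j] ** 4 - T_p[i] ** 4)
    np.add.at(Q, i, q)
    np.add.at(Q, j, -q)   # reciprocity: what particle i gains, j loses

    # Particle-wall exchange via the second DNN.
    F_pw = model_pw.predict(feats_pw)
    q_wall = SIGMA * A_p[wall_ids] * F_pw * (T_w ** 4 - T_p[wall_ids] ** 4)
    np.add.at(Q, wall_ids, q_wall)

    return Q  # [W] per particle, fed into the DEM energy balance
```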

References

[1] M.F. Modest, Radiative Heat Transfer, second ed., Academic Press, 2003.

[2] T. Walker, S.C. Xue, G.W. Barton, Numerical determination of radiative view factors using ray tracing, J. Heat Transfer. 132 (2010) 1–6. https://doi.org/10.1115/1.4000974.

[3] T. Forgber, S. Radl, A novel approach to calculate radiative thermal exchange in coupled particle simulations, Powder Technol. 323 (2018) 24–44. https://doi.org/10.1016/j.powtec.2017.09.014.

[4] E.F. Johnson, İ. Tarı, D. Baker, Radiative heat transfer in the discrete element method by distance based approximations of radiation distribution factors, Powder Technol. (2020). https://doi.org/10.1016/j.powtec.2020.11.050.

[5] H.H. Kang, M. Kaya, S. Hajimirza, A data driven artificial neural network model for predicting radiative properties of metallic packed beds, J. Quant. Spectrosc. Radiat. Transf. 226 (2019) 66–72. https://doi.org/10.1016/j.jqsrt.2019.01.013.

[6] H. Wu, N. Gui, X. Yang, J. Tu, S. Jiang, A matrix model of particle-scale radiative heat transfer in structured and randomly packed pebble bed, Int. J. Therm. Sci. 153 (2020) 106334. https://doi.org/10.1016/j.ijthermalsci.2020.106334.

[7] J. Tausendschön, S. Radl, Deep Neural Network-based heat radiation modelling between particles and between walls and particles, Submitted Paper, (2021).

[8] L. Breiman, Random Forests, Mach. Learn. 45 (2001) 5–32.

[9] T. Chen, C. Guestrin, XGBoost: A scalable tree boosting system, Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. (2016) 785–794. https://doi.org/10.1145/2939672.2939785.

[10] J. Bergstra, Y. Bengio, Random search for hyper-parameter optimization, J. Mach. Learn. Res. 13 (2012) 281–305.