(58q) An Enhanced Particle Swarm Optimization Employing Quasi-Random Numbers with Application to Efficient Removal of PFAS from Water | AIChE


Authors 

Kannan, S. - Presenter, Basking Ridge High School
Diwekar, U., Vishwamitra Research Institute / Stochastic Rese
Particle Swarm Optimization (PSO) is a metaheuristic algorithm for optimization problems, first introduced in 1995 by James Kennedy and Russell Eberhart [1]. The algorithm is based on the concept of social behavior, where particles (potential solutions) move towards the optimal solution through interactions with other particles in the search space. PSO has been widely used in various fields, including engineering, science, and finance, due to its simplicity, robustness, and efficiency. Despite its success, PSO suffers from several limitations. One of the main limitations is its slow convergence rate, which can be attributed to premature convergence of the particles towards local optima. This issue can be addressed by introducing efficient improvement techniques in PSO. Several enhancement ideas have been proposed in the past to improve the convergence rate of the PSO algorithm; they are listed below.
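As background, the standard PSO update that the enhancements below modify can be sketched in Python as follows. This is a minimal illustration; the function name, default parameter values, and swarm size are our own choices for the sketch, not taken from the paper.

```python
import random

def pso(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO minimizing f over a box given as [(lo, hi), ...]."""
    dim = len(bounds)
    # Initialize positions uniformly at random; start velocities at zero.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best-known position
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The uniform draws r1 and r2 in the velocity update are the pseudorandom numbers that the approach in this paper replaces with quasi-random sequences.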

First, the inertia weight technique was suggested by Yuhui Shi and Russell Eberhart [2]. It is a well-known approach for enhancing the convergence speed of PSO: the inertia weight controls the movement of particles in the search space, balancing exploration and exploitation. The inertia weight is updated at each iteration based on a predefined formula, which controls the speed and direction of particle movement. Various formulas have been proposed for updating the inertia weight, such as linear, nonlinear, and adaptive schedules. The choice of formula depends on the optimization problem and the PSO parameters.
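For illustration, the linear schedule mentioned above is often written as a decrease from a high to a low inertia value over the run; the 0.9-to-0.4 range below is a conventional choice shown only as an example, not a parameter from this paper.

```python
def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decrease the inertia weight from w_max to w_min over t_max
    iterations: high inertia favors exploration early, low inertia favors
    exploitation late."""
    return w_max - (w_max - w_min) * t / t_max
```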

Second, the concept of a mutation operator was proposed [3]. A mutation operator is a powerful tool for enhancing the diversity of the PSO population. The mutation operator randomly modifies the position of a particle to generate a new solution in the search space. This operation can prevent premature convergence by introducing new solutions that may lead to better solutions. The mutation operator can be applied at different stages of the PSO algorithm, such as before or after the velocity update.
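A minimal sketch of such a mutation operator, assuming Gaussian perturbations clamped back into the search bounds (the mutation rate and noise scale below are illustrative choices, not values from [3]):

```python
import random

def mutate(position, bounds, rate=0.1, scale=0.1):
    """With probability `rate` per coordinate, add Gaussian noise scaled to
    the width of the search range, then clamp into bounds."""
    new = position[:]
    for d, (lo, hi) in enumerate(bounds):
        if random.random() < rate:
            new[d] += random.gauss(0.0, scale * (hi - lo))
            new[d] = max(lo, min(hi, new[d]))
    return new
```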

Third, the Opposition-based Learning technique was suggested [4]. Opposition-based learning (OBL) is a technique that uses the opposite of the current best solution to generate new solutions. The idea behind OBL is that the opposite of the best solution may represent a good direction for exploration in the search space. OBL can improve the diversity and convergence speed of PSO by generating new solutions that are different from the current population.
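For a box-constrained search space with per-dimension bounds [a, b], the opposite of a point x is commonly defined as a + b - x, i.e., a reflection about the centre of the box. A minimal sketch:

```python
def opposite(position, bounds):
    """Opposition-based learning: reflect a point about the centre of the
    search box, dimension by dimension (x -> lo + hi - x)."""
    return [lo + hi - x for x, (lo, hi) in zip(position, bounds)]
```

In an OBL-enhanced PSO, one would typically evaluate both a candidate and its opposite and keep whichever has the better objective value.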

Fourth, hybridization with other metaheuristics has been proposed [5]. This is a common approach for improving the efficiency of PSO: the idea is to combine the strengths of different metaheuristics to overcome their weaknesses. For example, PSO can be combined with genetic algorithms (GA), simulated annealing (SA), or ant colony optimization (ACO). The hybridization approach can enhance the exploration and exploitation capabilities of PSO, leading to better solutions in less time.

Fifth, dynamic parameter tuning was presented [6]. The PSO parameters, such as the swarm size, maximum velocity, and acceleration coefficients, have a significant impact on the performance of the algorithm. Dynamic parameter tuning is a technique that adjusts the PSO parameters during the optimization process based on the search history. The idea is to adapt the PSO parameters to the problem characteristics and the search progress to improve the convergence speed and solution quality. In conclusion, efficient improvement techniques in PSO can enhance the convergence speed and solution quality of the algorithm.
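One widely used scheme of this kind is time-varying acceleration coefficients, which gradually shift emphasis from a particle's own memory (c1) to the swarm's best solution (c2) as the run progresses. The endpoint values below are conventional examples, not parameters from this paper:

```python
def tvac(t, t_max, c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
    """Time-varying acceleration coefficients: interpolate c1 down and c2 up
    over the run, moving the swarm from individual exploration toward
    collective exploitation."""
    frac = t / t_max
    c1 = c1_start + (c1_end - c1_start) * frac
    c2 = c2_start + (c2_end - c2_start) * frac
    return c1, c2
```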

The approaches discussed in this paper, including the inertia weight technique, mutation operator, opposition-based learning, hybridization with other metaheuristics, and dynamic parameter tuning, can be used individually or in combination to address the limitations of PSO. The choice of approach depends on the problem characteristics and the available computational resources. However, most of these approaches yield problem-dependent solution methods. In this paper, we propose a new approach in which the pseudorandom numbers used in PSO are replaced by quasi-random numbers such as Halton and Sobol sequences, while maintaining the k-dimensional uniformity of these sequences. This not only provides a generalized approach applicable to any kind of optimization problem, but the method can also be used in conjunction with the earlier enhancement techniques, such as the inertia weight technique, mutation operator, opposition-based learning, hybridization with other metaheuristics, and dynamic parameter tuning.
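To illustrate what such quasi-random samples look like, the Halton sequence can be generated from radical inverses in pairwise-coprime bases. This is a self-contained sketch of the standard construction; the Sobol construction is more involved, and the paper's actual sampling scheme, including how k-dimensional uniformity is maintained inside PSO, is not reproduced here.

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse: mirror the base-`base` digits of n
    about the radix point, e.g. n=3, base=2: 11 -> 0.11 = 0.75."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n_points, dim, primes=(2, 3, 5, 7, 11, 13)):
    """First n_points of the dim-dimensional Halton sequence in [0, 1)^dim,
    using one prime base per dimension."""
    return [[radical_inverse(i, primes[d]) for d in range(dim)]
            for i in range(1, n_points + 1)]
```

Unlike independent pseudorandom draws, successive Halton points fill the unit hypercube evenly, which is the low-discrepancy property the proposed enhancement exploits.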

In this research, two enhanced versions of PSO (one using Sobol quasi-random numbers and the other using Halton quasi-random numbers) were proposed with the intention of speeding up convergence of the standard PSO algorithm. To test the efficiency improvement of the two proposed enhancements, the number of iterations taken to reach the optimum of the well-known Cigar, Ellipsoid, and Paraboloid benchmark functions, along with the number of iterations taken to obtain an optimal path for the famous Travelling Salesman Problem (TSP), were recorded. Improvement in terms of the optimum of the objective function and the number of iterations needed to reach it was then calculated for both the Sobol-enhanced PSO and the Halton-enhanced PSO, relative to the standard PSO, which uses Monte Carlo random number sampling. All the results for each of the benchmark functions and the TSP unanimously show efficiency improvement due to the use of Sobol and Halton sequences, with Sobol sequences proving even more efficient than Halton sequences. Additionally, we observed that the more decision variables an optimization problem has, the greater the improvement due to Sobol and Halton sequences. In conclusion, both enhancements of the standard PSO presented in this research, one utilizing Sobol quasi-random numbers and the other utilizing Halton quasi-random numbers, consistently show efficiency improvement as well as a better optimum, meaning that they successfully increase the speed of convergence of the standard PSO algorithm.

To show the effectiveness of our algorithm, we plan to solve an important real-world environmental problem. Per- and polyfluoroalkyl substances (PFAS) are a group of human-made chemicals that includes PFOA, PFOS, GenX, and many other chemicals. PFAS have been manufactured and used in a variety of industries around the globe, including in the United States, since the 1940s. PFOA and PFOS are very persistent in the environment and in the human body, meaning they don't break down and can accumulate over time. Evidence shows that exposure to PFAS can lead to adverse human health effects. The US Environmental Protection Agency (EPA) has established a drinking water advisory level of 70 parts per trillion for perfluorooctanoic acid (PFOA) and perfluorooctane sulfonic acid (PFOS), individually or combined. Stringent regulations require the development of treatment technologies that can control the migration of PFAS or clean up impacted sites cost-effectively. In this project, we focus on the de novo formulation of novel model adsorbents for the effective removal of PFAS. We use computer-aided molecular design (CAMD) methods based on our new algorithm for developing new environmentally benign adsorbents for PFAS removal.







References



  1. J. Kennedy and R. Eberhart, "Particle swarm optimization," Proceedings of ICNN'95 - International Conference on Neural Networks, Perth, WA, Australia, 1995, pp. 1942-1948 vol.4, doi: 10.1109/ICNN.1995.488968.
  2. J. C. Bansal, P. K. Singh, M. Saraswat, A. Verma, S. S. Jadon and A. Abraham, "Inertia Weight strategies in Particle Swarm Optimization," 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 2011, pp. 633-640, doi: 10.1109/NaBIC.2011.6089659.
  3. Ning Li, Yuan-Qing Qin, De-Bao Sun and Tong Zou, "Particle swarm optimization with mutation operator," Proceedings of 2004 International Conference on Machine Learning and Cybernetics (IEEE Cat. No.04EX826), Shanghai, China, 2004, pp. 2251-2256 vol.4, doi: 10.1109/ICMLC.2004.1382174.
  4. Zhou, F. Li, J. H. Abawajy and C. Gao, "Improved PSO Algorithm Integrated With Opposition-Based Learning and Tentative Perception in Networked Data Centres," in IEEE Access, vol. 8, pp. 55872-55880, 2020, doi: 10.1109/ACCESS.2020.2981972.
  5. Zhou, F. Li, J. H. Abawajy and C. Gao, "Improved PSO Algorithm Integrated With Opposition-Based Learning and Tentative Perception in Networked Data Centres," in IEEE Access, vol. 8, pp. 55872-55880, 2020, doi: 10.1109/ACCESS.2020.2981972.
  6. Zhou, F. Li, J. H. Abawajy and C. Gao, "Improved PSO Algorithm Integrated With Opposition-Based Learning and Tentative Perception in Networked Data Centres," in IEEE Access, vol. 8, pp. 55872-55880, 2020, doi: 10.1109/ACCESS.2020.2981972.