
(599q) On the Application of the Cascade Optimization Algorithm in Distributed Computer Networks and Grids




Franjo Cecelja (a), Antonis Kokossis (b), Du Du (a)

(a) PRISE, FEPS, University of Surrey, Guildford, Surrey, GU2 7UB, U.K., f.cecelja@surrey.ac.uk

(b) School of Engineering, National Technical University of Athens, Zografou Campus, 9, GR-15780, Athens, Greece, akokossis@mail.ntua.gr

Abstract



The paper builds on the Cascade Optimization Algorithm (COA), a recent approach with strong potential to tackle complex problems using modular structures (pools). Figure 1 illustrates the pool structure, the coordination task, and the flows of calculations in the course of optimization. COA deploys Markov processes by means of peer pools populated by clients that propagate the Markov chains, and it decomposes peer-to-peer communication through a purely parallel coordination task. Without any burden on the optimization, sophistication can be added to this task by customizing the selection of chains and upgrading intermediate data into information and knowledge. Dotted lines in Figure 1 highlight exploitable information links with the coordination layer. A minimal sketch of this pool and coordination structure is given after Figure 1.

Figure 1. Pool structure, coordination task, and flows of calculations in the course of optimization.
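The listing below is a minimal Python sketch of one cascade stage under simplifying assumptions: a pool of clients each propagating a simple Metropolis-style Markov chain, followed by a coordination step that selects the most promising chains. The names (propagate_chain, coordination_task), the toy objective, and the elitist selection rule are illustrative assumptions, not a reproduction of the published algorithm.

```python
import math
import random

def propagate_chain(objective, state, steps=200, temperature=1.0):
    # One client propagating a Metropolis-style Markov chain; an
    # illustrative stand-in for chain propagation inside a COA pool.
    best_state, best_value = list(state), objective(state)
    current_state, current_value = list(state), best_value
    for _ in range(steps):
        candidate = [x + random.gauss(0.0, 0.1) for x in current_state]
        value = objective(candidate)
        accept = value < current_value or \
            random.random() < math.exp((current_value - value) / temperature)
        if accept:
            current_state, current_value = candidate, value
            if value < best_value:
                best_state, best_value = candidate, value
    return best_state, best_value

def coordination_task(results, keep=3):
    # Purely parallel-friendly coordination: rank the (state, value) pairs
    # returned by the pool and keep the best chains to seed the next stage.
    return sorted(results, key=lambda r: r[1])[:keep]

# One cascade stage (run sequentially here; on a grid each propagate_chain
# call would execute on a separate node).
objective = lambda x: sum(xi ** 2 for xi in x)            # toy objective
pool = [[random.uniform(-2.0, 2.0) for _ in range(4)] for _ in range(8)]
results = [propagate_chain(objective, s) for s in pool]
seeds = coordination_task(results)
print("best value after one stage:", seeds[0][1])
```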

Using the distributable components of the Cascade Optimization Algorithm, the paper presents evidence of the potential of the algorithm to couple with computer grids. Comparisons are drawn with the standalone algorithm as well as with Tabu Search, an algorithm comparatively easy to implement on grids, and noticeable improvements in computational performance are reported. The algorithm is further tested on problems of industrial complexity, such as the synthesis of chemical reactor networks, offering evidence on biocatalytic reactions involving several components and reaction mechanisms.
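Because the chains interact only through the coordination step, chain propagations between coordination points are embarrassingly parallel and map naturally onto grid nodes. The sketch below is an assumption on our part rather than the implementation used in the paper: it uses Python's multiprocessing module as a local stand-in for grid middleware to distribute independent chains across CPUs, with a toy objective in place of a reactor-network synthesis model.

```python
import math
import random
from multiprocessing import Pool as ProcessPool

def objective(x):
    # Toy objective standing in for a reactor-network synthesis model.
    return sum(xi ** 2 for xi in x)

def run_chain(seed):
    # Each worker propagates one Markov chain independently; chains only
    # exchange information through the coordination step on the master.
    rng = random.Random(seed)
    state = [rng.uniform(-2.0, 2.0) for _ in range(4)]
    value = objective(state)
    for _ in range(1000):
        candidate = [x + rng.gauss(0.0, 0.1) for x in state]
        new_value = objective(candidate)
        if new_value < value or rng.random() < math.exp(value - new_value):
            state, value = candidate, new_value
    return state, value

if __name__ == "__main__":
    with ProcessPool(processes=4) as workers:        # e.g. 4 CPUs
        results = workers.map(run_chain, range(16))  # 16 independent chains
    best = min(results, key=lambda r: r[1])
    print("best objective across all chains:", best[1])
```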

The paper further scopes improvements in the cascading stages, as well as means to include self-supervised stages. As anticipated, the results show that the execution time to convergence decreases as the number of CPUs increases, with faster CPUs contributing more than slower ones. As for the self-supervised system, search directions have been intentionally biased by placing different weights on different patterns of intrinsic parameters for each solution, according to on-time analytical results for these parameters. Results show that the optimization search converges more quickly when more parameters are applied in the production rules of the model.
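One way to realise the biasing described above, sketched here under assumptions (the name biased_selection, the pattern labels, and the weight values are hypothetical), is a weighted roulette-wheel selection in which the weight attached to a solution's pattern of intrinsic parameters scales its chance of seeding the next stage.

```python
import random

def biased_selection(solutions, pattern_weights):
    # Pick one solution to seed the next stage, biased by weights attached
    # to patterns of intrinsic parameters (all names are illustrative).
    scores = []
    for s in solutions:
        weight = pattern_weights.get(s["pattern"], 1.0)
        # Assumes a non-negative objective; lower value gives a higher score.
        scores.append(weight / (1.0 + s["value"]))
    pick = random.uniform(0.0, sum(scores))
    cumulative = 0.0
    for solution, score in zip(solutions, scores):
        cumulative += score
        if pick <= cumulative:
            return solution
    return solutions[-1]

# Toy example: favour solutions whose intrinsic parameters match the
# (hypothetical) "high_selectivity" pattern.
solutions = [
    {"value": 3.2, "pattern": "high_selectivity"},
    {"value": 2.7, "pattern": "high_conversion"},
    {"value": 4.1, "pattern": "high_selectivity"},
]
weights = {"high_selectivity": 2.0, "high_conversion": 0.5}
print(biased_selection(solutions, weights))
```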

In conclusion, the grid implementation of the stochastic optimization algorithm generally improves performance in terms of execution time. By bringing in intelligent self-supervised and knowledge-based optimization, it is possible to improve optimization performance further, in particular the convergence speed.

Kokossis, A., P. Linke and S. Yang, The Cascade Optimization Algorithm: A New Distributed Approach for the Stochastic Optimization of Engineering Applications, Ind. Eng. Chem. Res., 2011, 50 (9), pp. 5266–5278.