# A Novel Physics-Based Optimization Algorithm: Equilibrium Optimizer (EO)

1. Introduction

Equilibrium is a state in which opposing forces are balanced. Optimization and equilibrium models are used routinely in environmental engineering and science [1], and they allow many kinds of equilibrium problems, such as multi-objective optimization problems, to be solved. In the Equilibrium Optimizer, each particle (solution), with its concentration (position), acts as a search agent [2]. The search agents randomly update their concentrations with respect to the best-so-far solutions, namely the equilibrium candidates, to finally reach the equilibrium state, which is the optimal result. Research in this area includes stability and approximation properties, as well as the design and evaluation of computationally efficient algorithms [3]. Hence, this work develops a novel algorithm named the Equilibrium Optimizer (EO), inspired by the physics-based dynamic source and sink models used to estimate equilibrium states; EO is built on a generic mass balance equation for a control volume.

2. The Equilibrium Rule

When all the forces that act upon an object are balanced, the object is said to be in a state of equilibrium. "Balanced" is the key word used to describe equilibrium situations: the net force is zero, so the acceleration is 0 m/s² [4]. Objects at equilibrium must therefore have zero acceleration. Put simply, the equilibrium rule states that an object is in equilibrium when the forces acting on it are balanced.

3. Inspiration of EO

The inspiration for the EO approach is a simple well-mixed dynamic mass balance on a control volume, in which a mass balance equation is used to estimate both dynamic and equilibrium states and to describe the concentration of a nonreactive constituent in a control volume as a function of its various source and sink mechanisms [5]. The mass balance equation provides the underlying physics for the conservation of mass entering, leaving, and generated in a control volume. The pattern of the EO algorithm includes high exploratory and exploitative search mechanisms that randomly change solutions. Particles with their concentrations are considered search agents, analogous to particles and positions in Particle Swarm Optimization. Balancing exploration and exploitation provides adaptive values for the controlling parameter and reduces the magnitude of movement of the particles [6].

The term equilibrium is defined as the state of a system in which the forward and reverse reactions occur at equal rates, i.e., the condition in which all acting influences cancel each other, so that a static or balanced situation results. In physics, equilibrium results from the cancellation of forces acting on an object [7]. In chemistry, it occurs when chemical reactions proceed in such a way that the amount of each substance in the system remains the same. In EO, concentrations are randomly updated with respect to fit solutions called equilibrium candidates [8]. This random updating, along with a properly defined generation rate term, improves EO's exploratory behavior in the initial iterations and its exploitative search in the final iterations, aiding local minima avoidance throughout the whole optimization process [9].

4. Types of Equilibrium

There are three types of equilibrium: stable, unstable, and neutral [10].

• Stable Equilibrium: A state of equilibrium of a body such that when the body is slightly displaced it tends to return to its original position.

Example: The pendulum hanging directly downward from its point of support.

• Unstable Equilibrium: A state of equilibrium of a body such that when the body is slightly displaced it departs further from the original position.

Example: The pendulum standing directly upward from its point of support.

• Neutral Equilibrium: A state of equilibrium such that when the body is slightly displaced it simply remains in its new position.

Example: A marble on a flat horizontal surface.
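These three cases can be distinguished mechanically by the curvature of the potential energy at the equilibrium point: a minimum is stable, a maximum is unstable, and a flat region is neutral. A minimal Python sketch (the function names here are illustrative only):

```python
import math

# Classify an equilibrium point by the curvature of the potential energy U(x):
# U'' > 0 -> stable (restoring force), U'' < 0 -> unstable, U'' ~ 0 -> neutral.
def classify_equilibrium(U, x0, h=1e-4, tol=1e-6):
    # Central finite-difference estimate of the second derivative at x0.
    d2U = (U(x0 + h) - 2.0 * U(x0) + U(x0 - h)) / h**2
    if d2U > tol:
        return "stable"
    if d2U < -tol:
        return "unstable"
    return "neutral"

# Pendulum potential U(theta) = -cos(theta): minimum at 0, maximum at pi.
hanging = classify_equilibrium(lambda th: -math.cos(th), 0.0)       # stable
inverted = classify_equilibrium(lambda th: -math.cos(th), math.pi)  # unstable
marble = classify_equilibrium(lambda x: 0.0, 0.0)                   # neutral
```

The three calls reproduce exactly the three examples above: the hanging pendulum, the inverted pendulum, and the marble on a flat surface.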

5. Generic Mass Balance Equation for a Control Volume

A mass balance is only meaningful in terms of a specific region of space whose boundaries the mass flows cross; this region is called the control volume. The generic mass-balance equation is a first-order ordinary differential equation in which the change in mass over time equals the amount of mass that enters the system, plus the amount generated inside, minus the amount that leaves the system. It is described as [11]:

E = V C_equ − V C + M                                                            (1)

where C is the concentration inside the control volume, E is the rate of change of mass in the control volume, V is the volumetric flow rate into and out of the control volume, C_equ is the concentration at an equilibrium state in which there is no generation inside the control volume, and M is the rate of mass generation inside the control volume.
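As an illustration, Eq. (1) can be integrated numerically to watch the concentration relax to its steady state. This is only a sketch under the simplifying assumption of a unit control volume, so that the rate of change of mass equals the rate of change of concentration:

```python
# Minimal sketch (assumed unit control volume) of the mass balance in Eq. (1):
# the rate of change of mass, E = V*C_equ - V*C + M, drives the concentration
# C toward its steady state, here integrated with explicit Euler steps.
def simulate(C0, C_equ, V, M, dt=0.01, steps=5000):
    C = C0
    for _ in range(steps):
        E = V * C_equ - V * C + M  # Eq. (1): inflow - outflow + generation
        C += E * dt                # unit volume: dC/dt = E
    return C

C_final = simulate(C0=0.0, C_equ=2.0, V=1.0, M=0.5)
```

Setting E = 0 gives the steady state C = C_equ + M/V (here 2.5), which the simulation approaches as time passes.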

6. Numerical Implementation of EO

6.1. Initialization

EO uses an initial population to start the optimization process. The initial concentrations are constructed from the number of particles and dimensions, with uniform random initialization in the search space, as follows [12]:

C_n^initial = C_min + rand_n (C_max − C_min),   n = 1, 2, …, N                    (2)

where C_n^initial is the initial concentration vector of the nth particle, C_min and C_max denote the minimum and maximum values for the dimensions, rand_n is a random vector in the interval [0, 1], and N is the number of particles in the population. The particles are evaluated on the fitness function and then sorted to determine the equilibrium candidates; concentrations are subsequently updated at random with respect to these fit solutions.
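A minimal sketch of this initialization step in Python (the function and variable names are illustrative, not from the original paper):

```python
import numpy as np

# Minimal sketch of EO's initialization: N particles, each a concentration
# vector drawn uniformly between the per-dimension bounds C_min and C_max.
rng = np.random.default_rng(seed=0)

def initialize(N, C_min, C_max):
    C_min = np.asarray(C_min, dtype=float)
    C_max = np.asarray(C_max, dtype=float)
    # C_n = C_min + rand_n * (C_max - C_min), one row per particle
    return C_min + rng.random((N, C_min.size)) * (C_max - C_min)

pop = initialize(N=30, C_min=[-5, -5, -5], C_max=[5, 5, 5])
```

Every row of `pop` is one particle's initial concentration vector, guaranteed to lie inside the search-space bounds.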

6.2. The Equilibrium Pool

The equilibrium concentration is one of the best-so-far solutions, randomly selected from a pool called the equilibrium pool [13]. In each iteration, each particle updates its concentration using a random selection among candidates chosen with equal probability. These particles are nominated as equilibrium candidates and are used to construct the equilibrium pool:

E_equ,pool = {E_equ1, E_equ2, E_equ3, E_equ4, E_equ(avg)}                                    (3)
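The pool construction of Eq. (3) can be sketched as follows, assuming a minimization problem so that lower fitness is better:

```python
import numpy as np

# Minimal sketch of the equilibrium pool: the four best-so-far particles
# plus their average, selected by fitness (lower is better).
def build_pool(population, fitness):
    order = np.argsort(fitness)       # ascending: best first
    best4 = population[order[:4]]     # the four best candidates
    avg = best4.mean(axis=0)          # their average concentration
    return np.vstack([best4, avg])    # 5 candidate concentrations in total

rng = np.random.default_rng(1)
pop = rng.random((30, 5))
fit = (pop ** 2).sum(axis=1)          # sphere function as a stand-in objective
pool = build_pool(pop, fit)
```

During an update, each particle draws one of these five rows with equal probability as its equilibrium concentration.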

The exponential term depends on a function of iteration, Fn, which decreases with the number of iterations:

Fn = (1 − Iter / Max_iter)^(b2 · Iter / Max_iter)                                    (4)

where Iter and Max_iter represent the current and the maximum number of iterations, respectively, and b2 is a constant that controls the exploitation ability.

In order to slow down the search speed while improving the exploration and exploitation ability of the algorithm, the term F is considered as:

F = b1 · sign(rand − 0.5) · (e^(−Φ·Fn) − 1)                                    (5)

where Φ is assumed to be a random vector in the interval [0, 1], b1 is a constant value that controls the exploration ability, sign(rand − 0.5) affects the direction of exploration and exploitation, and rand is a random vector between 0 and 1.
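A sketch of these two terms in Python. Here b1 = 2 (the exploration constant mentioned above) and b2 = 1 (an assumed constant governing how fast Fn decays) are commonly used EO settings, not values fixed by this article:

```python
import numpy as np

# Sketch of the exponential term. Fn decays from 1 toward 0 as iterations
# pass; F scales the step, and sign(rand - 0.5) randomly flips the search
# direction. b1 = 2 and b2 = 1 are assumed defaults.
rng = np.random.default_rng(2)

def Fn(iteration, max_iter, b2=1.0):
    return (1.0 - iteration / max_iter) ** (b2 * iteration / max_iter)

def F_term(fn, dim, b1=2.0):
    Phi = rng.random(dim)                       # random vector in [0, 1]
    direction = np.sign(rng.random(dim) - 0.5)  # random search direction
    return b1 * direction * (np.exp(-Phi * fn) - 1.0)

F_early = F_term(Fn(1, 100), dim=4)   # early: exploratory regime
F_late = F_term(Fn(99, 100), dim=4)   # late: tiny, exploitative steps
```

Because Fn shrinks toward zero, the magnitude of F collapses in the final iterations, which is exactly the exploration-to-exploitation transition described above.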

6.3. Generation rate

The generation rate is expressed as a function of time with an exponential decay:

G = G0 · e^(−k·(Fn − Fn0))                                    (6)

where G0 is the initial value and k is a decay constant. Assuming k = Φ and reusing the previously derived exponential term, the final set of generation rate equations is as follows:

G = G0 · e^(−Φ·(Fn − Fn0)) = G0 · F                                    (7)

G0 = GRCP · (C_equ − Φ · C)                                    (8)

GRCP = 0.5 r1  if r2 ≥ GP,  otherwise 0                                    (9)

where r1 and r2 are random numbers in [0, 1]. The generation rate mostly plays the role of an exploiter, or solution refiner, particularly with small steps, although it sometimes contributes as an explorer as well.

GRCP is defined as the Generation Rate Control Parameter, which governs whether the generation term contributes to the updating process. The probability of this contribution, which specifies how many particles use the generation term to update their states, is determined by another term called the Generation Probability (GP) [14].

The mechanism of this contribution operates at the level of each particle. A good balance between exploration and exploitation is achieved with GP = 0.5. Finally, the updating rule of EO is as follows:

C = C_equ + (C − C_equ) · F + (G / (Φ · V)) · (1 − F)                                    (10)

where V is considered as unit.
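One particle's concentration update can then be sketched end-to-end. GP = 0.5 and unit V follow the text; b1 = 2 and the remaining random draws are assumed defaults:

```python
import numpy as np

# One particle's EO update for illustration: move toward a randomly chosen
# equilibrium candidate, scaled by F, plus a generation-rate contribution.
# GP = 0.5 and unit V follow the text; b1 = 2 is an assumed default.
rng = np.random.default_rng(3)

def update(C, C_equ, fn, b1=2.0, GP=0.5, V=1.0):
    d = C.size
    Phi = rng.random(d)                              # random vector in [0, 1]
    F = b1 * np.sign(rng.random(d) - 0.5) * (np.exp(-Phi * fn) - 1.0)
    r1, r2 = rng.random(), rng.random()
    GRCP = 0.5 * r1 if r2 >= GP else 0.0             # generation rate control
    G = GRCP * (C_equ - Phi * C) * F                 # generation rate
    return C_equ + (C - C_equ) * F + (G / (Phi * V)) * (1.0 - F)

C_new = update(C=np.array([1.0, -2.0, 0.5]), C_equ=np.zeros(3), fn=0.7)
```

The first term anchors the particle to the candidate, the second performs the direct (exploratory) search, and the third contributes the small exploitative refinement.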

6.4. Computational Complexity Evaluation

The computational complexity of an optimization algorithm is expressed as a function relating the running time of the algorithm to the size of the problem. The complexity depends on the number of particles (n), the number of dimensions (d), and the number of iterations (t), where c is the cost of one function evaluation [15].

O(EO) = O(problem definition) + O(initialization) + O(t × function evaluations) + O(t × memory saving) + O(t × concentration update)                    (11)

where the O notation denotes the usual asymptotic upper bound.

Therefore, the overall computational complexity is defined as:

O(EO) = O(1 + nd + tcn + tn + tnd)                                                   (12)

≈ O(tnd + tcn)                                                                (13)

The concentration difference between a particle and the equilibrium state acts as a direct search mechanism: this term encourages particles to search the domain globally, acting as explorers. Since the overall complexity is linear in the number of iterations, particles, and dimensions, EO can be considered an efficient algorithm.

7. Flowchart of EO

8. Pseudo code of EO
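A compact, runnable Python sketch of the whole EO loop on a sphere test function, assembling the initialization, the equilibrium pool, the exponential term, and the generation rate described above; b1 = 2, b2 = 1, GP = 0.5, and unit volume V are assumed defaults, and the helper names are illustrative:

```python
import numpy as np

# Hedged sketch of the full EO loop, minimizing a user-supplied objective.
rng = np.random.default_rng(4)

def eo(objective, lb, ub, n=30, d=5, max_iter=200, b1=2.0, b2=1.0, GP=0.5):
    lb, ub = np.full(d, float(lb)), np.full(d, float(ub))
    C = lb + rng.random((n, d)) * (ub - lb)            # uniform initialization
    best_fit = np.inf
    for it in range(max_iter):
        fit = np.array([objective(c) for c in C])
        order = np.argsort(fit)                        # best first
        best_fit = min(best_fit, float(fit[order[0]]))
        best4 = C[order[:4]].copy()
        pool = np.vstack([best4, best4.mean(axis=0)])  # equilibrium pool
        fn = (1.0 - it / max_iter) ** (b2 * it / max_iter)
        for i in range(n):
            C_equ = pool[rng.integers(5)]              # random candidate
            Phi = rng.random(d)
            F = b1 * np.sign(rng.random(d) - 0.5) * (np.exp(-Phi * fn) - 1.0)
            GRCP = 0.5 * rng.random() if rng.random() >= GP else 0.0
            G = GRCP * (C_equ - Phi * C[i]) * F        # generation rate
            C[i] = C_equ + (C[i] - C_equ) * F + (G / Phi) * (1.0 - F)
            C[i] = np.clip(C[i], lb, ub)               # stay inside bounds
    return best_fit

best = eo(lambda x: float((x ** 2).sum()), lb=-10, ub=10)
```

Running this drives the best sphere-function value far below the fitness of a random initial population, illustrating the exploration-then-exploitation pattern of EO.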

9. Applications of EO

The EO technique has been applied to a variety of optimization problems, including:

• Pressure vessel design [15].
• Welded beam design [16].
• Convex optimization [17].
• Non-linear programming [18].
• Semi-algebraic optimization [19].
• Stochastic optimization [20].

References

[1] Faramarzi A, Afshar M (2013) A novel hybrid cellular automata–linear programming approach for the optimal sizing of planar truss structures. Civil Engineering and Environmental Systems 31:209-228. doi: 10.1080/10286608.2013.820280

[2] Mirjalili S, Mirjalili S, Lewis A (2014) Grey Wolf Optimizer. Advances in Engineering Software 69:46-61. doi: 10.1016/j.advengsoft.2013.12.007

[3] Marsalkó T, Majoros I, Kennedy J (1995) Multi-arm star polyisobutylenes: 2. the effect of synthesis conditions on the structure of star PIBs. Macromolecular Symposia 95:39-56. doi: 10.1002/masy.19950950106

[4] Kirkpatrick S, Gelatt C, Vecchi M (1983) Optimization by Simulated Annealing. Science 220:671-680. doi: 10.1126/science.220.4598.671

[5] Dorigo M, Birattari M, Stutzle T (2006) Ant Colony Optimization. IEEE Computational Intelligence Magazine 1:28-39. doi: 10.1109/ci-m.2006.248054

[6] Kaveh A, Talatahari S (2010) A novel heuristic optimization method: charged system search. Acta Mechanica 213:267-289. doi: 10.1007/s00707-009-0270-4

[7] Khabbazi A, Gargari E, Lucas C (2009) Imperialist competitive algorithm for minimum bit error rate beamforming. International Journal of Bio-Inspired Computation 1:125. doi: 10.1504/ijbic.2009.022781

[8] Rao R, Savsani V, Vakharia D (2011) Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Computer-Aided Design 43:303-315. doi: 10.1016/j.cad.2010.12.015

[9] Parouha R, Das K (2016) A memory based differential evolution algorithm for unconstrained optimization. Applied Soft Computing 38:501-517. doi: 10.1016/j.asoc.2015.10.022

[10] Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation 3:82-102. doi: 10.1109/4235.771163

[11] Crepinsek M, Mernik M, Liu S (2011) Analysis of exploration and exploitation in evolutionary algorithms by ancestry trees. International Journal of Innovative Computing and Applications 3:11. doi: 10.1504/ijica.2011.037947

[12] Mohamed A, Suganthan P (2017) Real-parameter unconstrained optimization based on enhanced fitness-adaptive differential evolution algorithm with novel mutation. Soft Computing 22:3215-3235. doi: 10.1007/s00500-017-2777-2

[13] Jeong S, Hasegawa S, Shimoyama K, Obayashi S (2009) Development and investigation of efficient GA/PSO-HYBRID algorithm applicable to real-world design optimization. IEEE Computational Intelligence Magazine 4:36-44. doi: 10.1109/mci.2009.933099

[14] Aragón V, Esquivel S, Coello C (2010) A modified version of a T-Cell Algorithm for constrained optimization problems. International Journal for Numerical Methods in Engineering. doi: 10.1002/nme.2904

[15] Ocaña M, Asensi V, Montes Á et al. (2008) Autoregulation mechanism of human neutrophil apoptosis during bacterial infection. Molecular Immunology 45:2087-2096. doi: 10.1016/j.molimm.2007.10.013

[16] Lemonge A, Barbosa H, Bernardino H (2015) Variants of an adaptive penalty scheme for steady-state genetic algorithms in engineering optimization. Engineering Computations 32:2182-2215. doi: 10.1108/ec-07-2014-0158

[17] Coello C, Mezura Montes E (2002) Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics 16:193-203. doi: 10.1016/s1474-0346(02)00011-3

[18] Huang F, Wang L, He Q (2007) An effective co-evolutionary differential evolution for constrained optimization. Applied Mathematics and Computation 186:340-356. doi: 10.1016/j.amc.2006.07.105

[19] Hedar A, Fukushima M (2006) Derivative-Free Filter Simulated Annealing Method for Constrained Continuous Global Optimization. Journal of Global Optimization 35:521-549. doi: 10.1007/s10898-005-3693-z

[20] He S, Prempain E, Wu Q (2004) An improved particle swarm optimizer for mechanical design optimization problems. Engineering Optimization 36:585-605. doi: 10.1080/03052150410001704854
