1. Introduction
Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling. PSO shares many similarities with evolutionary computation techniques such as genetic algorithms (GA). In computational science, PSO [1] is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search space according to simple mathematical formulae over each particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions. The book by Kennedy and Eberhart [2] describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications has been made by Poli, and a comprehensive review of theoretical and experimental work on PSO has recently been published by Bonyadi and Michalewicz.
2. Particle Swarm Optimization (PSO)
Particle Swarm Optimization belongs to the domain of artificial intelligence. The term 'artificial intelligence' (or 'artificial life') refers to the theory of simulating human behavior through computation. It involves designing computer systems that are able to execute tasks that otherwise require human intelligence. For example, earlier only humans had the ability to recognize speech; now speech recognition is a common feature of almost any digital device, made possible by artificial intelligence. Other examples of tasks requiring human intelligence include decision making, language translation, and visual perception. The techniques used to implement such capabilities in computers are popularly known as approaches to artificial intelligence. PSO draws on a different kind of intelligence: the collective behavior of a flock of birds searching for food, a behavior also exhibited by a school of fish. Thus the particle swarm optimization technique is said to be inspired by a swarm of birds or a school of fish. This is the overall concept of particle swarm optimization and the biological phenomena its working is based upon. Each individual in the swarm has a defined position and velocity; the algorithm evaluates each candidate to establish the best outcome in the current swarm, and then the whole swarm moves toward the new best location. The key points are as follows: each particle is positioned independently, and PSO comprises a collection of particles that are moved around in the search space [3].
3. Background

Fig 1: Swarm behavior
Fig 1 shows swarm behavior. Swarming is a collective behavior exhibited by entities, particularly animals of similar size, which aggregate together, perhaps milling about the same spot or perhaps moving en masse or migrating in some direction. It is a highly interdisciplinary topic. As a term, swarming is applied particularly to insects, but it can also be applied to any other entity or animal that exhibits swarm behavior. The term flocking or murmuration can refer specifically to swarm behavior in birds, herding to swarm behavior in tetrapods, and shoaling or schooling to swarm behavior in fish. Phytoplankton also gather in huge swarms called blooms, although these organisms are algae and are not self-propelled the way animals are. By extension, the term "swarm" is also applied to inanimate entities which exhibit parallel behaviors, as in a robot swarm, an earthquake swarm, or a swarm of stars [4]. In Fig 1, Vg = global velocity, Vp = particle velocity, and Vtotal = total velocity.
4. Inspiration of PSO
Particle Swarm Optimization is inspired by the social foraging behavior of animals, such as the flocking of birds and the schooling of fish.
4.1. Conceptual model of finding Best Visited Point

Fig 2: Conceptual model of finding the best visited point
Fig 2 shows the conceptual model of finding the best visited point. Early studies of swarm behavior employed mathematical models to simulate and understand the behavior. The simplest mathematical models of animal swarms generally represent individual animals as following three rules (a minimal code sketch follows the list):
- Move in the same direction as their neighbors
- Remain close to their neighbors
- Avoid collisions with their neighbors
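These three rules (alignment, cohesion, and separation) can be written down directly. The sketch below is a minimal illustration in Python with NumPy; the interaction radius and the rule weights are assumptions chosen for illustration, not values from the text:

```python
import numpy as np

def flock_step(positions, velocities, radius=1.0, dt=0.1):
    """One update of the three classic flocking rules (illustrative weights)."""
    new_v = velocities.copy()
    for i in range(len(positions)):
        # Neighbours within the interaction radius, excluding the agent itself.
        dist = np.linalg.norm(positions - positions[i], axis=1)
        near = (dist > 0) & (dist < radius)
        if not near.any():
            continue
        # Rule 1: move in the same direction as neighbours (alignment).
        align = velocities[near].mean(axis=0) - velocities[i]
        # Rule 2: remain close to neighbours (cohesion).
        cohere = positions[near].mean(axis=0) - positions[i]
        # Rule 3: avoid collisions with neighbours (separation).
        separate = (positions[i] - positions[near]).sum(axis=0)
        new_v[i] += 0.05 * align + 0.01 * cohere + 0.05 * separate
    return positions + dt * new_v, new_v
```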
The Particle Swarm Optimization algorithm mimics the natural processes observed in bird flocks searching for food. Global optimization techniques have a wide range of applications in engineering, science, and mathematics. They comprise a small subset of optimization techniques (or metaheuristics) which can attempt to optimize a given problem without any a priori knowledge. This is particularly important in the field of electromagnetics, where Maxwell's equations describe the operation of devices such as antennas and microwave circuits. In general, Maxwell's equations are coupled vector partial differential equations, leading to very complicated function landscapes. From an optimization perspective, the resulting fitness functions describing antenna and microwave circuit performance are typically multimodal, non-differentiable, high-dimensional, non-convex, nonlinear, discontinuous, and ill-conditioned. Optimization problems in electromagnetics therefore fall within the hardest class of optimization problems to solve. To make matters worse, engineers usually have very little knowledge about the best place to start searching for good designs. Among the various algorithms in the literature, nature-inspired optimization techniques, which attempt to mimic naturally occurring processes observed by scientists, have proven especially practical here. One such example is the Particle Swarm Optimization algorithm, which mimics a flock of birds searching for food in a given landscape, as illustrated in Fig 2. Other techniques in the same family include Genetic Algorithms and Covariance Matrix Adaptation Evolution Strategies. The primary interest in these algorithms is their application in achieving excellent designs for a given problem, and also their usage to give devices intelligence [6].
4.1.1. Steps for PSO
- Population Initialization
- Fitness function
- Velocity updating
- Position updating
- Termination criteria
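The subsections below detail each of these steps. As a compact preview of how they fit together, here is a minimal PSO sketch for a box-constrained minimization problem in Python with NumPy; the parameter values (w = 0.7, c1 = c2 = 1.5, swarm size, iteration budget) and all names are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

def pso(fitness, lower, upper, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch: minimize fitness over the box [lower, upper]."""
    rng = np.random.default_rng(seed)
    d = len(lower)
    # Step 1: population initialization within the allowed range.
    x = rng.uniform(lower, upper, size=(n_particles, d))
    v = np.zeros((n_particles, d))
    # Step 2: fitness evaluation; set initial personal and global bests.
    f = np.array([fitness(p) for p in x])
    pbest, pbest_f = x.copy(), f.copy()
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):                      # Step 5: termination criterion
        r1, r2 = rng.random((2, n_particles, d))
        # Step 3: velocity updating (inertia + cognitive + social terms).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Step 4: position updating, clamped to the feasible box.
        x = np.clip(x + v, lower, upper)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: the 1-D function used in the worked example of Section 4.1.3.
best_x, best_f = pso(lambda p: p[0]**2 - 5*p[0] + 10,
                     lower=np.array([-10.0]), upper=np.array([10.0]))
```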
4.1.1.1. Population Initialization

Fig 3: Initialization for PSO
Fig 3 shows the initialization step of PSO. Initialize a population of particles with random positions and velocities in the d-dimensional problem space. Confine the search space by specifying the lower and upper limits of each decision variable. The population is initialized with positions and velocities set to fall within the pre-specified (allowed) range and to satisfy the equality and inequality constraints [7]. X(t) denotes the optimized parameters, and Xid(t) is the position of the ith particle with respect to the dth dimension, i.e., the value of the dth optimized parameter in the ith candidate solution.
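As a small illustration of this step (a sketch; the swarm size, dimension, and bounds are assumed values):

```python
import numpy as np

rng = np.random.default_rng()
n_particles, d = 9, 1                 # swarm size and problem dimension (assumed)
lower = np.full(d, -10.0)             # lower limit of each decision variable
upper = np.full(d, 10.0)              # upper limit of each decision variable
# X[i, j] plays the role of Xid(t): the dth optimized parameter of particle i.
X = rng.uniform(lower, upper, size=(n_particles, d))
V = np.zeros((n_particles, d))        # initial velocities, set to zero here
```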
4.1.1.2. Fitness function

Fig 4: Fitness function for PSO
Fig 4 shows the fitness function for PSO. A fitness function is a particular type of objective function that is used to summarize, as a single figure of merit, how close a given design solution is to achieving the set aims. In particular, in the fields of genetic programming and genetic algorithms, each design solution is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing, or simulation, the idea is to delete the n worst design solutions and to breed n new ones from the best design solutions. Each design solution therefore needs to be awarded a figure of merit, indicating how close it came to meeting the overall specification; this is generated by applying the fitness function to the test, or simulation, results obtained from that solution [8]. In Fig 4, Rnz = number of particles and R = particles.
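For a concrete illustration, the fitness function used in the worked example of Section 4.1.3 can be coded as follows (a sketch; lower values are better since that example minimizes):

```python
def fitness(x):
    # Single figure of merit: how close candidate x is to the aim (lower = better).
    return x**2 - 5*x + 10

# Scores of the nine initial particles used in the worked example:
scores = [fitness(x) for x in (-9, -6, -4, -1, 0.6, 3, 3.8, 7, 10)]
# -> [136, 76, 46, 16, ~7.36, 4, 5.44, 24, 60] (up to float rounding)
```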
4.1.1.3. Velocity updating
In PSO, the velocity of each particle reflects the distance traveled by that particle at each iteration. It depends on the previous best position found by the particle itself (particle memory) and on the previous best solution found by the whole population (swarm memory) [9].
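In the canonical update these two memories appear as two attraction terms. The sketch below assumes an inertia weight w and acceleration coefficients c1 and c2 (the specific values are illustrative, not prescribed here); r1 and r2 are fresh uniform random numbers drawn for each particle at each iteration:

```python
import random

w, c1, c2 = 0.7, 1.5, 1.5   # inertia weight and acceleration coefficients (assumed)

def update_velocity(v, x, pbest, gbest):
    # Fresh uniform random numbers in (0, 1) for every call.
    r1, r2 = random.random(), random.random()
    return (w * v                     # keep part of the previous velocity
            + c1 * r1 * (pbest - x)   # pull toward the particle memory
            + c2 * r2 * (gbest - x))  # pull toward the swarm memory
```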
4.1.1.4. Position updating

Fig 5: Velocity updating and position updating for PSO
Fig 5 shows the velocity updating and position updating for PSO, where:
Xik → position of the ith particle at iteration k
Xik+1 → position of the ith particle at iteration k+1
Xik-1 → position of the ith particle at iteration k-1
Vik+1 → velocity of the ith particle at iteration k+1
Gik → global position of the ith particle at iteration k
Pbestik → best position found by the ith particle up to iteration k
Gbestik → global best position found by the swarm up to iteration k
Check all the imposed constraints to ensure the feasibility of all the potential solutions. If any element of an individual violates its inequality constraints, the position of the individual is fixed to its maximum/minimum operating point. Then update the archive which stores non-dominated solutions.
Particle positions are real vectors representing, at each time step, possible solutions to the problem being optimized. Particle velocities are real vectors representing how these candidate solutions change in a step as the algorithm searches for better optima. In canonical PSO, velocities are updated based on the differences between the particle's current position and its best historical position, and between the current position and the best historical position found by the whole swarm. Think of it as the particle moving while being attracted by two centers of gravity: its best historical position and the global best [10].
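A minimal sketch of the position update, including the bound repair described above (one dimension shown; the function name is illustrative):

```python
def update_position(x, v, lower, upper):
    x_new = x + v                     # move by the newly updated velocity
    # Repair step: a component violating its limits is fixed at that limit.
    return min(max(x_new, lower), upper)
```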
4.1.1.5. Termination
Stopping criteria are needed to terminate the execution of optimization algorithms. In contrast to using a maximum number of function evaluations as the stopping condition, other criteria have the advantage of reacting adaptively to the state of the optimization run, so function evaluations can be saved. Unfortunately, it seems impossible to define a stopping criterion without introducing one or more parameters, and the parameter settings generally depend on the given optimization problem. However, it should be investigated whether there are stopping criteria whose parameter settings are robust to changes, or whether the parameters can be set depending on certain aspects of the problem [11].
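As one sketch of such an adaptive criterion (the window and tolerance are exactly the kind of problem-dependent parameters discussed above; all names and defaults are illustrative assumptions):

```python
def should_stop(best_history, max_iters=200, window=20, tol=1e-8):
    """best_history holds the best-so-far fitness recorded at each iteration."""
    if len(best_history) >= max_iters:        # fixed iteration budget
        return True
    if len(best_history) > window:            # adaptive: stop on stagnation
        return best_history[-window - 1] - best_history[-1] < tol
    return False
```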
4.1.2. Flow Chart

Fig 6: Flowchart
4.1.3. Numerical example for PSO Algorithm
Find the minimum of the function
f(x) = x² - 5x + 10 with -10 ≤ x ≤ 10 using the PSO algorithm. Use 9 particles with the initial positions x1 = -9, x2 = -6, x3 = -4, x4 = -1, x5 = 0.6, x6 = 3, x7 = 3.8, x8 = 7, x9 = 10.
Step 1: Choose the number of particles: x1 = -9, x2 = -6, x3 = -4, x4 = -1, x5 = 0.6, x6 = 3, x7 = 3.8, x8 = 7, x9 = 10.
The initial population (iteration number t = 0) can be represented as xi(0), i = 1, 2, ..., 9:
x1(0) = -9, x2(0) = -6, x3(0) = -4, x4(0) = -1, x5(0) = 0.6, x6(0) = 3, x7(0) = 3.8, x8(0) = 7, x9(0) = 10
Evaluate the objective function values:
f1(0) = (-9)² - 5(-9) + 10 = 81 + 45 + 10 = 136
f2(0) = (-6)² - 5(-6) + 10 = 36 + 30 + 10 = 76
f3(0) = (-4)² - 5(-4) + 10 = 16 + 20 + 10 = 46
f4(0) = (-1)² - 5(-1) + 10 = 1 + 5 + 10 = 16
f5(0) = (0.6)² - 5(0.6) + 10 = 0.36 - 3 + 10 = 7.36
f6(0) = (3)² - 5(3) + 10 = 9 - 15 + 10 = 4
f7(0) = (3.8)² - 5(3.8) + 10 = 14.44 - 19 + 10 = 5.44
f8(0) = (7)² - 5(7) + 10 = 49 - 35 + 10 = 24
f9(0) = (10)² - 5(10) + 10 = 100 - 50 + 10 = 60
Let c1=c2=1
Set the initial velocities of each particle to zero
Step 2: Set the iteration number to t = 1 and go to Step 3.
Step 3: Find the personal best of each particle. Since this is the first iteration, each particle's personal best is its own initial position:
Pbest,1 = -9, Pbest,2 = -6, Pbest,3 = -4, Pbest,4 = -1, Pbest,5 = 0.6, Pbest,6 = 3, Pbest,7 = 3.8, Pbest,8 = 7, Pbest,9 = 10
Step 4: Find the global best:
Gbest = the Pbest,i with the smallest objective value, i = 1, ..., 9
Since the smallest objective value is f6(0) = 4, attained at x = 3, we have Gbest = Pbest,6 = 3.
Step 5: Taking the random numbers in the range (0, 1) as r1 = 0.213 and r2 = 0.876, find the velocities of the particles by vi(1) = vi(0) + c1 r1 (Pbest,i - xi(0)) + c2 r2 (Gbest - xi(0)):
v1(1) = 0 + 0.213(-9 - (-9)) + 0.876(3 - (-9)) = 10.512
v2(1) = 0 + 0.213(-6 - (-6)) + 0.876(3 - (-6)) = 7.884
v3(1) = 0 + 0.213(-4 - (-4)) + 0.876(3 - (-4)) = 6.132
v4(1) = 0 + 0.213(-1 - (-1)) + 0.876(3 - (-1)) = 3.504
v5(1) = 0 + 0.213(0.6 - 0.6) + 0.876(3 - 0.6) = 2.1024
v6(1) = 0 + 0.213(3 - 3) + 0.876(3 - 3) = 0
v7(1) = 0 + 0.213(3.8 - 3.8) + 0.876(3 - 3.8) = -0.7008
v8(1) = 0 + 0.213(7 - 7) + 0.876(3 - 7) = -3.504
v9(1) = 0 + 0.213(10 - 10) + 0.876(3 - 10) = -6.132
Step 6: Find the new positions xi(1), i = 1, ..., 9, by xi(t+1) = xi(t) + vi(t+1):
x1(1) = -9 + 10.512 = 1.512
x2(1) = -6 + 7.884 = 1.884
x3(1) = -4 + 6.132 = 2.132
x4(1) = -1 + 3.504 = 2.504
x5(1) = 0.6 + 2.1024 = 2.7024
x6(1) = 3 + 0 = 3
x7(1) = 3.8 - 0.7008 = 3.0992
x8(1) = 7 - 3.504 = 3.496
x9(1) = 10 - 6.132 = 3.868
Step 7: Check the stopping criterion (for example, a maximum number of iterations or a sufficiently small change in Gbest). If it is not met, increment t and return to Step 3. Note that after a single iteration the whole swarm has already moved into the interval [1.5, 3.9], close to the true minimizer x = 2.5.
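The first iteration above can be checked with a few lines of Python (a sketch that simply reproduces the hand calculation):

```python
x0 = [-9, -6, -4, -1, 0.6, 3, 3.8, 7, 10]
f = lambda x: x**2 - 5*x + 10
c1 = c2 = 1
r1, r2 = 0.213, 0.876
gbest = min(x0, key=f)        # 3, since f(3) = 4 is the smallest value
# At t = 0 each pbest equals the particle's own position and v = 0, so the
# cognitive term c1*r1*(pbest - x) vanishes and only the social term remains:
v1 = [c1*r1*(xi - xi) + c2*r2*(gbest - xi) for xi in x0]
x1 = [xi + vi for xi, vi in zip(x0, v1)]
print([round(v, 4) for v in v1])
# [10.512, 7.884, 6.132, 3.504, 2.1024, 0.0, -0.7008, -3.504, -6.132]
print([round(x, 4) for x in x1])
# [1.512, 1.884, 2.132, 2.504, 2.7024, 3.0, 3.0992, 3.496, 3.868]
```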
5. Applications of PSO
- Detection and diagnosis of faults and recovery from them [12]
- Design and optimization of engines and electrical motors [13]
- Applications in metallurgy [14]
- Security and military applications [15]
- Vehicle routing problems [16]
- Signature verification [17]
- Fuzzy neural networks [18]

Fig 7: Applications of PSO Algorithm
6. Advantages of PSO
1) The PSO algorithm is a derivative-free algorithm.
2) It is easy to implement, so it can be applied to both scientific research and engineering problems.
3) It has a limited number of parameters, and the impact of the parameters on the solution is small compared to other optimization techniques.
4) The calculations in the PSO algorithm are very simple.
5) There are techniques which ensure convergence, and the optimum value of the problem can be calculated easily within a short time [19].
6) PSO is less dependent on the set of initial points than other optimization techniques.
7) It is conceptually very simple [20].
References
[1] A. Gandomi and A. Alavi, "Krill herd: A new bio-inspired optimization algorithm", Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 12, pp. 4831-4845, 2012.
[2] A. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization", Ecological Informatics, vol. 1, no. 4, pp. 355-366, 2006.
[3] N. Li, A. Ouyang and K. Li, "Improved particle swarm optimization for constrained optimization functions", Journal of Computer Applications, vol. 32, no. 12, pp. 3319-3321, 2013.
[4] S. Bandichode, "Modality of Particle Swarm Optimization to Improve Decision Making Process of Analytical Hierarchy Processing", HELIX, vol. 8, no. 5, pp. 3888-3891, 2018.
[5] H. Gu and L. Xu, "Perceptive particle swarm optimization algorithm for constrained optimization problems", Journal of Computer Applications, vol. 31, no. 1, pp. 85-88, 2011.
[6] H. Gu and L. Xu, "Perceptive particle swarm optimization algorithm for constrained optimization problems", Journal of Computer Applications, vol. 31, no. 1, pp. 85-88, 2011.
[7] L. Zhang, Y. Chen, R. Sun and B. Yang, "A Task Scheduling Algorithm Based on PSO for Grid Computing", International Journal of Computational Intelligence Research, vol. 4, no. 1, 2008.
[8] M. Maitra and A. Chatterjee, "A hybrid cooperative-comprehensive learning based PSO algorithm for image segmentation using multilevel thresholding", Expert Systems with Applications, vol. 34, no. 2, pp. 1341-1350, 2008.
[9] Y. Cao, Z. Zhang and X. Huang, "Improved binary particle swarm optimization algorithm with experience factor", Journal of Computer Applications, vol. 33, no. 2, pp. 311-315, 2013.
[10] J. Wu, C. Zhang and N. Cui, "PSO algorithm-based parameter optimization for HEV power train and its control strategy", International Journal of Automotive Technology, vol. 9, no. 1, pp. 53-59, 2008.
[11] Z. Bashir and M. El-Hawary, "Applying Wavelets to Short-Term Load Forecasting Using PSO-Based Neural Networks", IEEE Transactions on Power Systems, vol. 24, no. 1, pp. 20-27, 2009.
[12] S. Panda, B. Mohanty and P. Hota, "Hybrid BFOA-PSO algorithm for automatic generation control of linear and nonlinear interconnected power systems", Applied Soft Computing, vol. 13, no. 12, pp. 4718-4730, 2013.
[13] M. Sedghi, M. Aliakbar-Golkar and M. Haghifam, "Distribution network expansion considering distributed generation and storage units using modified PSO algorithm", International Journal of Electrical Power & Energy Systems, vol. 52, pp. 221-230, 2013.
[14] G. Zhang, "Structural Collaborative Optimization Based on Hybrid GA-PSO Algorithm with Correlated Variables", Journal of Mechanical Engineering, vol. 48, no. 15, p. 113, 2012.
[15] L. Zhang and Q. Yan, "Research of immune particle swarm optimization algorithm based on Gaussian distribution and simulated annealing algorithm", Journal of Computer Applications, vol. 28, no. 9, pp. 2392-2394, 2009.
[16] E. Masehian and D. Sedighizadeh, "Multi-Objective PSO- and NPSO-based Algorithms for Robot Path Planning", Advances in Electrical and Computer Engineering, vol. 10, no. 4, pp. 69-76, 2010.
[17] M. Shourian, S. Mousavi and A. Tahershamsi, "Basin-wide Water Resources Planning by Integrating PSO Algorithm and MODSIM", Water Resources Management, vol. 22, no. 10, pp. 1347-1366, 2007.
[18] N. Hamta, S. Fatemi Ghomi, F. Jolai and M. Akbarpour Shirazi, "A hybrid PSO algorithm for a multi-objective assembly line balancing problem with flexible operation times, sequence-dependent setup times and learning effect", International Journal of Production Economics, vol. 141, no. 1, pp. 99-111, 2013.
[19] W. Deng, H. Zhao, X. Yang, J. Xiong, M. Sun and B. Li, "Study on an improved adaptive PSO algorithm for solving multi-objective gate assignment", Applied Soft Computing, vol. 59, pp. 288-302, 2017.
[20] R. Hooshmand and S. Soltani, "Fuzzy Optimal Phase Balancing of Radial and Meshed Distribution Networks Using BF-PSO Algorithm", IEEE Transactions on Power Systems, vol. 27, no. 1, pp. 47-57, 2012.