Comparative assessment of differently randomized accelerated particle swarm optimization and squirrel search algorithms for selective harmonics elimination problem (2024)

In this section, the results of different sub-scenarios based on variations in generation size, maximum iterations, and problem dimension are presented and discussed through statistical analysis. The generation size of the search particles and the number of iterations affect the exploration and exploitation of the search space. Moreover, the generation size matters when dealing with different problem dimensions, while the maximum number of iterations determines how many times the randomization is re-applied as the algorithms progress. The results and discussion are as follows:

9-level inverter scenario

First, a 9-level inverter scenario is discussed. In this case, the global best search particle contains four firing angles. Both algorithms are run 51 times to find the best firing angles and the lowest cost value under varying sub-scenarios of generation size and maximum number of iterations. Sub-scenarios one to three use a generation size of 100 with maximum iterations of 500, 1000, and 2000, respectively; sub-scenarios four to six use a generation size of 250, and sub-scenarios seven to nine a generation size of 500, with the same three iteration limits. The tenth sub-scenario uses a generation size of 2000 with 5000 maximum iterations. In each sub-scenario, both algorithms are run 51 times for each of the five randomization techniques.
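The grid of sub-scenarios described above can be enumerated programmatically; a minimal sketch (the variable names are illustrative, not from the paper):

```python
from itertools import product

# Sub-scenarios 1-9: every combination of the three generation sizes
# with the three maximum-iteration limits described in the text.
sub_scenarios = list(product([100, 250, 500], [500, 1000, 2000]))
# Sub-scenario 10: generation size 2000 with 5000 maximum iterations.
sub_scenarios.append((2000, 5000))

for idx, (pop, iters) in enumerate(sub_scenarios, start=1):
    print(f"sub-scenario {idx}: population={pop}, max_iterations={iters}")
```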

11-level inverter scenario

Secondly, optimization is performed for the 11-level inverter problem. In this case, the global best search particle contains five firing angles. Both algorithms are run 51 times to find the best firing angles and the lowest cost value under the ten sub-scenarios described above. In each sub-scenario, both algorithms are tested with the five randomization techniques.

13-level inverter scenario

Lastly, a 13-level inverter case is considered. Here, the global best search particle contains six firing angles. The APSO and SSA algorithms, each with the five randomizations, are run 51 times to find the optimized firing instants and the best objective value under the ten sub-scenarios described above.

Statistical analysis

SPSS is used for the statistical analysis [21]. Since the sample size is large enough to invoke the central limit theorem [20] and more than two groups are to be compared, a one-way ANOVA test is used to compare the means of the outcomes produced by the five randomizations in each sub-scenario of every case. For smaller sample sizes, the Kruskal–Wallis test [21] can be used instead. The results are presented in both tabular and pictorial form, along with the impact of generation size, maximum iterations, and problem dimension. In this discussion, exponential randomization is denoted by ER, normal by NR, Rayleigh by RR, uniform by UR, and Weibull by WR.
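The five randomization schemes correspond to five probability distributions for the random step size. A minimal sketch of drawing from each, using only the Python standard library (the scale and shape parameters below are illustrative assumptions, not the paper's values; the Rayleigh draw is derived from a uniform draw via the inverse CDF):

```python
import math
import random

def draw(kind, rng):
    """Draw one non-negative random step under the named randomization
    scheme. Parameters are illustrative, not taken from the paper."""
    if kind == "ER":   # exponential, rate 1
        return rng.expovariate(1.0)
    if kind == "NR":   # normal; absolute value keeps the step non-negative
        return abs(rng.normalvariate(0.0, 1.0))
    if kind == "RR":   # Rayleigh (sigma = 1) via inverse CDF of a uniform draw
        return math.sqrt(-2.0 * math.log(1.0 - rng.random()))
    if kind == "UR":   # uniform on [0, 1)
        return rng.random()
    if kind == "WR":   # Weibull, scale 1, shape 1.5 (illustrative)
        return rng.weibullvariate(1.0, 1.5)
    raise ValueError(f"unknown randomization: {kind}")

rng = random.Random(42)
for kind in ("ER", "NR", "RR", "UR", "WR"):
    samples = [draw(kind, rng) for _ in range(51)]
    print(kind, "sample mean:", round(sum(samples) / len(samples), 3))
```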

Table 1 lists the minimum value of the objective function provided by the APSO algorithm for the five-dimensional problem under varying population sizes and maximum iteration counts. The minimum value, provided by RR, is indicated in the table. The same scenario is presented for SSA in Table 2 and Fig. 3. These results show that NR gives the overall least value for the 11-level problem with the SSA algorithm. The significance level in SPSS is set at 0.05, whereas the value obtained for the sub-scenario with a population of 2000 and 5000 maximum iterations is 0.00 with APSO and 0.007 with SSA. Hence, for both algorithms, the datasets obtained with the different randomizations have different means. To locate the significant differences, Tukey's post-hoc test is applied; the results for the APSO and SSA algorithms are depicted in Figs. 4 and 5, respectively.
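The ANOVA decision reduces to an F statistic computed from between-group and within-group variability; SPSS converts this to the p-value compared against 0.05. A minimal sketch of the F-statistic computation on toy data (not the paper's results):

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for several independent samples.
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                       # number of groups
    N = sum(len(g) for g in groups)       # total sample size
    grand = sum(sum(g) for g in groups) / N
    # Between-group sum of squares, k - 1 degrees of freedom.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares, N - k degrees of freedom.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Toy data: identical groups give F = 0; a shifted group inflates F.
print(one_way_anova_F([[1, 2, 3], [1, 2, 3]]))    # no between-group variation
print(one_way_anova_F([[1, 2, 3], [11, 12, 13]]))  # clearly separated means
```

A large F (relative to the F-distribution critical value for the given degrees of freedom) corresponds to a p-value below 0.05, i.e. at least one randomization's mean differs.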


Minimum objective values variation with randomness, population and iterations via SSA for 11-level case.


Tukey test results for different randomness via SSA for 11-level case (10th sub-scenario).


The results in Fig. 4 show that UR and RR perform better than the others for APSO under the tenth sub-scenario of the 11-level case, whereas Fig. 5 shows that, for the SSA algorithm, all randomizations perform the same under similar conditions except WR. The remaining results are presented in the tables and figures that follow.

Table 3 shows the impact of the randomizations on the APSO algorithm under varying circumstances and lists the minimum value for each case together with the randomization that produced it. Figure 6 shows how the ranks vary across the sub-scenarios according to the post-hoc test on the mean values for APSO. The corresponding results for the SSA algorithm are given in Table 4 and Fig. 7. All of these tables and figures show that different randomizations yield different minimum and mean values. RR performed best with APSO, providing the best minimum values and first-ranked average values. With the SSA algorithm, although the best average values come from several randomizations, the best minimum values are mostly provided by NR. Different randomizations thus perform better with different algorithms. Moreover, the randomization that gives the better mean results does not necessarily give the best extreme results.
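The randomization enters the search through the stochastic perturbation term of the position update. As a minimal sketch, assuming the standard accelerated-PSO update form (each particle drifts toward the global best plus a scaled random term; the constants and the default normal draw are illustrative, not the paper's settings):

```python
import random

def apso_step(positions, gbest, alpha=0.2, beta=0.5,
              rand=lambda: random.gauss(0.0, 1.0)):
    """One accelerated-PSO position update for all particles.
    (1 - beta)*x keeps part of the current position, beta*g pulls toward
    the global best, and alpha*rand() injects the randomization under
    study -- swapping `rand` swaps ER/NR/RR/UR/WR behavior."""
    return [[(1 - beta) * x + beta * g + alpha * rand()
             for x, g in zip(p, gbest)]
            for p in positions]

# With alpha = 0 the swarm moves deterministically halfway to gbest.
print(apso_step([[0.0, 0.0]], [1.0, 1.0], alpha=0.0, beta=0.5))
```

Because only the distribution behind `rand` changes between the five variants, the differing ranks in Tables 3 and 4 isolate the effect of the randomization itself.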


Post-hoc test results for APSO.


Post-hoc test results for SSA.


Finally, exploiting the ample sample size and the central limit theorem [20], the two algorithms are compared in terms of their mean objective values via an independent t-test. For smaller sample sizes, the Mann–Whitney U test [21] can be used instead. The significance criterion is set at 95%. The results of this test are tabulated below.

Tables 5, 6, 7, 8, and 9 show that the type of randomness affects the independent t-test results. With certain types of randomness, the algorithms may perform equally well, while with others one algorithm outperforms the other. Moreover, different algorithms may perform better under different randomness, depending on the logic behind each algorithm and its mathematical modeling. In these tables, wherever the significance value is less than the set threshold (0.05), the algorithms are shown to perform differently, and the next column indicates which algorithm gives the lower mean value (better performance). The no-free-lunch theorem also explains the differing performance of the algorithms on different problem statements [22]. The convergence patterns of the five randomization-based APSO and SSA algorithms for an 11-level scenario with population and maximum iterations of 500 each are shown in Figs. 8 and 9, respectively.
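The two-sample comparison behind these tables can be sketched with Welch's form of the independent t-test, which does not assume equal variances (a common SPSS default; the data below are toy values, not the paper's results):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent
    samples; statistics software converts these to the reported p-value."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    sa, sb = va / len(a), vb / len(b)
    t = (ma - mb) / math.sqrt(sa + sb)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (sa + sb) ** 2 / (sa ** 2 / (len(a) - 1) + sb ** 2 / (len(b) - 1))
    return t, df

# Identical samples give t = 0 (no evidence of a mean difference).
print(welch_t([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```

A significance value below 0.05 corresponds to a |t| exceeding the critical value for the computed degrees of freedom, and the sign of t indicates which algorithm has the lower (better) mean objective value.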


Convergence behavior of APSO algorithm with different randomizations (11-level, population = 500).


Convergence behavior of SSA algorithm with different randomizations (11-level, population = 500).
