An Enhanced Tunicate Swarm Algorithm with Symmetric Cooperative Swarms for Training Feedforward Neural Networks (2024)

1. Introduction

Feedforward neural networks (FNNs) are fundamental brain-inspired networks and powerful mathematical constructs for nonlinear regression and categorization challenges. FNNs exhibit outstanding parallel processing, universal nonlinear approximation, flexible adaptability, a simple network layout, substantial assessment precision, fault-tolerant collaborative storage, strong self-regulation, and self-organization. Their overarching objective is to integrate external signals with the established network to update the weights and thresholds and to identify the most appropriate solution. FNNs have been extensively trained using various tactics, such as triangulation topology aggregation optimization (TTAO) [1], the propagation search algorithm (PSA) [2], subtraction average-based optimization (SABO) [3], snow ablation optimization (SAO) [4], the waterwheel plant algorithm (WWPA) [5], and Young’s double-slit experiment (YDSE) [6].

Gölcük et al. assembled an augmented arithmetic optimization technology to instruct FNNs, and this tactic showcased attractive resilience and long-term stability to yield an extensive computational solution and lessen the transmission error [7]. Bilski et al. adopted a fast computational method to instruct FNNs. This tactic exhibited tremendous superiority and adaptability to acquire greater evaluation predictability and superior categorization precision [8]. Weber et al. deployed a physically enhanced training method to instruct FNNs. This tactic demonstrated excellent sustainability and practicality to maintain lower dissemination errors and a superior deviation threshold [9]. Konar et al. crafted a noise-robust image classification approach to instruct FNNs, and this tactic exhibited considerable adaptability and self-management in acquiring the most suitable identification variables [10]. Elansari et al. crafted a genetic method to instruct radial FNNs, and this tactic manifested fantastic generalization and comprehension to transmit a higher-quality estimated solution [11]. Chai et al. described a genetic method with a mutation strategy to instruct FNNs, and this tactic received sufficient engagement and sustainability to inhibit preordained convergence and to achieve the most suitable precision [12]. Pan et al. deployed an integrated bat technology with the tabu search technique to instruct FNNs, and this tactic depicted abundant functionality and practicality to cultivate a better integration solution [13]. Lai et al. pursued metaheuristic algorithms to instruct FNNs. This tactic established considerable adaptability and competitiveness to recognize the appropriate computational component solution and upgraded the categorization integrity [14]. Bemporad et al. pursued a sequential least-squares and alternating direction approach to instruct FNNs. This tactic demonstrated instructive competitiveness and functionality in attaining superior connection weights and deviation thresholds [15]. Wang et al. explored an ameliorated back-propagation technology to instruct FNNs, and this tactic showed excellent sustainability and practicality to accomplish the desirable connection weight and deviation threshold [16]. Bülbül et al. implemented a genetic algorithm to instruct FNNs. This tactic exhibited extremely high sustainability and accessibility in delivering the most desirable operational solution [17]. Javanshir et al. attempted metaheuristic approaches to instruct FNNs. These tactics required a consolidated network layout to mitigate the transmission error and maximize the attraction precision [18]. Schreuder et al. implemented Bayesian hyper-heuristics to instruct FNNs. This tactic depicted sustainability and accessibility in maintaining assessment errors and accurate predictions [19]. Özden et al. presented the COOT optimization approach to instruct FNNs, and this tactic highlighted remarkable adaptability and productivity to accomplish high classification precision and greater collaboration efficiency [20].

Maddaiah et al. integrated a modified cuckoo search method to instruct FNNs, and this tactic exhibited outstanding superiority and consistency in generating the lowest transmission deviation [21]. Arthur et al. integrated a recursive least-squares approach to instruct FNNs, and this tactic was trustworthy and stable for gathering appropriate solutions and training variables [22]. Liu et al. employed an adaptive momental bound method to instruct FNNs, and this tactic maintained remarkable durability and adaptability to deliver better assessment metrics and greater categorization accuracy [23]. Atta et al. employed a modified weighted chimp optimization methodology to instruct FNNs, and this tactic showed desirable feasibility and reliability in distinguishing the most suitable weight and bias [24]. Baştemur et al. crafted a marine predator approach to instruct FNNs, and this tactic maintained influential mining and extraction to generate the most advantageous theoretical solution [25]. Li et al. introduced a coevolutionary learning algorithm to instruct FNNs, and this tactic featured a minimal material distribution, strong practicality, and adequate contraction productivity [26]. Yang et al. constructed a pelican optimization algorithm to instruct FNNs, and this tactic featured a profound, broad investigation and comprehension to recognize the most appropriate measured solution and integration precision [27]. Zhang et al. evolved a genetic algorithm with the feature selection method to instruct FNNs, and this tactic was claimed to be more desirable and adaptable in accomplishing an acceptable transmission error and appropriate quantum connectivity [28]. Wang et al. discovered an alternative gradient compression algorithm to instruct FNNs, and this tactic exhibited substantial population variation and expansive recognition to acquire outstanding evaluation efficiency and convergence acceleration [29]. To summarize, the traditional techniques exhibit insufficient resolution efficiency, substantial resource waste, frequent unanticipated convergence, and multidimensional proliferation. Metaheuristic algorithms have the advantages of upward utilization productivity, superb adaptability, high stabilization, potent self-regulation, flexible extension, simplicity, robust parallelism, and an accessible algorithmic combination. Adding advanced search strategies, adopting different encoding forms, and combining them with other algorithms can enhance the convergence speed and calculation accuracy.

The TSA is a gradient-free optimization technology that mimics the jet motorization and swarm scavenging of tunicates to recognize food resources and acquire a globally desirable solution [30]. The differential sequencing alteration operator is incorporated into the TSA to enhance its availability and practicability, remedying marginal estimation precision, sluggish cooperation efficiency, and search stagnation. The enhanced TSA (ETSA), with symmetric cooperative swarms, is deployed to instruct FNNs, aiming to customize the deviation thresholds of the neurons and the connection weights between the various layers according to the transmission error between the anticipated input and the authentic output, which accomplishes the lowest dissemination error and the desired neural layout. The ETSA features a straightforward algorithm architecture, excellent control parameters, great traversal efficiency, strong stability, and easy implementation, avoiding unanticipated convergence and identifying a consistent solution. The experimental results show that the ETSA integrates exploration and exploitation to enhance stability and robustness and exhibits effectiveness and feasibility in obtaining a faster convergence speed, superior calculation accuracy, and greater categorization accuracy.

The main contributions are summarized as follows: (1) An enhanced tunicate swarm algorithm (ETSA) is utilized to train FNNs. (2) The differential sequencing alteration operator broadens the identification scope, enriches population creativity, expedites computational productivity, and avoids search stagnation. (3) The ETSA integrates utilization and extraction to generate the most advantageous transmission error and has strong stability and robustness to accelerate optimization efficiency. (4) The experimental results confirm that the ETSA exhibits remarkable feasibility and superiority in obtaining a faster convergence speed, superior calculation accuracy, higher categorization precision, and lower transmission errors.

This article is organized as follows: Section 2 summarizes the conceptual description of FNNs. Section 3 details the TSA. Section 4 describes the ETSA. Section 5 illustrates the ETSA-based FNNs. The experimental results and analysis are presented in Section 6. The conclusions and future research are mentioned in Section 7.

2. Conceptual Description of Feedforward Neural Networks

FNNs are constructed from numerous neurons distributed across the input, hidden, and output layers, each applying an activation function. The node connections are one-way transmissions. Figure 1 depicts a three-layer FNN.

The net input of node j is estimated as

s_j = \sum_{i=1}^{n} (W_{i,j} X_i) - \theta_j, \quad j = 1, 2, \ldots, h

where n is the number of input nodes, W_{i,j} is the connection weight from the i-th input node to the j-th hidden node, X_i is the i-th input node, and θ_j is the j-th hidden node’s deviation threshold.

The hidden layer’s output value is estimated as

S_j = \mathrm{sigmoid}(s_j) = \frac{1}{1 + \exp(-s_j)}, \quad j = 1, 2, \ldots, h

The net input and the output of node k are estimated as

o_k = \sum_{j=1}^{h} (w_{j,k} S_j) - \theta_k, \quad k = 1, 2, \ldots, m

O_k = \mathrm{sigmoid}(o_k) = \frac{1}{1 + \exp(-o_k)}, \quad k = 1, 2, \ldots, m

where h is the number of hidden nodes, m is the number of output nodes, w_{j,k} is the connection weight from the j-th hidden node to the k-th output node, S_j is the output of hidden node j, and θ_k is the k-th output node’s deviation threshold.
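As a concrete illustration of the equations above, the following minimal sketch computes the forward pass of such a three-layer FNN (Python with NumPy; the function and variable names such as fnn_forward, W_ih, and theta_h are illustrative and not taken from the paper):

```python
import numpy as np

def sigmoid(z):
    # Logistic activation used for both the hidden and output layers.
    return 1.0 / (1.0 + np.exp(-z))

def fnn_forward(X, W_ih, theta_h, w_ho, theta_o):
    """Forward pass of a three-layer FNN.

    X       : (n,)   input vector
    W_ih    : (n, h) input-to-hidden connection weights W_{i,j}
    theta_h : (h,)   hidden-node deviation thresholds theta_j
    w_ho    : (h, m) hidden-to-output connection weights w_{j,k}
    theta_o : (m,)   output-node deviation thresholds theta_k
    """
    s = X @ W_ih - theta_h   # net input of each hidden node, s_j
    S = sigmoid(s)           # hidden-layer outputs, S_j
    o = S @ w_ho - theta_o   # net input of each output node, o_k
    return sigmoid(o)        # output-layer outputs, O_k

# Example with random weights: 4 inputs, 9 hidden nodes, 2 outputs.
rng = np.random.default_rng(0)
O = fnn_forward(rng.random(4), rng.standard_normal((4, 9)),
                rng.standard_normal(9), rng.standard_normal((9, 2)),
                rng.standard_normal(2))
print(O.shape)  # (2,)
```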

3. Tunicate Swarm Algorithm

The TSA simulates the jet motorization and swarm scavenging of tunicates to enhance computational efficiency and determine the globally accurate solution. Jet motorization utilizes gravity, the advection of deep ocean currents, and the interaction forces among individuals to avoid search conflicts and move toward the best location. Jet propulsion contains three components: avoiding conflicts between search individuals, moving toward the direction of the best individual, and remaining close to the best individual. Swarm scavenging utilizes the tunicate’s neural perception of water flow changes and individual luminescence to determine the companions’ locations and move closer to the target food source, thereby achieving the group’s goal of successfully finding food. Figure 2 emphasizes the tunicate’s swarm foraging.

3.1. Avoiding Conflicts between Search Individuals

To avoid conflicts and overlaps between individuals, the vector A is utilized to calculate each tunicate’s new location, enhancing the computational efficiency and improving the convergence accuracy. It is estimated as

A = \frac{G}{M}

G = c_2 + c_3 - F

F = 2 \cdot c_1

where G is the gravitational force; F is the advection of the turbulent ocean currents; and c_1, c_2, and c_3 are random variables in [0, 1].

The vector M denotes the interaction force between search individuals, which is estimated as

M = P_{\min} + c_1 \cdot (P_{\max} - P_{\min})

where P_{min} = 1 is the minimum speed, and P_{max} = 4 is the maximum speed.

3.2. Movement toward the Direction of the Best Neighbor

To avoid neighborhood conflicts, the search individual moves toward the optimal tunicate position, improving the search efficiency and accuracy. The separation between the food source and the search individual must be assessed to locate the most appropriate direction, which is estimated as

PD = \left| FS - rand \cdot P_P(t) \right|

where PD is the distance between the food source and the search individual, FS is the food source, t symbolizes the current iteration, P_P(t) is the position of the search individual, and rand is a random variable in [0, 1].

3.3. Convergence toward the Best Individual

Each tunicate moves toward the best individual to optimize the search efficiency. The ideal tunicate has a better fitness value and represents a better candidate solution, which accelerates the search process and increases the probability of obtaining an accurate solution.

P_P(t) = \begin{cases} FS + A \cdot PD, & \text{if } rand \geq 0.5 \\ FS - A \cdot PD, & \text{if } rand < 0.5 \end{cases}

where P_P(t) is the position of the search individual after the corresponding movement.

3.4. Swarm Behavior

The search individuals adopt swarm behavior to gather around the food source and to adjust their positions, which retains the optimal and suboptimal solutions to improve the search efficiency and to determine the best location.

P_P(t+1) = \frac{P_P(t) + P_P(t+1)}{2 + c_1}

where P_P(t+1) is the reorganized position of the tunicate in the next generation relative to the food source.
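A minimal sketch of one TSA position update, combining the jet motorization equations above with the swarm behavior (Python with NumPy; the function and variable names such as tsa_update are illustrative and not taken from the paper):

```python
import numpy as np

P_MIN, P_MAX = 1.0, 4.0  # minimum and maximum interaction speeds

def tsa_update(position, food_source, rng):
    """One TSA update of a single tunicate's position toward the food source."""
    c1, c2, c3, r = rng.random(4)
    F = 2.0 * c1                      # advection of turbulent ocean currents
    G = c2 + c3 - F                   # gravitational force
    M = P_MIN + c1 * (P_MAX - P_MIN)  # interaction force between individuals
    A = G / M                         # conflict-avoidance vector
    PD = np.abs(food_source - r * position)   # distance to the food source
    # Jet motorization: move toward or away from the food source.
    new_pos = food_source + A * PD if r >= 0.5 else food_source - A * PD
    # Swarm behavior: combine the current and updated positions.
    return (position + new_pos) / (2.0 + c1)

rng = np.random.default_rng(1)
pos = rng.uniform(-1, 1, size=5)      # 5-dimensional search agent
fs = np.zeros(5)                      # assumed best (food-source) position
print(tsa_update(pos, fs, rng))
```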

Algorithm 1 portrays the pseudocode of the TSA.

Algorithm 1: TSA
Input: Tunicate population P_P
Output: The most satisfactory solution FS
Procedure: TSA
  Initializing the parameters A, G, F, M, and T
  Set P_min ← 1, P_max ← 4, and Swarm ← 0
  while t < T do
    for i ← 1 to 2 do /Estimating swarm scavenging behavior via looping/
      FS ← ComputeFitness(P_P) /Estimating the fitness value/
      /Jet motorization behavior/
      c_1, c_2, c_3, rand ← Rand() /Rand() symbolizes a random variable in [0, 1]/
      M ← P_min + c_1 · (P_max − P_min), F ← 2 · c_1, G ← c_2 + c_3 − F
      A ← G / M, PD ← |FS − rand · P_P(t)|
      /Swarm scavenging behavior/
      if (rand ≥ 0.5) then
        Swarm ← Swarm + FS + A · PD
      else
        Swarm ← Swarm + FS − A · PD
      end if
    end for
    P_P(t) ← Swarm / (2 + c_1)
    Swarm ← 0
    Amending the parameters A, G, F, and M
    t ← t + 1
  end while
  return FS
end procedure
procedure ComputeFitness(P_P)
  for i ← 1 to n do /n symbolizes the population size/
    FITP[i] ← FitnessFunction(P_P(i, :)) /Estimating the fitness value of each search agent/
  end for
  FITP_best ← BEST(FITP[ ]) /Estimating the best fitness value via the BEST function/
  return FITP_best
end procedure
procedure BEST(FITP)
  Best ← FITP[0]
  for i ← 1 to n do
    if (FITP[i] < Best) then
      Best ← FITP[i]
    end if
  end for
  return Best /Return the best fitness value/
end procedure

4. Enhanced Tunicate Swarm Algorithm

The differential sequencing alteration operator exhibits magnificent sustainability and dependability to mitigate slow convergence, strengthen informed extraction, and screen out top-performing tunicates [31]. The search agents are ranked according to their fitness values, i.e., the population is sorted in ascending order (from best to worst) based on the fitness value of each search agent, and the ranking is estimated as

R_i = N - i, \quad i = 1, 2, \ldots, N

where N is the population size and R_i is the ranking of the i-th tunicate. The higher the ranking, the better the search agent. After each tunicate has been ranked, its selection probability p_i is estimated as

p_i = \frac{R_i}{N}, \quad i = 1, 2, \ldots, N

Algorithm 2 portrays the differential sequencing alteration operator “DE/rand/1”. Individuals with a higher ranking have a greater probability of being selected as the base vector or the terminal vector in the mutation operator, the purpose being to pass good information about the population on to future generations. The differential sequencing alteration operator does not use the selection probability to choose the commencing (starting) vector: if both vectors of the differential vector were selected from the higher-ranked individuals, the search step size of the differential vector could rapidly decrease and lead to a local optimum.

Symmetry is a remarkable property in algorithms and applications, which reveals patterns within data, simplifies complex issues, and enhances computational efficiency. In this paper, we draw inspiration from symmetric cooperative swarms. During the optimization process, the entire tunicate swarm is divided based on the fitness values, and the population is sorted in ascending order (i.e., from best to worst), which enables the search agent to undertake different responsibilities and tasks and to promote the exploration and exploitation of the global solution.

Algorithm 2: Differential sequencing alteration operator of “DE/rand/1”
  Sorting the population, estimating the fitness values, and furnishing the screening probability p_i
  Arbitrarily extracting r_1 ∈ {1, …, N} {base vector index}
  while rand > p_{r_1} (p_{r_1} = 0.5) or r_1 == i
    Arbitrarily extracting r_1 ∈ {1, …, N}
  end
  Arbitrarily extracting r_2 ∈ {1, …, N} {terminal vector index}
  while rand > p_{r_2} (p_{r_2} = 0.5) or r_2 == r_1 or r_2 == i
    Arbitrarily extracting r_2 ∈ {1, …, N}
  end
  Arbitrarily extracting r_3 ∈ {1, …, N} {commencing vector index}
  while r_3 == r_2 or r_3 == r_1 or r_3 == i
    Arbitrarily extracting r_3 ∈ {1, …, N}
  end
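A minimal sketch of the ranking-based index selection in Algorithm 2 (Python with NumPy; the name select_mutation_indices is illustrative; fitness is assumed to be minimized, so the population is sorted in ascending order of fitness before ranks are assigned):

```python
import numpy as np

def select_mutation_indices(fitness, i, rng):
    """Pick base (r1), terminal (r2), and commencing (r3) indices for DE/rand/1.

    fitness : array of fitness values (lower is better)
    i       : index of the current target individual
    """
    N = len(fitness)
    order = np.argsort(fitness)            # ascending: best to worst
    rank = np.empty(N, dtype=int)
    rank[order] = N - np.arange(1, N + 1)  # R_i = N - i for the i-th best
    p = rank / N                           # selection probability p_i = R_i / N

    def pick(rank_biased, exclude):
        while True:
            r = int(rng.integers(N))
            if r in exclude:
                continue
            if rank_biased and rng.random() > p[r]:
                continue                   # re-draw: low-ranked individual rejected
            return r

    r1 = pick(True, {i})                   # base vector: rank-biased
    r2 = pick(True, {i, r1})               # terminal vector: rank-biased
    r3 = pick(False, {i, r1, r2})          # commencing vector: uniform
    return r1, r2, r3

rng = np.random.default_rng(2)
print(select_mutation_indices(rng.random(30), i=0, rng=rng))
```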

5. Enhanced Tunicate Swarm Algorithm-Based Feedforward Neural Networks

The overarching objective is to acquire the most realistic solution and the highest categorization accuracy on the retrained test samples. The transmission error estimates the disparity between the anticipated input and the authentic output. The first step is to encode the variables of the metaheuristic algorithm. There are three ways to represent the weights and biases: as a vector, as a matrix, or in binary form. The mathematical model of the ETSA operates on vectors, so a vector representation is used. The vector V symbolizes the connection weights and deviation thresholds, which is estimated as

V = \{W, \theta\} = \{W_{1,1}, W_{1,2}, \ldots, W_{n,h}, \theta_1, \theta_2, \ldots, \theta_h\}

where n is the number of input nodes, W_{i,j} is the connection weight, and θ_j is the deviation threshold.

After expressing the weights and biases of the FNN as vectors, the objective function of the neural network needs to be used to define the fitness value of the ETSA. Training an FNN aims to achieve the highest classification, approximation, or prediction accuracy for the training and test samples. Generally, performance is calculated as the difference between a network’s desired and actual outputs. The mean squared error (MSE) is estimated as

MSE = \sum_{i=1}^{m} (o_i^k - d_i^k)^2

where m is the number of output nodes, d_i^k is the desired output of the i-th output node for the k-th training sample, and o_i^k is the actual output.

FNNs require numerous training samples for training and evaluation. The average MSE value over all samples is estimated as

\overline{MSE} = \sum_{k=1}^{s} \frac{\sum_{i=1}^{m} (o_i^k - d_i^k)^2}{s}

where s is the number of training samples.

The objective fitness of retraining FNNs is estimated as

\text{Minimize: } F(\vec{V}) = \overline{MSE}
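A minimal sketch of how a candidate vector V can be decoded into FNN parameters and scored by the average MSE fitness (Python with NumPy; the names decode and fitness are illustrative; the layout also appends the hidden-to-output weights and output thresholds, which is an assumption about the full encoding rather than something stated in the vector definition above):

```python
import numpy as np

def decode(V, n, h, m):
    """Split a candidate vector V into FNN weight matrices and thresholds.

    Assumed layout: input-hidden weights, hidden thresholds,
    hidden-output weights, output thresholds.
    """
    i = 0
    W_ih = V[i:i + n * h].reshape(n, h); i += n * h
    theta_h = V[i:i + h];                i += h
    w_ho = V[i:i + h * m].reshape(h, m); i += h * m
    theta_o = V[i:i + m]
    return W_ih, theta_h, w_ho, theta_o

def fitness(V, X, D, n, h, m):
    """Average MSE of the FNN encoded by V over s training samples.

    X : (s, n) inputs, D : (s, m) desired outputs.
    """
    W_ih, theta_h, w_ho, theta_o = decode(V, n, h, m)
    s = len(X)
    total = 0.0
    for x, d in zip(X, D):
        S = 1.0 / (1.0 + np.exp(-(x @ W_ih - theta_h)))   # hidden outputs
        o = 1.0 / (1.0 + np.exp(-(S @ w_ho - theta_o)))   # actual outputs
        total += np.sum((o - d) ** 2) / s
    return total  # the ETSA minimizes this value

# Example: XOR-like toy data with n=3 inputs, h=7 hidden nodes, m=2 outputs.
n, h, m = 3, 7, 2
rng = np.random.default_rng(3)
V = rng.uniform(-1, 1, n * h + h + h * m + m)
X = rng.integers(0, 2, size=(4, n)).astype(float)
D = np.eye(2)[rng.integers(0, 2, size=4)]
print(fitness(V, X, D, n, h, m))
```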

Algorithm 3 portrays the ETSA-based FNNs. Figure 3 emphasizes the flowchart of the ETSA for FNNs.

Algorithm 3: ETSA-based FNNs
Input: Tunicate population P_P
Output: The most satisfactory solution FS
Procedure: ETSA
  Initializing the parameters A, G, F, M, and T and the structure of the FNNs; each tunicate symbolizes a set of connection weights and deviation thresholds
  Set P_min ← 1, P_max ← 4, and Swarm ← 0
  Estimating each tunicate’s fitness value via Equation (17)
  Identifying the best tunicate
  while t < T do
    Sorting the population, estimating the fitness values, and furnishing the screening probability p_i
    /Differential sequencing alteration operator stage/
    Arbitrarily extracting r_1 ∈ {1, …, N} {base vector index}
    while rand > p_{r_1} (p_{r_1} = 0.5) or r_1 == i
      Arbitrarily extracting r_1 ∈ {1, …, N}
    end
    Arbitrarily extracting r_2 ∈ {1, …, N} {terminal vector index}
    while rand > p_{r_2} (p_{r_2} = 0.5) or r_2 == r_1 or r_2 == i
      Arbitrarily extracting r_2 ∈ {1, …, N}
    end
    Arbitrarily extracting r_3 ∈ {1, …, N} {commencing vector index}
    while r_3 == r_2 or r_3 == r_1 or r_3 == i
      Arbitrarily extracting r_3 ∈ {1, …, N}
    end /End of the differential sequencing alteration operator stage/
    for i ← 1 to 2 do /Estimating swarm scavenging behavior via looping/
      FS ← ComputeFitness(P_P) /Estimating the fitness value/
      /Jet motorization behavior/
      c_1, c_2, c_3, rand ← Rand() /Rand() symbolizes a random variable in [0, 1]/
      M ← P_min + c_1 · (P_max − P_min), F ← 2 · c_1, G ← c_2 + c_3 − F
      A ← G / M, PD ← |FS − rand · P_P(t)|
      /Swarm scavenging behavior/
      if (rand ≥ 0.5) then
        Swarm ← Swarm + FS + A · PD
      else
        Swarm ← Swarm + FS − A · PD
      end if
    end for
    P_P(t) ← Swarm / (2 + c_1)
    Swarm ← 0
    Amending the parameters A, G, F, and M
    Estimating each tunicate’s fitness value via Equation (17)
    t ← t + 1
  end while
  return FS
end procedure
procedure ComputeFitness(P_P)
  for i ← 1 to n do /n symbolizes the population size/
    FITP[i] ← FitnessFunction(P_P(i, :)) /Estimating the fitness value of each search agent/
  end for
  FITP_best ← BEST(FITP[ ]) /Estimating the best fitness value via the BEST function/
  return FITP_best
end procedure
procedure BEST(FITP)
  Best ← FITP[0]
  for i ← 1 to n do
    if (FITP[i] < Best) then
      Best ← FITP[i]
    end if
  end for
  return Best /Return the best fitness value/
end procedure

Computational Complexity

Time complexity: The population size of the ETSA is symbolized by N, the permitted number of iterations by T, and the solution dimension by D. The initialization necessitates O(N × D). Additionally, evaluating each tunicate’s fitness over all iterations requires O(N × D × T). Ultimately, the ETSA administers the fundamental operation-related procedures, which necessitate O(M), where M symbolizes the jet motorization and swarm scavenging operations of the tunicates for superior discovery and extraction. Hence, the cumulative time complexity of the ETSA is O(N × D × T × M).

Space complexity: This specifies the comprehensive volume of space occupied by the methodology. The cumulative space complexity of the ETSA is O(N × D). The ETSA maintains tremendous durability and adaptability to accomplish the most advantageous solution.

6. Experimental Results and Analysis

6.1. Experimental Setup

The numerical experiment was implemented on a computer with a LAPTOP-OAO85F38 device, a 12th Gen Intel(R) Core(TM) i9-12900HX 2.30 GHz processor, and 64 GB of memory with a Windows 11 system. All algorithms were programmed in MATLAB R2018b.

6.2. Test Samples

The test samples were extracted from the machine learning repository of the University of California, Irvine (UCI) to investigate the dependability and durability of the ETSA. Table 1 conveys the descriptions of the samples.

6.3. Parameter Configuration

The configured parameters remained the exemplary empirical values extracted from the source publications. The ETSA was distinguished from ETTAO (TTAO enhanced with the differential sequencing alteration operator [31]), EPSA (PSA enhanced with the simplex method [32]), SABO, SAO, EWWPA (WWPA enhanced with the Lévy flight strategy [31] and the elite opposition-based learning strategy [32]), YDSE, and the TSA. Table 2 conveys the configured parameters of each technique.

6.4. Results and Analysis

For each technique, the population size was 30, the permitted number of iterations was 500, and each technique was executed 20 consecutive times. Best, Worst, Mean, and Std symbolize the optimal value, the worst value, the mean value, and the standard deviation, respectively. The ranking was based on accuracy, where accuracy symbolizes the categorization rate on the test samples. These assessment metrics comprehensively convey the superiority and predictability of each technique.
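As an illustration of how these statistics summarize the 20 consecutive executions, a minimal sketch follows (Python with NumPy; final_mse is a randomly generated placeholder array, not actual results from the paper, and the use of the sample standard deviation is an assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
final_mse = rng.uniform(0.31, 0.42, size=20)  # placeholder: final MSE of 20 runs

best, worst = final_mse.min(), final_mse.max()
mean, std = final_mse.mean(), final_mse.std(ddof=1)  # ddof=1: sample standard deviation
print(f"Best={best:.6f} Worst={worst:.6f} Mean={mean:.6f} Std={std:.6f}")
```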

Table 3 conveys the experimental results of the extensive samples. The comparative techniques were regulated to instruct the FNNs, aiming to customize the deviation thresholds between neurons and the connection weights between various layers according to the transmission error between the anticipated input and the authentic output, which accomplishes the lowest dissemination error and desired neural layout. By retraining extensive samples, the ETSA was distinguished from alternative techniques to validate its scalability and profitability. For Scale, Seeds, Cancer, Diabetes, Parkinson, and Zoo, the Best, Worst, Mean, and Std of the ETSA were more productive than those of ETTAO, EPSA, SABO, SAO, EWWPA, YDSE, and TSA. The assessed solutions of the ETSA were substantially boosted, and in contrast with the TSA, the ETSA maintained outstanding sustainability and adaptability. The ETSA received the highest cumulative categorization accuracy and ranking. The ETSA exhibited substantial practicality and constructiveness in refraining from investigating stalemates and recognizing the most appropriate detection solution. For Balloon, the SAO, YDSE, and ETSA exhibited satisfactory trustworthiness and dependability in delivering international customized solutions. The other measured solutions of the ETSA were more productive than those of ETTAO, EPSA, SABO, EWWPA, and TSA. The ETSA maintained a distinguished ranking and the most remarkable categorization accuracy, which received a trustworthy investigation and extraction to maximize search productivity and promote estimated precision. For Blood, Survival, Liver, Iris, and Splice, the Best, Worst, and Mean of the ETSA were more productive than those of ETTAO, EPSA, SABO, SAO, EWWPA, YDSE, and TSA, and the estimated solutions of the ETSA were substantially strengthened compared with the TSA. For Wine and Statlog, the Best of the ETSA was weaker than those of SABO, but the Best, Worst, and Mean of the ETSA were more productive than those of ETTAO, EPSA, SAO, EWWPA, YDSE, and TSA. The Std of the ETSA was better than those of the other algorithms. The categorization accuracy and ranking of the ETSA were more productive than those of the other techniques. For Gene and WDBC, the Best, Std, Worst, Mean, and categorization accuracy of the ETSA were more accurate than those of the other techniques. For XOR, the ETSA guaranteed outstanding dependability and sustainability to recognize the most appropriate recognition solution. The Best, Worst, Mean, Std, and categorization accuracy of the ETSA were of higher quality than those of ETTAO, EPSA, SABO, SAO, EWWPA, YDSE, and TSA. The differential sequencing alteration operator demonstrates an adaptable localized extraction and search screening to broaden the identification scope, enrich population creativity, expedite computational productivity, and avoid search stagnation. The ETSA mimics jet motorization and swarm scavenging to mitigate directional collisions and to maintain an optimal solution that is customized and regional. The ETSA has a simplified-technique construction, higher-quality configured parameters, excellent extraction productivity, dependable stability, and intuitive implementation. The ETSA exhibits phenomenal sustainability and practicality to accelerate the estimation efficiency and to establish the transmission error. Moreover, it juxtaposes extraction and investigation to accomplish a faster convergence speed, greater measured precision, and superior categorization accuracy.

The Wilcoxon rank-sum test was established to differentiate the ETSA from the other techniques [33]. p < 0.05 denotes a noteworthy variation, p ≥ 0.05 denotes a non-noteworthy variation, and N/A symbolizes “not applicable”. Table 4 conveys the p-values of the Wilcoxon rank-sum test.
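A minimal sketch of how such a pairwise comparison can be computed (Python with SciPy; the two arrays stand in for the 20 recorded MSE values of two techniques on one dataset and are randomly generated here, not taken from Table 3):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(4)
etsa_runs = rng.normal(0.32, 0.01, size=20)   # placeholder results of the ETSA
tsa_runs = rng.normal(0.36, 0.04, size=20)    # placeholder results of the TSA

stat, p_value = ranksums(etsa_runs, tsa_runs)  # two-sided Wilcoxon rank-sum test
print(f"p = {p_value:.4g}: "
      + ("noteworthy variation" if p_value < 0.05 else "non-noteworthy variation"))
```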

Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19 and Figure 20 emphasize the interaction lines of the ETSA and other techniques via extensive samples. The interaction lines aimed to assess the optimized productivity and migration investigation, which intuitively highlighted the estimated precision and resolution velocity of the FNNs mastered by the ETSA and other techniques and genuinely explored the repetition procedure and practicality of multiple methods. The approximated solutions of the ETSA for the Blood, Scale, Survival, Liver, Seeds, Wine, Iris, Statlog, XOR, Balloon, Cancer, Diabetes, Gene, Parkinson, Splice, WDBC, and Zoo samples were substantially augmented compared with the TSA. The majority of the Best, Worst, Mean, and Std of the ETSA were more substantial than those of ETTAO, EPSA, SABO, SAO, EWWPA, YDSE, and TSA. The ETSA maintained the most essential categorization accuracy and ranking in extensive techniques. The convergence velocity and estimation accuracy of the ETSA were more productive than those of ETTAO, EPSA, SABO, SAO, EWWPA, YDSE, and TSA, and the ETSA emphasized unparalleled reliability and practicability in disregarding search inhibition and accomplishing an affordable error cost and an appropriate network layout. The attraction effectiveness and established solutions of the ETSA outperformed those of ETTAO, EPSA, SABO, SAO, EWWPA, YDSE, and TSA. The ETSA maintained outstanding consistency and trustworthiness in customizing the layers’ connection weights and neurons’ deviation thresholds per the transmission error between the anticipated input and the authentic output. The ETSA mimics jet motorization and swarm scavenging of tunicates to mitigate directional collisions and restrict exaggerated convergence alongside manipulating the investigation and extraction to renew the position of each tunicate toward the best solution and to identify the most remarkable solutions. The ETSA exhibits substantial durability and adaptability to instruct FNNs and acquires the most advantageous transmission errors according to the regulated connection weight and deviation threshold, which is a feasible and successful technique for instructing FNNs.

Figure 21, Figure 22, Figure 23, Figure 24, Figure 25, Figure 26, Figure 27, Figure 28, Figure 29, Figure 30, Figure 31, Figure 32, Figure 33, Figure 34, Figure 35, Figure 36 and Figure 37 emphasize the ANOVA evaluation of the ETSA and other techniques via extensive samples. The standard deviation remains an essential instrument for quantifying the magnitude of the average variation of samples, which is required to appropriately portray the consistency and predictability of comparative techniques in instructing FNNs. This method involves fundamental discovery and extraction to receive more trustworthy empirical data registered by the diminished standard deviation. For the Blood, Scale, Survival, Liver, Seeds, Wine, Iris, Statlog, XOR, Balloon, Cancer, Diabetes, Gene, Parkinson, Splice, WDBC, and Zoo datasets, most Std values of the ETSA were more substantial than those of ETTAO, EPSA, SABO, SAO, EWWPA, YDSE, and TSA. The ETSA demonstrated outstanding consistency and sustainability to mitigate discovery inhibition and upgrade exploratory productivity. The differential sequencing alteration operator was incorporated into the TSA to remedy the marginalized estimated precision, slow cooperation efficiency, and stagnation of intuitive anticipation. The ETSA integrates the jet motorization and swarm scavenging of tunicates to establish the most appropriate spot and to arrive at an accurate, customized solution. The ETSA delivers superior integration effectiveness and a more substantial solution to recognize an approximate steady standard deviation. Most of the Best, Worst, Mean, Std, categorization accuracy, and ranking values of the ETSA were more significant than those of ETTAO, EPSA, SABO, SAO, EWWPA, YDSE, and TSA. The ETSA combines outstanding dependability and deliverability to extend customized productivity and acquire an accurate expansive solution.

Statistically, the ETSA with symmetric cooperative swarms was employed to resolve the FNNs for the following reasons. First, the ETSA features a straightforward algorithm architecture, excellent control parameters, great traversal efficiency, strong stability, and easy implementation. Second, jet motorization leverages gravitational pull, the advection of turbulent ocean currents, and the collaboration force to avoid search conflicts and to improve each individual’s mobility toward the most appropriate location. Swarm scavenging utilizes the tunicate’s neural perception of variations in water circulation and distinct luminosity to recognize its colleagues’ locations and migrate together toward the food source. Third, the ranking-based mutation operator was introduced into the TSA. The differential sequencing alteration operator has adaptable localized extraction and search screening to broaden the identification scope, enrich population creativity, expedite computation productivity, and avoid search stagnation. The ETSA integrates exploration and exploitation to mitigate search stagnation and has sufficient stability and flexibility to acquire the finest solution. To summarize, the ETSA yields a faster convergence speed, superior calculation accuracy, and greater categorization accuracy.

7. Conclusions and Future Research

In this paper, an enhanced TSA based on a differential sequencing alteration operator was characterized to instruct FNNs, aiming to determine the best combination of the connection weight and deviation value according to the transmission error between the anticipated input and the authentic output. The differential sequencing alteration operator has adaptable localized extraction and search screening to broaden the identification scope, enrich population creativity, expedite computational productivity, and avoid search stagnation, which enhances the selection probability to filter out the optimal search agent. The ETSA with symmetric cooperative swarms exhibits a simplified-technique construction, higher-quality configured parameters, excellent extraction productivity, dependable stability, and intuitive implementation. The ETSA integrates the jet motorization and swarm scavenging of tunicates to explore superior computational alternatives and more accurate categorization precision. The ETSA integrates utilization and extraction to discover the most advantageous transmission error and has strong stability and robustness to accelerate optimized efficiency. Most of the ETSA’s computational solutions were superior to those of ETTAO, EPSA, SABO, SAO, EWWPA, YDSE, and TSA. The experimental results confirm that the ETSA exhibits remarkable feasibility and superiority in cultivating a faster convergence speed, superior calculation accuracy, higher categorization precision, and less transmission errors and is a productive and adequate technique for instructing FNNs.

In future research, the TSA will be studied regarding the following aspects: First, the convergence speed and calculation accuracy of the TSA will be enhanced by introducing innovative search strategies, employing unique coding methods, and integrating it with other algorithms. Second, the sustainability and dependability of the ETSA will be investigated by retraining multiple neural network layouts, such as convolutional neural networks, recurrent neural networks, generative adversarial networks, graph neural networks, long short-term memory networks, Hopfield networks, and deconvolutional networks. Third, the modified TSA will be used for the intelligent detection and intelligent control of distinctive understory crops. We will apply it to under-forest crop harvesting and planting machinery, precision plant-protection equipment, and related intelligent systems to achieve intelligent and portable machinery and equipment for under-forest crops. This research will allow us to fine-tune extraction and exploration to deliver the most advantageous universal measured solution.

Author Contributions

Conceptualization, J.Z. and C.D.; methodology, J.Z.; software, C.D.; validation, J.Z. and C.D.; formal analysis, J.Z.; investigation, C.D.; resources, C.D.; data curation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, C.D.; visualization, C.D.; supervision, J.Z.; project administration, J.Z.; funding acquisition, J.Z. and C.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Start-up Fund for Distinguished Scholars of West Anhui University under grant no. WGKQ2022052, School-level Quality Engineering (School-enterprise Cooperation Development Curriculum Resource Construction) under grant no. wxxy2022101, School-level Quality Engineering (Teaching and Research Project) under grant no. wxxy2023079, and PWMDIC design and application under grant no. WXCHX0045023110.

Data Availability Statement

All data used to support the findings of this study are included within the article.

Acknowledgments

The authors would like to thank everyone involved for their contributions to this article. They would also like to thank the editors and anonymous reviewers for their helpful comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhao, S.; Zhang, T.; Cai, L.; Yang, R. Triangulation Topology Aggregation Optimizer: A Novel Mathematics-Based Meta-Heuristic Algorithm for Continuous Optimization and Engineering Applications. Expert Syst. Appl. 2024, 238, 121744. [Google Scholar] [CrossRef]
  2. Qais, M.H.; Hasanien, H.M.; Alghuwainem, S.; Loo, K.H. Propagation Search Algorithm: A Physics-Based Optimizer for Engineering Applications. Mathematics 2023, 11, 4224. [Google Scholar] [CrossRef]
  3. Trojovskỳ, P.; Dehghani, M. Subtraction-Average-Based Optimizer: A New Swarm-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Biomimetics 2023, 8, 149. [Google Scholar] [CrossRef] [PubMed]
  4. Deng, L.; Liu, S. Snow Ablation Optimizer: A Novel Metaheuristic Technique for Numerical Optimization and Engineering Design. Expert Syst. Appl. 2023, 225, 120069. [Google Scholar] [CrossRef]
  5. Abdelhamid, A.A.; Towfek, S.; Khodadadi, N.; Alhussan, A.A.; Khafaga, D.S.; Eid, M.M.; Ibrahim, A. Waterwheel Plant Algorithm: A Novel Metaheuristic Optimization Method. Processes 2023, 11, 1502. [Google Scholar] [CrossRef]
  6. Abdel-Basset, M.; El-Shahat, D.; Jameel, M.; Abouhawwash, M. Young’s Double-Slit Experiment Optimizer: A Novel Metaheuristic Optimization Algorithm for Global and Constraint Optimization Problems. Comput. Methods Appl. Mech. Eng. 2023, 403, 115652. [Google Scholar] [CrossRef]
  7. Gölcük, İ.; Ozsoydan, F.B.; Durmaz, E.D. An Improved Arithmetic Optimization Algorithm for Training Feedforward Neural Networks under Dynamic Environments. Knowl.-Based Syst. 2023, 263, 110274. [Google Scholar] [CrossRef]
  8. Bilski, J.; Smoląg, J.; Kowalczyk, B.; Grzanek, K.; Izonin, I. Fast Computational Approach to the Levenberg-Marquardt Algorithm for Training Feedforward Neural Networks. J. Artif. Intell. Soft Comput. Res. 2023, 13, 45–61. [Google Scholar] [CrossRef]
  9. Weber, P.; Wagner, W.; Freitag, S. Physically Enhanced Training for Modeling Rate-Independent Plasticity with Feedforward Neural Networks. Comput. Mech. 2023, 72, 827–857. [Google Scholar] [CrossRef]
  10. Konar, D.; Sarma, A.D.; Bhandary, S.; Bhattacharyya, S.; Cangi, A.; Aggarwal, V. A Shallow Hybrid Classical–Quantum Spiking Feedforward Neural Network for Noise-Robust Image Classification. Appl. Soft Comput. 2023, 136, 110099. [Google Scholar] [CrossRef]
  11. Elansari, T.; Ouanan, M.; Bourray, H. Mixed Radial Basis Function Neural Network Training Using Genetic Algorithm. Neural Process. Lett. 2023, 55, 10569–10587. [Google Scholar] [CrossRef]
  12. Chai, J.; Bi, M.; Teng, X.; Yang, G.; Hu, M. A Mixed Mutation Strategy Genetic Algorithm for the Effective Training and Design of Optical Neural Networks. Opt. Fiber Technol. 2024, 82, 103600. [Google Scholar] [CrossRef]
  13. Pan, S.; Gupta, T.K.; Raza, K. BatTS: A Hybrid Method for Optimizing Deep Feedforward Neural Network. PeerJ Comput. Sci. 2023, 9, e1194. [Google Scholar] [CrossRef] [PubMed]
  14. Lai, W.Y.; Kuok, K.K.; Gato-Trinidad, S.; Rahman, M.R.; Bakri, M.K. Metaheuristic Algorithms to Enhance the Performance of a Feedforward Neural Network in Addressing Missing Hourly Precipitation. Int. J. Integr. Eng. 2023, 15, 273–285. [Google Scholar] [CrossRef]
  15. Bemporad, A. Training Recurrent Neural Networks by Sequential Least Squares and the Alternating Direction Method of Multipliers. Automatica 2023, 156, 111183. [Google Scholar] [CrossRef]
  16. Wang, L.; Ye, W.; Zhu, Y.; Yang, F.; Zhou, Y. Optimal Parameters Selection of Back Propagation Algorithm in the Feedforward Neural Network. Eng. Anal. Bound. Elem. 2023, 151, 575–596. [Google Scholar] [CrossRef]
  17. Bülbül, M.A. Optimization of Artificial Neural Network Structure and Hyperparameters in Hybrid Model by Genetic Algorithm: IOS–Android Application for Breast Cancer Diagnosis/Prediction. J. Supercomput. 2024, 80, 4533–4553. [Google Scholar] [CrossRef]
  18. Javanshir, A.; Nguyen, T.T.; Mahmud, M.P.; Kouzani, A.Z. Training Spiking Neural Networks with Metaheuristic Algorithms. Appl. Sci. 2023, 13, 4809. [Google Scholar] [CrossRef]
  19. Schreuder, A.; Bosman, A.; Engelbrecht, A.; Cleghorn, C. Training Feedforward Neural Networks with Bayesian Hyper-Heuristics. arXiv 2023, arXiv:2303.16912. [Google Scholar]
  20. Özden, A.; İşeri, İ. COOT Optimization Algorithm on Training Artificial Neural Networks. Knowl. Inf. Syst. 2023, 65, 3353–3383. [Google Scholar] [CrossRef]
  21. Maddaiah, P.N.; Narayanan, P.P. An Improved Cuckoo Search Algorithm for Optimization of Artificial Neural Network Training. Neural Process. Lett. 2023, 55, 12093–12120. [Google Scholar] [CrossRef]
  22. Arthur, B.J.; Kim, C.M.; Chen, S.; Preibisch, S.; Darshan, R. A Scalable Implementation of the Recursive Least-Squares Algorithm for Training Spiking Neural Networks. Front. Neuroinformatics 2023, 17, 1099510. [Google Scholar] [CrossRef] [PubMed]
  23. Liu, Y.; Li, D. AdaXod: A New Adaptive and Momental Bound Algorithm for Training Deep Neural Networks. J. Supercomput. 2023, 79, 17691–17715. [Google Scholar] [CrossRef]
  24. Atta, E.A.; Ali, A.F.; Elshamy, A.A. A Modified Weighted Chimp Optimization Algorithm for Training Feed-Forward Neural Network. PLoS ONE 2023, 18, e0282514. [Google Scholar] [CrossRef]
  25. Baştemur Kaya, C. On Performance of Marine Predators Algorithm in Training of Feed-Forward Neural Network for Identification of Nonlinear Systems. Symmetry 2023, 15, 1610. [Google Scholar] [CrossRef]
  26. Li, H.; Bai, L.; Gao, W.; Xie, J.; Huang, L. Many-Objective Coevolutionary Learning Algorithm with Extreme Learning Machine Auto-Encoder for Ensemble Classifier of Feedforward Neural Networks. Expert Syst. Appl. 2024, 246, 123186. [Google Scholar] [CrossRef]
  27. Yang, B.; Liang, B.; Qian, Y.; Zheng, R.; Su, S.; Guo, Z.; Jiang, L. Parameter Identification of PEMFC via Feedforward Neural Network-Pelican Optimization Algorithm. Appl. Energy 2024, 361, 122857. [Google Scholar] [CrossRef]
  28. Zhang, R.; Ma, X.; Zhang, C.; Ding, W.; Zhan, J. GA-FCFNN: A New Forecasting Method Combining Feature Selection Methods and Feedforward Neural Networks Using Genetic Algorithms. Inf. Sci. 2024, 669, 120566. [Google Scholar] [CrossRef]
  29. Wang, Z.; Duan, Q.; Xu, Y.; Zhang, L. An Efficient Bandwidth-Adaptive Gradient Compression Algorithm for Distributed Training of Deep Neural Networks. J. Syst. Archit. 2024, 150, 103116. [Google Scholar] [CrossRef]
  30. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A New Bio-Inspired Based Metaheuristic Paradigm for Global Optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  31. Yan, Z.; Zhang, J.; Zeng, J.; Tang, J. Nature-Inspired Approach: An Enhanced Whale Optimization Algorithm for Global Optimization. Math. Comput. Simul. 2021, 185, 17–46. [Google Scholar] [CrossRef]
  32. Yan, Z.; Zhang, J.; Tang, J. Path Planning for Autonomous Underwater Vehicle Based on an Enhanced Water Wave Optimization Algorithm. Math. Comput. Simul. 2021, 181, 192–241. [Google Scholar] [CrossRef]
  33. Dao, P.B. On Wilcoxon Rank Sum Test for Condition Monitoring and Fault Detection of Wind Turbines. Appl. Energy 2022, 318, 119209. [Google Scholar] [CrossRef]

Figure 1. Three-layer FNNs.

Figure 2. Tunicate’s swarm foraging.

Figure 3. Flowchart of ETSA for FNNs.

Figure 4. Interaction lines of Blood.
Figure 5. Interaction lines of Scale.
Figure 6. Interaction lines of Survival.
Figure 7. Interaction lines of Liver.
Figure 8. Interaction lines of Seeds.
Figure 9. Interaction lines of Wine.
Figure 10. Interaction lines of Iris.
Figure 11. Interaction lines of Statlog.
Figure 12. Interaction lines of XOR.
Figure 13. Interaction lines of Balloon.
Figure 14. Interaction lines of Cancer.
Figure 15. Interaction lines of Diabetes.
Figure 16. Interaction lines of Gene.
Figure 17. Interaction lines of Parkinson.
Figure 18. Interaction lines of Splice.
Figure 19. Interaction lines of WDBC.
Figure 20. Interaction lines of Zoo.

Figure 21. ANOVA evaluation of Blood.
Figure 22. ANOVA evaluation of Scale.
Figure 23. ANOVA evaluation of Survival.
Figure 24. ANOVA evaluation of Liver.
Figure 25. ANOVA evaluation of Seeds.
Figure 26. ANOVA evaluation of Wine.
Figure 27. ANOVA evaluation of Iris.
Figure 28. ANOVA evaluation of Statlog.
Figure 29. ANOVA evaluation of XOR.
Figure 30. ANOVA evaluation of Balloon.
Figure 31. ANOVA evaluation of Cancer.
Figure 32. ANOVA evaluation of Diabetes.
Figure 33. ANOVA evaluation of Gene.
Figure 34. ANOVA evaluation of Parkinson.
Figure 35. ANOVA evaluation of Splice.
Figure 36. ANOVA evaluation of WDBC.
Figure 37. ANOVA evaluation of Zoo.

Table 1. The descriptions of the samples.

Samples | Attribute | Class | Training | Testing | Input | Hidden | Output
Blood | 4 | 2 | 493 | 255 | 4 | 9 | 2
Scale | 4 | 3 | 412 | 213 | 4 | 9 | 3
Survival | 3 | 2 | 202 | 104 | 3 | 7 | 2
Liver | 6 | 2 | 227 | 118 | 6 | 13 | 2
Seeds | 7 | 3 | 139 | 71 | 7 | 15 | 3
Wine | 13 | 3 | 117 | 61 | 13 | 27 | 3
Iris | 4 | 3 | 99 | 51 | 4 | 9 | 3
Statlog | 13 | 2 | 178 | 92 | 13 | 27 | 2
XOR | 3 | 2 | 4 | 4 | 3 | 7 | 2
Balloon | 4 | 2 | 10 | 10 | 4 | 9 | 2
Cancer | 9 | 2 | 599 | 100 | 9 | 19 | 2
Diabetes | 8 | 2 | 507 | 261 | 8 | 17 | 2
Gene | 57 | 2 | 70 | 36 | 57 | 115 | 2
Parkinson | 22 | 2 | 129 | 66 | 22 | 45 | 2
Splice | 60 | 2 | 660 | 340 | 60 | 121 | 2
WDBC | 30 | 2 | 394 | 165 | 30 | 61 | 2
Zoo | 16 | 7 | 67 | 34 | 16 | 33 | 7

An Enhanced Tunicate Swarm Algorithm with Symmetric Cooperative Swarms for Training Feedforward Neural Networks (76)

Table 2. Configured parameters of each technique.

Technique | Configured Parameter | Value
ETTAO | Random variable r0 | [0, 1]
 | Random variable θ | [0, π]
 | Random variable r1 | [0, 1]
 | Random variable r2 | [0, 1]
 | Random variable r3 | [0, 1]
 | Random variable r4 | [0, 1]
 | Scaling factor F | 0.7
EPSA | Random variable l | [0, 1]
 | Random variable k | [0, 1]
 | Reflectivity α | 1
 | Expansion coefficient γ | 1.5
 | Compression coefficient β | 0.5
SABO | Random variable r | [0, 1]
SAO | Degree-day factor DDF | [0.35, 0.6]
 | Random variable θ | [−1, 1]
EWWPA | Random variable r1 | [0, 2]
 | Random variable r2 | [0, 1]
 | Random variable r3 | [0, 2]
 | Exponential number K | [0, 1]
 | Random variable F | [−5, 5]
 | Random variable C | [−5, 5]
YDSE | Random variable rand1 | [−1, 1]
 | Random variable rand2 | [−1, 1]
 | Constant δ | 0.38
 | Random variable r1 | [0, 1]
 | Random variable r2 | [0, 1]
 | Random variable r3 | [0, 1]
 | Random variable H | [−1, 1]
 | Random variable g | [−1, 1]
 | Constant value C | 10^20
TSA | Random variable c1 | [0, 1]
 | Random variable c2 | [0, 1]
 | Random variable c3 | [0, 1]
 | Initial speed Pmin | 1
 | Subordinate velocity Pmax | 4
 | Random variable rand | [0, 1]
ETSA | Random variable c1 | [0, 1]
 | Random variable c2 | [0, 1]
 | Random variable c3 | [0, 1]
 | Initial speed Pmin | 1
 | Subordinate velocity Pmax | 4
 | Random variable rand | [0, 1]
 | Scaling factor F | 0.7
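Table 2 samples c1, c2, c3, and rand uniformly from [0, 1] and fixes the initial speed Pmin = 1 and subordinate velocity Pmax = 4 for both TSA and ETSA; ETSA additionally reuses the scaling factor F = 0.7. As a rough illustration of where these parameters enter, the sketch below follows the standard TSA position update (jet propulsion followed by swarm behaviour); it is a simplified single-agent sketch under that assumption, not the paper's exact ETSA implementation with symmetric cooperative swarms.

```python
# Minimal sketch of one TSA position update using the Table 2 parameters.
# Follows the standard TSA formulation; the current position and food source
# (best solution found so far) are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(2)
p_min, p_max = 1, 4                    # initial and subordinate speeds from Table 2

def tsa_update(position, food_source):
    c1, c2, c3, rand = rng.random(4)   # random coefficients in [0, 1]
    f = 2.0 * c1                       # water-flow advection
    g = c2 + c3 - f                    # gravity force
    m = np.floor(p_min + c1 * (p_max - p_min))  # social force between agents
    a = g / m                          # jet-propulsion vector
    distance = np.abs(food_source - rand * position)
    candidate = food_source + a * distance if rand >= 0.5 else food_source - a * distance
    return (position + candidate) / (2.0 + c1)  # swarm behaviour (averaging step)

x = rng.uniform(-1, 1, 5)              # hypothetical 5-dimensional weight vector
best = rng.uniform(-1, 1, 5)           # hypothetical food source
print(tsa_update(x, best))
```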


Table 3. Experimental results of extensive samples.

Datasets | Result | ETTAO | EPSA | SABO | SAO | EWWPA | YDSE | TSA | ETSA
Blood | Best | 0.316889 | 0.317249 | 0.310636 | 0.336686 | 0.320688 | 0.323835 | 0.319381 | 0.310442
 | Worst | 0.347076 | 0.3498 | 0.340483 | 0.360053 | 0.36186 | 0.363015 | 0.418541 | 0.340643
 | Mean | 0.329963 | 0.327813 | 0.324835 | 0.348525 | 0.338287 | 0.342114 | 0.359617 | 0.323607
 | Std | 0.008784 | 0.007876 | 0.007238 | 0.006777 | 0.013991 | 0.011051 | 0.039013 | 0.009500
 | Accuracy | 77.65 | 75.82 | 75.59 | 75.12 | 75.39 | 75.51 | 75.10 | 78.82
 | Rank | 2 | 3 | 4 | 7 | 6 | 5 | 8 | 1
Scale | Best | 0.182407 | 0.189806 | 0.172261 | 0.192530 | 0.245989 | 0.179612 | 0.173735 | 0.171205
 | Worst | 0.35483 | 0.447578 | 0.310832 | 0.430892 | 0.526096 | 0.449029 | 0.632089 | 0.282578
 | Mean | 0.252972 | 0.292112 | 0.239007 | 0.308002 | 0.34715 | 0.262735 | 0.427023 | 0.214141
 | Std | 0.045064 | 0.060025 | 0.042564 | 0.065262 | 0.06342 | 0.077169 | 0.120217 | 0.031758
 | Accuracy | 77.91 | 75.42 | 80.12 | 74.55 | 63.99 | 83.26 | 44.84 | 87.79
 | Rank | 4 | 5 | 3 | 6 | 7 | 2 | 8 | 1
Survival | Best | 0.367473 | 0.357848 | 0.368378 | 0.389332 | 0.37657 | 0.371807 | 0.397835 | 0.355298
 | Worst | 0.394822 | 0.395185 | 0.425968 | 0.419753 | 0.4137 | 0.458642 | 0.459638 | 0.404498
 | Mean | 0.383053 | 0.376636 | 0.395945 | 0.405153 | 0.401543 | 0.400975 | 0.408877 | 0.382667
 | Std | 0.007578 | 0.010251 | 0.015060 | 0.009175 | 0.009251 | 0.020896 | 0.019368 | 0.011602
 | Accuracy | 78.08 | 78.13 | 77.55 | 78.17 | 79.43 | 76.44 | 75.00 | 80.77
 | Rank | 5 | 4 | 6 | 3 | 2 | 7 | 8 | 1
Liver | Best | 0.428964 | 0.415928 | 0.412947 | 0.465486 | 0.455195 | 0.451656 | 0.436205 | 0.424381
 | Worst | 0.469874 | 0.475434 | 0.478378 | 0.485210 | 0.483358 | 0.485116 | 0.483295 | 0.462536
 | Mean | 0.455258 | 0.446192 | 0.452827 | 0.476293 | 0.473401 | 0.472646 | 0.459743 | 0.445463
 | Std | 0.012054 | 0.015585 | 0.014434 | 0.004854 | 0.008757 | 0.009367 | 0.012871 | 0.010707
 | Accuracy | 49.87 | 52.33 | 50.17 | 46.65 | 52.12 | 46.39 | 47.88 | 61.02
 | Rank | 5 | 2 | 4 | 7 | 3 | 8 | 6 | 1
Seeds | Best | 0.085245 | 0.166826 | 0.058547 | 0.100719 | 0.123138 | 0.071942 | 0.071920 | 0.069260
 | Worst | 0.177658 | 0.599621 | 0.359525 | 0.244604 | 0.467159 | 0.187050 | 0.558932 | 0.218142
 | Mean | 0.123783 | 0.335809 | 0.110513 | 0.183878 | 0.285071 | 0.128777 | 0.312577 | 0.106494
 | Std | 0.020289 | 0.122721 | 0.075121 | 0.038772 | 0.09158 | 0.029973 | 0.127991 | 0.034834
 | Accuracy | 79.37 | 60.07 | 81.34 | 75.35 | 60.70 | 79.79 | 32.25 | 91.55
 | Rank | 4 | 7 | 2 | 5 | 6 | 3 | 8 | 1
Wine | Best | 0.068164 | 0.190324 | 0.031128 | 0.170940 | 0.319797 | 0.094017 | 0.026969 | 0.044282
 | Worst | 0.34188 | 0.384862 | 0.236198 | 0.478632 | 0.64613 | 0.299145 | 0.475471 | 0.143103
 | Mean | 0.155598 | 0.291944 | 0.089794 | 0.286180 | 0.462896 | 0.172222 | 0.231765 | 0.072509
 | Std | 0.065642 | 0.059593 | 0.050365 | 0.073876 | 0.084683 | 0.047326 | 0.118520 | 0.025277
 | Accuracy | 80.66 | 68.19 | 84.02 | 71.23 | 68.85 | 78.69 | 46.97 | 93.44
 | Rank | 3 | 7 | 2 | 5 | 6 | 4 | 8 | 1
Iris | Best | 0.058682 | 0.060606 | 0.038514 | 0.090909 | 0.05618 | 0.040404 | 0.245448 | 0.040404
 | Worst | 0.269259 | 0.413874 | 0.283007 | 0.461786 | 0.487202 | 0.474747 | 0.589213 | 0.268810
 | Mean | 0.128795 | 0.20602 | 0.106206 | 0.228221 | 0.328889 | 0.117677 | 0.357722 | 0.112052
 | Std | 0.060784 | 0.096333 | 0.056131 | 0.105334 | 0.108598 | 0.096122 | 0.116241 | 0.069511
 | Accuracy | 82.84 | 60.29 | 72.06 | 70.00 | 57.98 | 78.73 | 55.33 | 98.04
 | Rank | 2 | 6 | 4 | 5 | 7 | 3 | 8 | 1
Statlog | Best | 0.234784 | 0.249537 | 0.180338 | 0.286014 | 0.286166 | 0.264045 | 0.211078 | 0.216368
 | Worst | 0.288495 | 0.301754 | 0.275562 | 0.421348 | 0.394453 | 0.376404 | 0.350271 | 0.273629
 | Mean | 0.265886 | 0.281697 | 0.212137 | 0.359834 | 0.327289 | 0.330609 | 0.294522 | 0.235184
 | Std | 0.014557 | 0.015801 | 0.022756 | 0.033623 | 0.031451 | 0.033907 | 0.048660 | 0.016115
 | Accuracy | 76.25 | 73.48 | 79.13 | 64.89 | 77.17 | 74.02 | 67.63 | 84.78
 | Rank | 4 | 6 | 2 | 8 | 3 | 5 | 7 | 1
XOR | Best | 0 | 0 | 0 | 6.0 × 10^−180 | 0 | 0 | 0 | 0
 | Worst | 0.5 | 0.5 | 0.500000 | 0.333463 | 0.424648 | 0.250000 | 0.416680 | 0.500000
 | Mean | 0.196727 | 0.2 | 0.171365 | 0.112394 | 0.168183 | 0.087376 | 0.174735 | 0.030541
 | Std | 0.237589 | 0.191943 | 0.231167 | 0.125022 | 0.140049 | 0.122168 | 0.156624 | 0.113236
 | Accuracy | 31.25 | 50.00 | 26.25 | 31.25 | 50.00 | 26.25 | 25.52 | 75
 | Rank | 4 | 3 | 5 | 4 | 2 | 5 | 6 | 1
Balloon | Best | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
 | Worst | 0.2 | 0.287952 | 0.100000 | 0 | 0.261272 | 0 | 0.240000 | 0
 | Mean | 0.01 | 0.044607 | 0.005000 | 0 | 0.100872 | 0 | 0.058667 | 0
 | Std | 0.044721 | 0.086795 | 0.022361 | 0 | 0.084157 | 0 | 0.089798 | 0
 | Accuracy | 56.00 | 49.50 | 70 | 100 | 54.00 | 100 | 82.00 | 100
 | Rank | 4 | 5 | 3 | 1 |  | 1 | 2 | 1
Cancer | Best | 0.052101 | 0.085435 | 0.046745 | 0.055092 | 0.057527 | 0.051753 | 0.048368 | 0.043406
 | Worst | 0.268777 | 0.265122 | 0.258809 | 0.117259 | 0.265255 | 0.080134 | 0.384836 | 0.061780
 | Mean | 0.076507 | 0.153148 | 0.095172 | 0.087726 | 0.135837 | 0.065609 | 0.178133 | 0.052799
 | Std | 0.04648 | 0.054871 | 0.059243 | 0.015037 | 0.07758 | 0.008885 | 0.101541 | 0.004563
 | Accuracy | 90.35 | 69.05 | 84.25 | 86.80 | 60.00 | 90.65 | 79.35 | 99
 | Rank | 3 | 7 | 5 | 4 | 8 | 2 | 6 | 1
Diabetes | Best | 0.328746 | 0.335416 | 0.327375 | 0.383635 | 0.351005 | 0.400417 | 0.331844 | 0.331600
 | Worst | 0.368271 | 0.398871 | 0.382611 | 0.463778 | 0.441733 | 0.491124 | 0.470493 | 0.368422
 | Mean | 0.349953 | 0.36666 | 0.351470 | 0.424394 | 0.398097 | 0.459869 | 0.400600 | 0.346061
 | Std | 0.011186 | 0.01839 | 0.016199 | 0.020728 | 0.027286 | 0.024276 | 0.029280 | 0.010700
 | Accuracy | 74.46 | 75.09 | 73.60 | 61.59 | 63.64 | 70.50 | 60.92 | 79.69
 | Rank | 3 | 2 | 4 | 7 | 6 | 5 | 8 | 1
Gene | Best | 0.158431 | 0.228571 | 0.526805 | 0.285714 | 0.335118 | 0.242857 | 0.270041 | 0.053199
 | Worst | 0.357143 | 0.362285 | 0.830130 | 0.400000 | 0.865535 | 0.357143 | 0.376766 | 0.300327
 | Mean | 0.259137 | 0.297735 | 0.779026 | 0.351833 | 0.386544 | 0.310714 | 0.318861 | 0.157217
 | Std | 0.065732 | 0.037348 | 0.066641 | 0.031232 | 0.114128 | 0.027761 | 0.032738 | 0.057520
 | Accuracy | 13.89 | 13.89 | 0 | 4.02 | 8.33 | 3.33 | 5.56 | 22.22
 | Rank | 2 | 2 | 7 | 5 | 3 | 6 | 4 | 1
Parkinson | Best | 0.081809 | 0.155039 | 0.114286 | 0.124031 | 0.15024 | 0.085271 | 0.076804 | 0.048982
 | Worst | 0.358967 | 0.27907 | 0.330902 | 0.232558 | 0.329904 | 0.224806 | 0.313069 | 0.128458
 | Mean | 0.158529 | 0.225688 | 0.247639 | 0.173051 | 0.204651 | 0.131432 | 0.146357 | 0.091221
 | Std | 0.070283 | 0.034611 | 0.089362 | 0.030517 | 0.040259 | 0.030430 | 0.063948 | 0.021407
 | Accuracy | 64.32 | 60.30 | 56.82 | 63.94 | 62.95 | 63.94 | 59.92 | 75.76
 | Rank | 2 | 5 | 7 | 3 | 4 | 3 | 6 | 1
Splice | Best | 0.305403 | 0.421545 | 0.878960 | 0.451515 | 0.4688 | 0.398485 | 0.371956 | 0.253672
 | Worst | 0.389753 | 0.49544 | 0.909569 | 0.650000 | 0.817532 | 0.601515 | 0.946633 | 0.356823
 | Mean | 0.356072 | 0.455426 | 0.896545 | 0.538283 | 0.525668 | 0.499394 | 0.728602 | 0.321266
 | Std | 0.026076 | 0.016927 | 0.007243 | 0.052382 | 0.096066 | 0.047601 | 0.182258 | 0.027700
 | Accuracy | 62.50 | 50.59 | 9.89 | 61.06 | 37.94 | 62.99 | 41.47 | 75
 | Rank | 3 | 5 | 8 | 4 | 7 | 2 | 6 | 1
WDBC | Best | 0.060753 | 0.113178 | 0.068487 | 0.114213 | 0.104215 | 0.083756 | 0.036240 | 0.042878
 | Worst | 0.187882 | 0.303816 | 0.501979 | 0.294786 | 0.301424 | 0.177665 | 0.399487 | 0.083286
 | Mean | 0.09893 | 0.220109 | 0.154498 | 0.172528 | 0.212006 | 0.117390 | 0.144009 | 0.064770
 | Std | 0.02861 | 0.045492 | 0.121842 | 0.038787 | 0.054668 | 0.023614 | 0.121344 | 0.010727
 | Accuracy | 89.36 | 83.94 | 89.67 | 86.64 | 80.79 | 87.70 | 79.18 | 95.15
 | Rank | 3 | 6 | 2 | 5 | 7 | 4 | 8 | 1
Zoo | Best | 0.164179 | 0.358209 | 0.108685 | 0.238806 | 0.471358 | 0.179104 | 0.082274 | 0.029851
 | Worst | 0.492537 | 0.722198 | 0.388060 | 0.506355 | 0.701371 | 0.432836 | 1.216725 | 0.194411
 | Mean | 0.287703 | 0.535868 | 0.195615 | 0.382816 | 0.58655 | 0.300746 | 0.572167 | 0.098595
 | Std | 0.081806 | 0.077119 | 0.075681 | 0.087923 | 0.062165 | 0.071543 | 0.318021 | 0.047253
 | Accuracy | 54.12 | 41.18 | 63.68 | 52.35 | 41.18 | 56.03 | 57.95 | 82.35
 | Rank | 5 | 7 | 2 | 6 | 7 | 4 | 3 | 1
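In Table 3, Best, Worst, Mean, and Std summarize the final training errors obtained over the independent runs of each trainer, Accuracy is the classification accuracy on the testing split, and Rank orders the eight trainers on each sample (rank 1 being best). A minimal sketch of how per-run results could be aggregated into these entries is shown below; the run count, the error values, and the use of the mean test accuracy are illustrative assumptions rather than details confirmed by this excerpt.

```python
# Minimal sketch: aggregating per-run results into the Best/Worst/Mean/Std and
# Accuracy entries reported in Table 3. `run_errors` and `run_accuracies` are
# hypothetical arrays holding one value per independent run of a single trainer.
import numpy as np

run_errors = np.array([0.3169, 0.3247, 0.3301, 0.3406, 0.3288])  # final error per run
run_accuracies = np.array([77.6, 78.0, 77.3, 76.9, 78.4])        # test accuracy (%) per run

summary = {
    "Best": run_errors.min(),
    "Worst": run_errors.max(),
    "Mean": run_errors.mean(),
    "Std": run_errors.std(ddof=1),   # sample standard deviation
    "Accuracy": run_accuracies.mean(),
}
print(summary)
```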


Table 4. Results of the p-value Wilcoxon rank-sum test.

Datasets | ETTAO | EPSA | SABO | SAO | EWWPA | YDSE | TSA
Blood | 2.23 × 10^−2 | 9.62 × 10^−6 | 4.47 × 10^−5 | 1.43 × 10^−7 | 9.21 × 10^−4 | 3.29 × 10^−5 | 1.61 × 10^−4
Scale | 4.32 × 10^−3 | 1.81 × 10^−5 | 3.85 × 10^−2 | 1.60 × 10^−5 | 1.43 × 10^−7 | 2.07 × 10^−2 | 1.38 × 10^−6
Survival | 6.55 × 10^−3 | 1.08 × 10^−2 | 7.11 × 10^−3 | 3.50 × 10^−6 | 2.04 × 10^−5 | 1.23 × 10^−3 | 5.87 × 10^−6
Liver | 1.23 × 10^−2 | 7.35 × 10^−5 | 4.67 × 10^−2 | 6.80 × 10^−8 | 2.56 × 10^−7 | 4.54 × 10^−7 | 9.21 × 10^−4
Seeds | 8.36 × 10^−3 | 1.06 × 10^−7 | 1.98 × 10^−2 | 2.04 × 10^−5 | 2.22 × 10^−7 | 1.79 × 10^−2 | 9.75 × 10^−6
Wine | 4.54 × 10^−6 | 6.80 × 10^−8 | 4.40 × 10^−5 | 6.80 × 10^−8 | 6.80 × 10^−8 | 2.18 × 10^−7 | 1.29 × 10^−4
Iris | 1.81 × 10^−2 | 1.35 × 10^−3 | 9.46 × 10^−8 | 1.61 × 10^−4 | 3.07 × 10^−6 | 9.02 × 10^−9 | 2.21 × 10^−7
Statlog | 5.87 × 10^−6 | 3.42 × 10^−7 | 1.79 × 10^−4 | 6.80 × 10^−8 | 6.80 × 10^−8 | 9.10 × 10^−8 | 1.22 × 10^−3
XOR | 2.74 × 10^−2 | 2.45 × 10^−3 | 7.64 × 10^−4 | 2.78 × 10^−6 | 3.55 × 10^−5 | 2.16 × 10^−2 | 5.65 × 10^−6
Balloon | 3.42 × 10^−2 | 2.08 × 10^−3 | 3.42 × 10^−2 | N/A | 2.99 × 10^−8 | N/A | 2.09 × 10^−3
Cancer | 2.36 × 10^−6 | 6.80 × 10^−8 | 1.60 × 10^−5 | 2.96 × 10^−7 | 1.06 × 10^−7 | 9.67 × 10^−6 | 4.68 × 10^−5
Diabetes | 3.23 × 10^−2 | 4.16 × 10^−4 | 3.94 × 10^−3 | 6.80 × 10^−8 | 2.22 × 10^−7 | 6.78 × 10^−8 | 1.05 × 10^−6
Gene | 5.25 × 10^−5 | 4.54 × 10^−7 | 6.61 × 10^−8 | 8.89 × 10^−8 | 6.80 × 10^−8 | 2.40 × 10^−7 | 1.92 × 10^−7
Parkinson | 9.28 × 10^−5 | 6.80 × 10^−8 | 1.66 × 10^−7 | 1.22 × 10^−7 | 6.80 × 10^−8 | 2.00 × 10^−5 | 6.87 × 10^−4
Splice | 4.16 × 10^−4 | 6.80 × 10^−8 | 6.80 × 10^−8 | 6.80 × 10^−8 | 6.80 × 10^−8 | 6.79 × 10^−8 | 6.80 × 10^−8
WDBC | 7.58 × 10^−6 | 6.80 × 10^−8 | 3.42 × 10^−7 | 6.76 × 10^−8 | 6.80 × 10^−8 | 6.78 × 10^−8 | 7.76 × 10^−8
Zoo | 9.10 × 10^−8 | 6.75 × 10^−8 | 1.41 × 10^−5 | 6.75 × 10^−8 | 6.79 × 10^−8 | 9.68 × 10^−8 | 1.20 × 10^−6
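Table 4 lists the p-values of pairwise Wilcoxon rank-sum tests between the proposed ETSA and each competitor; values below the usual 0.05 threshold indicate a statistically significant difference, and N/A appears for the Balloon comparisons where the competitor matched ETSA's zero error in every run, leaving no differences to rank. A minimal sketch of one such comparison with SciPy follows; the per-run samples and the run count of 20 are illustrative assumptions.

```python
# Minimal sketch: pairwise Wilcoxon rank-sum test between ETSA and one competitor.
# `etsa_errors` and `rival_errors` are hypothetical per-run error samples.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
etsa_errors = rng.normal(0.32, 0.01, 20)   # assumed 20 independent runs
rival_errors = rng.normal(0.35, 0.01, 20)

stat, p_value = ranksums(etsa_errors, rival_errors)
significant = p_value < 0.05  # reject the null hypothesis of equal medians
print(f"p = {p_value:.2e}, significant difference: {significant}")
```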

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
