A fuzzy logic controller applied to a diversity-based multi-objective evolutionary algorithm for single-objective optimisation

In recent years, Multi-Objective Evolutionary Algorithms (moeas) that consider diversity as an objective have been used to tackle single-objective optimisation problems. The ability to deal with premature convergence has been greatly improved with these schemes. However, they usually increase the number of free parameters that need to be tuned. To improve results and avoid the tedious hand-tuning of algorithms, the use of automated parameter control approaches that adapt parameter values during the course of an evolutionary run is becoming more common in the field of Evolutionary Computation (ec). This research focuses on the application of parameter control approaches to diversity-based moeas. Two external parameter control methods are investigated: a novel method based on Fuzzy Logic and a recently proposed Hyper-heuristic. These are compared to an internal control method that uses self-adaptation. An extensive comparison of the three methods is carried out using a set of single-objective benchmark problems of diverse complexity. Analyses include comparisons to a wide range of schemes with fixed parameters and to a single-objective approach. The results show that the fuzzy logic and hyper-heuristic methods are able to find similar or better solutions than the fixed-parameter methods for a significant number of problems, with considerable savings in computational resources and time, whereas the self-adaptive strategy provides little benefit. Finally, we also demonstrate that the controlled diversity-based moea outperforms the single-objective scheme in most cases, thus showing the benefits of solving single-objective problems through diversity-based multi-objective schemes.


Introduction
Many real-world problems require the application of optimisation strategies. Several exact approaches have been designed to deal with optimisation problems. However, exact methods are generally not affordable for many real-world applications, meaning that a wide variety of approximation algorithms have been developed in an effort to obtain good-quality solutions in a limited amount of time. Metaheuristics are a family of approximation techniques that have become popular for solving optimisation problems (Glover and Kochenberger 2003). They are high-level strategies that guide a set of heuristics in the search for an optimum.
Among them, Evolutionary Algorithms (eas) (Eiben and Smith 2003) are one of the most popular strategies. These population-based algorithms draw their inspiration from biological evolution.
eas have shown great promise for calculating solutions to large and difficult optimisation problems. However, in some problems, eas exhibit a tendency to converge towards local optima, with the likelihood of this occurrence depending on the shape of the fitness landscape (Caamaño et al. 2010). Several methods have been designed with the aim of dealing with local optima stagnation. The reader is referred to Črepinšek et al. (2013) for an extensive survey of diversity preservation mechanisms. One of the strategies that has acquired some popularity in recent years relies on using multi-objective approaches to solve single-objective optimisation problems (Segura et al. 2013a). Several guidelines for solving single-objective optimisation problems using multi-objective methods have been proposed in the last decades, with diversity-based moeas being one of the most promising schemes (Abbass and Deb 2003). In this type of scheme, a set of objectives is calculated for each individual. The first one is the original objective associated with the problem being solved. The remaining objectives-most proposals consider only one additional objective-are measures of the diversity. Some definitions of the auxiliary objectives require specifying additional parameters (Segura et al. 2013b). These parameters have to be tuned in order to improve the solutions obtained. However, Segura et al. (2013b) have pointed out that suitable values for these parameters could depend on the problem to be solved and/or on the stage of the optimisation procedure.
In order to define the configuration for an ea, several components and/or parameters, such as the survivor selection mechanism and the variation and parent selection operators, must be specified. In general, the performance of an ea, and consequently the quality of the solutions obtained, is highly dependent on these components and parameters. As a result, it is essential that the parameters of an ea be properly set. This task, however, remains one of the persistent grand challenges in Evolutionary Computation (ec) (Eiben and Smit 2011).
Parameter setting strategies are commonly divided into two categories: parameter tuning and parameter control. In parameter tuning, the objective is to identify the best set of values for the parameters of a given ea, with the ea then being executed using these values, which remain fixed for the complete run. In contrast, the aim of parameter control is to design control strategies that select the most suitable values for the parameters at each stage of the search process while the algorithm is executed. In single-objective optimisation, it has been empirically and theoretically demonstrated that different values for the parameters might be optimal at different stages of the optimisation process (Srinivas and Patnaik 1994; Bäck 1992). In these cases, it seems more appropriate to apply parameter control strategies that enable the parameter values to adapt or change during the course of an ea run. Therefore, it is natural to apply parameter control methods to moeas, and particularly, to diversity-based moeas.
In this paper, we consider novel parameter control strategies that can be combined with diversity-based moeas and apply them to a set of well-known single-objective benchmark problems. The parameter control strategies are in charge of controlling the additional parameter added in the auxiliary objective definition. We consider external and internal methods of parameter control. In relation to the former, we develop a novel method of parameter control based on Fuzzy Logic and compare it to a Hyper-heuristic control method proposed by the authors in Segura et al. (2010). The external control algorithms are also compared to a number of variations on a method of internal control in which the parameter to be adapted is incorporated into the chromosome that encodes the candidate solution, resulting in self-adaptation through evolution.
The aim of this research is not to design a complete state-of-the-art ea for continuous optimisation, nor to compare our adaptive diversity-based moea to other highly efficient eas or meta-heuristics specifically designed for continuous optimisation, which usually incorporate mechanisms to improve the solutions found. The objective is, on the one hand, to compare our proposed control method based on Fuzzy Logic Controllers (flcs) against a well-established control scheme based on hyper-heuristics and against self-adaptation, and on the other hand, to show the benefits of parameter control versus parameter tuning. As a result, the contributions of this paper are as follows:
-A novel external parameter control method based on a fuzzy logic controller.
-Novel self-adaptive schemes to control the parameters of a diversity-based moea.
-First application of parameter control techniques based on fuzzy logic controllers and self-adaptation to a diversity-based moea in which the parameters of the auxiliary objective function are adapted.
-An extensive comparison of external versus internal methods of parameter control for diversity-based moeas.
-A comparison of parameter control methods to fixed parameters that highlights the benefit of parameter control as opposed to parameter tuning.
-A comparison between the adaptive diversity-based moea and a single-objective ea that shows the advantages of using diversity-based multi-objective approaches to solve single-objective problems.
The paper is organised as follows. In Sect. 2, an overview of the state of the art in parameter control in eas is given. Section 3 provides some background on fuzzy logic controllers and hyper-heuristics, which we propose as parameter control methods. Section 4 presents the diversity-based moea applied herein and covers some background on related schemes. The proposed parameter control methods are explained in Sect. 5, followed by a detailed analysis of the experimental results in Sect. 6. Finally, the conclusions and future lines of work are given in Sect. 7.

State of the art of parameter control in evolutionary algorithms
Finding the most suitable configuration of an ea is one of the most challenging tasks in the field of Evolutionary Computation (ec) (Eiben and Smith 2003). In order to completely define an instance of an ea, two types of information are required (Bartz-Beielstein et al. 2010; Maturana et al. 2009):
-Symbolic-also referred to as qualitative, categoric or structure parameters-such as the crossover, mutation and selection operators.
-Numeric-also referred to as quantitative or behavioural parameters-such as the population size and the crossover and mutation rates.
For both kinds of parameters, the different elements of the domain are known as parameter values, and a parameter is instantiated by assigning it a value. The main difference between the two types of parameters lies in the size of their respective domains. Symbolic parameters, such as the crossover operator, have a finite domain in which no order is established and no distance metric is defined. In contrast, numeric parameters, such as the mutation rate, have an infinite domain in which a distance metric and an order can be defined over the values. Thus, optimisation and search methods can readily be used to look for appropriate values of the numeric parameters of an ea. However, in the case of symbolic parameters, as noted above, distance metrics cannot be applied between two values, and therefore, optimisation schemes cannot profit from such metrics when setting these parameters. In this paper, we focus on control methods for numeric parameters.
The goal of parameter control is to design a control strategy that selects the most suitable parameter values to use at every stage of the search process. The ideas of parameter control were first incorporated in early research into eas (Davis 1989; Rechenberg 1973). Recent research, however, has seen a marked increase in proposals for methods to achieve parameter control in eas (Lobo et al. 2007). In fact, control methods have been successfully applied to a wide range of eas, such as Evolution Strategies (es) (Kramer 2010), Genetic Algorithms (gas) (Fialho 2010), and Differential Evolution (de) (Qin et al. 2009), among others. In order to classify parameter control approaches, several taxonomies have been proposed. One of the most popular classifications (Eiben et al. 2007) groups the mechanisms according to various criteria. According to the manner in which parameter values are changed, control strategies can be classified as deterministic, adaptive, or self-adaptive. Moreover, a change can affect a gene, an individual, the whole population, another component, or even the evaluation function. Thus, another classification can be carried out that takes into consideration the scope or level affected by the change.
Finally, we should note that a wide variety of approaches can be found in the literature, though most research on parameter control is focused on the parameters of a 'standard' ea, i.e. the variation operators, such as the mutation and crossover operators, the population size, or combinations of all three (Eiben et al. 2007; Bäck et al. 2000). In this paper, we describe the application of control techniques to parameters that adapt the behaviour of the auxiliary objective function in a diversity-based moea.

Techniques for parameter control: background
In this section, we provide background information on two techniques that can be used to implement adaptive parameter control, before describing our novel implementation of both in a later section.

Fuzzy logic controllers
In recent years, our knowledge of the performance of eas has significantly increased thanks to the large number of empirical analyses conducted over a wide range of applications in different areas. It would be desirable to profit from this human knowledge by encapsulating it within an algorithm to automate the task of improving the behaviour and performance of eas. However, this sort of knowledge is usually incomplete, imprecise, and/or not well organised. Consequently, the application of fuzzy logic-based methods would seem to offer a promising approach for dealing with this kind of knowledge.
One application of fuzzy logic is the design of Fuzzy Logic Controllers (flcs). flcs can be used to define control approaches in which the incorporation of human knowledge is performed intuitively. As was stated by Herrera and Lozano (2003), an flc consists of the knowledge base, the fuzzy inference engine, and the fuzzification and defuzzification interfaces. The knowledge base is composed of two different parts: a data base, which includes the definitions of the membership functions of the linguistic terms for each input and output variable, and a rule base constituted by the collection of fuzzy control rules representing human knowledge.
The main benefit of using flcs to adapt the parameters of an ea is that the possible values that can be assigned to a certain parameter are infinite, in contrast to other techniques that can only use a finite number of values. However, the main drawback is that flcs cannot be directly applied to control the symbolic parameters of an ea.
A considerable body of research on flcs and eas already exists (Fazzolari et al. 2013; Herrera and Lozano 2003). For example, different eas have been used to optimise the design of flcs for different applications (Fazzolari et al. 2013; Rui et al. 2010; Lau et al. 2009; Herrera 2008). In this paper, however, we study the reverse of this type of application and focus on the design of flcs that adapt the parameters of an ea, thus providing an adaptive control technique that utilises feedback from the search process. Several methods have been proposed for controlling the parameters of an ea by using an flc (Herrera and Lozano 2003). The main idea is to use an flc to calculate new parameter values by taking into consideration some combination of performance metrics and current parameter values as the input to the controllers. Some of the best-known variants of eas that use flcs to adapt their parameters include gas (Varnamkhasti and Lee 2012; Yao et al. 2012; Liu and Liu 2011; Im and Lee 2008; Herrera and Lozano 2001). Some of these schemes are described in what follows.
In Varnamkhasti and Lee (2012), an flc is used to control several parameters of a ga, which is employed to solve multi-dimensional knapsack problems. In particular, the aim is to avoid premature convergence by controlling the population diversity through the adjustment of the crossover and mutation operators, as well as their rates of application. In Yao et al. (2012), an flc is proposed to control the crossover and mutation rates of a ga in order to mitigate the aforementioned convergence issue. Another scheme based on the usage of flcs was introduced by Im and Lee (2008), in which the parameters of the mutation, crossover, and parent selection operators of a ga are adapted. Finally, in Herrera and Lozano (2001), a separate ga is used as an automatic learning mechanism to generate the rule bases belonging to an flc, which is responsible for adapting the genetic operators of another ga. Thus, each ga exerts an influence on the other, adapting the genetic operators through coevolution.
Some novel variants of eas, such as de, have also been combined with flcs to control their internal parameters. For instance, Liu and Lampinen (2005) proposed two flcs to adapt the mutation scale factor and the crossover rate of a de approach.
The feature common to most of the research described in the literature is that the flcs are designed to adapt the parameters of the mutation or crossover operators, the population size, or combinations of all three (Herrera and Lozano 2003). In addition, flcs are usually tailor-made for a specific ea and/or specific parameters, and they only make use of a single rule base. The main novelties of the flc proposed herein are therefore the following:
-The approach is general in that it can be used to adapt different numeric parameters of different eas.
-The system proposed contains multiple rule bases. A rule base is enabled at a certain time depending on historical information extracted from the optimisation process. These historical data are used to guide the adjustment of the parameter being considered.
-It is the first time that an flc has been used to control the parameters of the auxiliary objective in a diversity-based moea.

Hyper-heuristics
Hyper-heuristics can be defined as search methods or learning mechanisms for selecting or generating heuristics to solve computational search problems (Burke et al. 2010).
Hyper-heuristics based on heuristic selection try to identify and select the most promising heuristics or meta-heuristics-from a set of candidates-to solve a particular instance of a problem. Alternatively, hyper-heuristics based on heuristic generation aim to generate heuristics automatically in order to solve a particular instance of a problem. In addition, hyper-heuristics can be further classified as online or offline: the former select or generate heuristics while solving an instance of a problem, whereas the latter typically learn, during a training phase, a mapping from problem characteristics to heuristics that can then be applied to new instances. In this paper, we consider only online selection hyper-heuristics. Thus, a hyper-heuristic can be viewed as a method that iteratively chooses from a set of candidate low-level heuristics or meta-heuristics in order to solve an optimisation problem (Burke et al. 2003). The hyper-heuristic learns to make these choices while solving the optimisation problem at hand. Hyper-heuristics operate at a higher level of abstraction than traditional heuristics because they have no knowledge of the problem domain. The motivation behind the approach is that, ideally, once a hyper-heuristic is designed, several optimisation problems and/or instances of a problem might be addressed by simply replacing the set of low-level heuristics or meta-heuristics. As a result, the aim of using a hyper-heuristic is to raise the level of generality at which the majority of current heuristic approaches operate (Burke et al. 2003).
Hyper-heuristics are closely related to parameter control approaches (Smit and Eiben 2009). For instance, the candidate low-level approaches might represent different configurations of the same meta-heuristic with variations in the parameters being controlled. The hyper-heuristic would then select the configuration with the most appropriate set of parameters at each point in the search. In fact, hyper-heuristics can be further classified as adaptive parameter control techniques if they receive some kind of feedback from the search process.
Hyper-heuristics are independent of the methods adapted, and therefore, they can be designed to control a wide range of approaches. In those cases where the best-performing configuration of the same meta-heuristic varies depending on the current stage of the optimisation process, hyper-heuristics can be used to select the most suitable configuration for each stage. Thus, it seems reasonable to expect the results obtained by the hyper-heuristic to be better than those obtained by any of the candidate low-level configurations executed independently. Furthermore, the use of a hyper-heuristic permits low-level configurations to have variations in both their numeric and symbolic parameters, thus providing a straightforward mechanism for symbolic parameter control. However, the main drawback of the hyper-heuristic approach is the need to specify the set of candidate low-level configurations. Moreover, since the size of the set of candidate low-level approaches is generally fixed and finite, in the case of controlling numeric parameters, the number of possible values that can be assigned to those parameters is also finite. Despite this, hyper-heuristics have successfully been applied as adaptive parameter control techniques, both to benchmark problems (Ren et al. 2012) and to real-world applications (Segura et al. 2013c).

Diversity-based multi-objective evolutionary algorithms
Multi-objective methods have been proposed with the aim of optimising several objective functions simultaneously. Moreover, using multi-objective methods to ensure proper diversity when solving single-objective problems is a promising approach, since the use of diversity metrics to define additional objectives might provide a proper balance between exploration and exploitation. For this reason, several studies have analysed the use of moeas to promote diversity maintenance in single-objective optimisation. As previously mentioned, these schemes are based on defining a new set of objectives that provide measures of the diversity. Several options have been proposed to define the auxiliary objectives (Greiner et al. 2007; Bui et al. 2005; Toffolo and Benini 2003). In fact, Segura et al. (2013a) provide a taxonomy to classify the different proposals.
In this work, we operate with genotypic measures that consider the values of the genes in order to define the auxiliary objectives. One of the most popular auxiliary objectives was proposed by Toffolo and Benini (2003). Specifically, it is calculated as the mean Euclidean distance in the genotypic space to the remaining individuals in the population and it is called the Average Distance to all Individuals (adi). Based on these ideas, two new auxiliary objectives were defined by Bui et al. (2005). These are the Distance to the Closest Neighbour (dcn) and the Distance to the Best Individual (dbi) functions. Note that all of the above auxiliary objectives have to be maximised.
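The three genotypic auxiliary objectives above are all derived from pairwise Euclidean distances. The following sketch (illustrative only, with individuals represented as plain lists of real-valued genes) shows how each can be computed:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two individuals in the genotypic space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def adi(ind, population):
    """Average Distance to all Individuals (Toffolo and Benini 2003)."""
    others = [p for p in population if p is not ind]
    return sum(euclidean(ind, p) for p in others) / len(others)

def dcn(ind, population):
    """Distance to the Closest Neighbour (Bui et al. 2005)."""
    return min(euclidean(ind, p) for p in population if p is not ind)

def dbi(ind, best):
    """Distance to the Best Individual (Bui et al. 2005)."""
    return euclidean(ind, best)
```

All three values are to be maximised when used as the auxiliary objective.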
An extension of the dcn scheme, called dcn-thr, was proposed in Segura et al. (2013b). It attempts to limit the survival of very low-quality individuals by using a threshold ratio th ∈ [0, 1], which has to be specified by the user. A threshold v is used to penalise individuals that have low quality with respect to the original objective function by assigning them an auxiliary objective value equal to 0. If bestObjectiveValue is the original objective value of the best individual in the population, and shift is a value that ensures that bestObjectiveValue − shift ≥ 0 throughout the entire optimisation procedure, then the threshold value v-for a minimisation problem-is defined by Eq. (1) in terms of th, bestObjectiveValue and shift. After v is calculated, all individuals whose original objective value is higher than v-for a minimisation problem-have the value of their auxiliary objective set to 0. For the remaining individuals, their auxiliary objective is calculated as the dcn, i.e. the Euclidean distance in the genotypic space to the closest individual. Consequently, individuals that are not able to achieve the specified threshold are penalised. In the special case where th = 0, Eq. (1) does not hold, and therefore, individuals are never penalised. Thus, dcn-thr with th = 0 behaves like the dcn approach.
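The dcn-thr mechanism can be sketched as follows. Note that Eq. (1) is not reproduced in this text, so the concrete threshold formula used below is an assumption chosen only to match the described behaviour (th = 0 never penalises, larger th is stricter); the authoritative definition is the one in Segura et al. (2013b):

```python
import math

def dcn_thr(population, objective, th, shift):
    """DCN-THR auxiliary objective values (to be maximised) for a minimisation
    problem. ASSUMPTION: the threshold is taken here as
    v = shift + (bestObjectiveValue - shift) / th, which reproduces the
    described behaviour but is not the verbatim Eq. (1)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def dcn(ind):
        return min(dist(ind, p) for p in population if p is not ind)

    best = min(objective(ind) for ind in population)
    aux = []
    for ind in population:
        if th > 0 and objective(ind) > shift + (best - shift) / th:
            aux.append(0.0)       # below the quality threshold: penalise
        else:
            aux.append(dcn(ind))  # otherwise, plain DCN
    return aux
```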
The dcn-thr approach was selected based on previous research by the authors described in Segura et al. (2013b), in which it was shown that the incorporation of the threshold ratio th provided significant benefits with respect to those schemes that did not make use of it. However, the main drawback of the approach is that the most appropriate value for this parameter depended on the problem being solved. In addition, it was further suggested by Segura et al. (2013b) that the best value of th is also dependent on the stage of the optimisation process, and consequently th should be varied over the course of a run. Therefore, the application of parameter control techniques to automatically adapt this parameter ought to significantly improve both the behaviour and the robustness of the diversity-based moea. Let us consider this idea in detail.
In general, any moea can be used in combination with the dcn-thr auxiliary objective. There exist a large number of moeas described in the literature that have shown good performance (Zhou et al. 2011). The overall results obtained might be greatly improved upon by carefully analysing the performance of different moeas in combination with the parameter control strategies described herein. However, such a study is beyond the scope of this research. Instead, and considering the popularity of the Non-dominated Sorting Genetic Algorithm II (nsga-ii) (Deb et al. 2002), we have decided to apply parameter control methods to this algorithm. For a deeper insight into the behaviour of the nsga-ii when combined with the dcn-thr scheme, the reader is referred to Segura et al. (2013b).
In order to complete the definition of the diversity-based moea, some traditional components are used. In particular, the parent selection mechanism is the well-known Binary Tournament (Eiben and Smith 2003), while the variation operators are the Uniform Mutation (um) (Eiben and Smith 2003) and the Simulated Binary Crossover (sbx) (Deb and Agrawal 1995). The mutation and crossover operators are applied with probabilities p_m and p_c, respectively.
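For reference, the two variation operators can be sketched as follows. This is the basic unbounded form of sbx from Deb and Agrawal (1995); the distribution index eta and the bounds handling are illustrative defaults, not the exact settings used in the experiments:

```python
import random

def sbx(p1, p2, eta=20.0):
    """Simulated Binary Crossover: produces two children whose mean equals
    the parents' mean, with spread controlled by the distribution index eta."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2

def uniform_mutation(ind, low, high, pm):
    """Uniform Mutation: each gene is resampled in [low, high] with probability pm."""
    return [random.uniform(low, high) if random.random() < pm else g for g in ind]
```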

Parameter control methods
In this section, we describe in detail the three parameter control approaches that are evaluated in later sections.The first two approaches-fuzzy logic controllers and hyperheuristics-provide an external control mechanism for altering parameter th during the course of a run.The final approach uses self-adaptation by incorporating the parameter into the chromosome that defines the problem solution.

Fuzzy logic controllers
This section describes a novel flc introduced by the authors to control the parameter th in the dcn-thr approach. Its main novelty lies in the incorporation of a set of different rule bases that are enabled depending on historical information extracted from the optimisation process. These historical data are used to select the most suitable rule base at each step; the complete procedure is outlined in Algorithm 1. Firstly, the initialisation and learning stages-lines 1-4-are carried out. During the initialisation stage, different sample values are generated for the parameter th, distributing them uniformly within the range [0, 1]. In order to generate them, a value Δ is considered as the difference between two consecutive samples. Although Δ might be considered as a parameter of the flc, it is assigned a fixed value regardless of the optimisation problem. Then, in the learning stage, the diversity-based moea explained in Sect. 4 is executed for numGen generations for each of the generated samples in order to gather sufficient information. Once these two stages are complete, the flc infers the change to be applied to the parameter th-lines 6-11-taking into account the values of the input variables and the rule base selected. Then, the diversity-based moea is executed for numGen generations-line 12-with the new value of th. Finally, this process is repeated until the global stopping criterion of the diversity-based moea is satisfied.
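The outer control loop just described can be sketched as follows. The moea and flc interfaces used here (run, metrics, budget_exhausted, infer) are purely illustrative placeholders standing in for the lines of Algorithm 1, not part of the original formulation:

```python
def flc_control_loop(moea, flc, num_gen, delta, budget):
    """Sketch of the control loop of Algorithm 1 under assumed interfaces:
    moea.run(th, n) executes n generations with threshold ratio th and
    returns feedback such as {"improvement": ...}; flc.infer(...) returns
    the change to apply to th."""
    # Initialisation stage: uniform samples of th in [0, 1], spaced by delta
    samples = [i * delta for i in range(int(round(1.0 / delta)) + 1)]
    history = []
    for th in samples:                      # learning stage (lines 1-4)
        history.append((th, moea.run(th, num_gen)))
    # Start from the sample that achieved the largest improvement
    th = max(history, key=lambda h: h[1]["improvement"])[0]
    while not moea.budget_exhausted(budget):
        th += flc.infer(moea.metrics(), th, history)  # inference (lines 6-11)
        th = min(1.0, max(0.0, th))                   # keep th within [0, 1]
        history.append((th, moea.run(th, num_gen)))   # execution (line 12)
```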
Note that Mamdani's fuzzy inference method is used for the fuzzy inference process-lines 9-11. In addition, the fuzzy logic operator and uses the minimum t-norm, the implication method uses the minimum t-norm, the aggregation method applies the maximum s-norm, and the centroid algorithm is applied as the defuzzification method. All of these components were selected because they are usually implemented together with Mamdani flcs. It is important to note that zero-order Takagi-Sugeno-Kang (tsk) flcs-where the linguistic terms of the output variables are described using a zero-order (constant) function instead of membership functions-were also implemented. These flcs use the weighted average as the defuzzification method. Furthermore, they do not require the use of an aggregation method. The remaining components of the fuzzy inference process were the same as those applied in the Mamdani flcs described herein. The differences between Mamdani and tsk flcs, however, were not statistically significant. Consequently, only Mamdani flcs are considered in this paper.
The input variables of the flc are the following:
-imp. Calculated as the improvement in the original objective value of the best individual achieved by the diversity-based moea-line 12 of Algorithm 1-during the latest numGen generations. This input variable is normalised in order to delimit it to the range [0, 1].
-var. A measure of the diversity of the population. The higher its value, the more diverse the population. The calculation of this input variable with no normalisation is shown in Eq. (2). The values of the decision variable i of individuals j and k are given by x_j[i] and x_k[i]. The total number of decision variables is represented by D, and N is the population size. The value of var* is normalised to enclose the variable var in the range [0, 1].
-th-in. Defined as the current value of parameter th, within the range [0, 1].
-best-th-in. Defined as the value of parameter th that has attained the maximum improvement in the original objective value, considering the last k values of the parameter th inferred by the flc. Its value is also in the range [0, 1].
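The two feedback variables can be sketched as below. Since Eq. (2) is not reproduced in this text, the var* measure shown is an assumption: a standard genotypic diversity measure consistent with the description (pairwise differences over the decision variables, higher meaning more diverse):

```python
def var_measure(population):
    """Unnormalised diversity var*. ASSUMED form (Eq. (2) is not reproduced
    here): the mean over all pairs (j, k) and decision variables i of
    |x_j[i] - x_k[i]|. Higher values indicate a more diverse population."""
    n, d = len(population), len(population[0])
    total = sum(abs(population[j][i] - population[k][i])
                for j in range(n) for k in range(j + 1, n) for i in range(d))
    return total / (d * n * (n - 1) / 2.0)

def imp_measure(prev_best, new_best, scale):
    """Normalised improvement of the best original objective value over the
    latest numGen generations (minimisation); the normalisation constant
    'scale' is an assumption."""
    return min(1.0, max(0.0, prev_best - new_best) / scale)
```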
Two different versions of the flc are applied. The first one is named fuzzy-a and makes use of the input variables imp, var, and th-in. The second one utilises the input variables imp, th-in, and best-th-in, and it is called fuzzy-b. For both variants of the flc, only one output variable is defined, called th-out, which represents the increment or decrement to be applied to the parameter th in order to change its value. The membership functions for both the input and output variables are shown in Fig. 1. Due to the computational simplicity and efficiency advantages they offer, triangular-shaped membership functions were selected for the input and output variables. The linguistic terms represented by the membership functions-from left to right in Fig. 1-are as follows:
-Input variables imp, var, and best-th-in: low (l), med (m), and high (h).
-Input variable th-in: low (l), low-med-b (lmb), low-med-a (lma), med (m), med-high-a (mha), med-high-b (mhb), and high (h).
-Output variable th-out: neg-giant (ng), neg-huge (nu), neg-high (nh), neg-med (nm), neg-low (nl), zero (z), pos-low (pl), pos-med (pm), pos-high (ph), pos-huge (pu), and pos-giant (pg).
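The computational simplicity mentioned above comes from the fact that a triangular membership function needs only two comparisons and one division. A minimal sketch follows; the concrete breakpoints in the three-term partition are illustrative assumptions, not the actual shapes of Fig. 1:

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def imp_terms(x):
    """Illustrative low/med/high partition for a variable in [0, 1], such as
    imp (the breakpoints are assumptions, not taken from Fig. 1)."""
    return {"l": tri_mf(x, -0.5, 0.0, 0.5),
            "m": tri_mf(x, 0.0, 0.5, 1.0),
            "h": tri_mf(x, 0.5, 1.0, 1.5)}
```

With this evenly spaced layout, the membership degrees of any point in [0, 1] sum to one, which keeps the fuzzification step well behaved.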
For each flc, several rule bases are defined. The reason for the use of several rule bases is that different fuzzy rules will be applicable depending on the behaviour exhibited during the previous execution. For instance, if the best results were historically obtained with high values of the parameter th, the selected rule base should promote the use of said high values. Every rule base is composed of different if-then fuzzy rules. The left-hand side of Table 1 shows one of the rule bases defined for the fuzzy-a approach, while the right-hand side shows another one for the fuzzy-b scheme. Only the logic operator and is used in the antecedents of said fuzzy rules. In general, every fuzzy rule considers three input variables and one output variable. In cases where a '-' is shown, the corresponding fuzzy rule has no dependency on the corresponding variable. The remaining rule bases are not shown due to space constraints, but are similar to those shown herein.
In order to select the most suitable rule base, we propose a novel scoring function. It uses a weighted average that considers historical data on both the improvement in the original objective value and the degrees of membership of parameter th to each term defined for the input variable th-in.
Equation (3) assigns a score to each linguistic term i ∈ [0, numTerms − 1]. After every execution of the optimisation scheme-line 12 of Algorithm 1-the improvement achieved is entered into those vectors γ[i] for which the degree of membership of parameter th to the linguistic term i is different from zero. In the same way, said degree of membership is entered into vector δ[i]. Hence, d denotes the number of items in the vectors δ[i] and γ[i]. Additionally, the value of k is defined as the amount of historical knowledge considered by the flc, i.e. for each linguistic term, information on the last k improvements achieved is considered. Specifically, for each linguistic term, the equation represents a weighted average of its improvements, in which greater importance is given to the most recent executions whose values of the controlled parameter have a high degree of membership to the corresponding linguistic term. Thus, the linguistic term i will be assigned a higher score if the values of parameter th have larger degrees of membership to said linguistic term and, at the same time, those values of parameter th achieve better improvements in the original objective.
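Since Eq. (3) itself is not reproduced in this text, the sketch below implements an assumed form that matches the description: for each linguistic term, a weighted average of the last k recorded improvements, where each weight combines the recorded membership degree with a simple linear recency factor (the exact weighting of the published equation may differ):

```python
def term_score(gamma_i, delta_i, k):
    """Score for linguistic term i from its improvement history gamma_i and
    membership-degree history delta_i (oldest entries first). Only the last
    k entries are considered. ASSUMPTION: weights are membership degree
    times a linear recency factor, as an illustration of Eq. (3)."""
    gamma_i, delta_i = gamma_i[-k:], delta_i[-k:]
    d = len(gamma_i)
    num = sum((j + 1) * delta_i[j] * gamma_i[j] for j in range(d))
    den = sum((j + 1) * delta_i[j] for j in range(d))
    return num / den if den > 0.0 else 0.0
```

The enabled rule base is then simply the one whose linguistic term attains the maximum score.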
Note that if numTerms linguistic terms are defined for the input variable th-in, numTerms rule bases have to be implemented in order for the flc to work with the proposed scoring function. Figure 1 shows that seven linguistic terms are defined for the input variable th-in, meaning that seven different rule bases are implemented. We tested different numbers of fuzzy rule bases and found that the higher the number of rule bases, the smoother the variations in the parameter th inferred by the flc, and thus the steadier the flc. However, when considering more than seven fuzzy rule bases, the performance started to degrade somewhat, as it also did with a lower number of fuzzy rule bases. Thus, we opted for seven rule bases, as this yielded the best performance for the flc. This fact also justifies the usage of seven linguistic terms for the input variable th-in, instead of the three linguistic terms used for the remaining input variables. For the remaining input variables, three linguistic terms are used so as to keep the rule bases as simple as possible. Finally, we should note that the different fuzzy rule sets were obtained using expert knowledge.
Once the scores are calculated, the linguistic term with the maximum score is selected. This means that those values of parameter th with a large enough degree of membership to said linguistic term should yield better performance than other values. Therefore, if the linguistic term i is selected, rule base i is enabled. This selected rule base is responsible for adapting the value of parameter th so that it approaches the values represented by term i. For instance, assume that the current value of the parameter th is 0.99 and that the most suitable rule base, considering the scoring function, is the one represented by the linguistic term low of the input variable th-in. This means that, historically, low values of the parameter th have been able to provide good improvements in the original objective value. Thus, the rule base to be applied in this case is precisely the one shown in the left-hand side of Table 1, considering the fuzzy-a approach. Given a fuzzy set for the variable imp with a large degree of membership to the term low, and since th-in (with value 0.99) is represented by a fuzzy set with a large degree of membership to the term high, the output fuzzy set, i.e. the one corresponding to the output variable th-out, will have a large degree of membership to the linguistic term neggiant (ng). Consequently, the value of the parameter th will be considerably decreased so that it tends towards lower values.

Hyper-heuristics
An extension of the hyper-heuristic approach to parameter control first described by Vinkó and Izzo (2007) is implemented in order to control the parameter th in the dcn-thr approach. This hyper-heuristic has been successfully applied in previous papers (Segura et al. 2013b; Segura 2012) and is based on using a scoring strategy and a selection strategy to select the most appropriate low-level configuration of the approach to be executed. A candidate low-level configuration in this case refers to an instance of the diversity-based moea described in Sect. 4 with a particular setting for the variable th of the auxiliary objective dcn-thr (all other parameters of the algorithm remaining constant). Once a configuration is selected, it is executed until a local stopping criterion is achieved. Afterwards, another low-level configuration is selected and executed. The final population of the last low-level configuration used becomes the initial population of the new low-level configuration. This process continues until a global stopping criterion is satisfied. The low-level configuration that must be executed is selected as follows.
First, the scoring strategy assigns a score to each low-level configuration. This score estimates the improvement that each low-level configuration can achieve starting from the currently obtained set of solutions. In order to calculate this estimate, the previous improvements in the original objective value achieved by each configuration are used. The improvement (γ) is defined as the difference, in terms of the original objective value, between the best achieved individual and the best initial individual. For a configuration conf that has been executed j times, the score is calculated as a weighted average of its latest k improvements (Eq. 4).
In Eq. (4), γ[conf][j − i] represents the improvement achieved by configuration conf in execution number j − i. Depending on the value of k, the adaptation level of the hyper-heuristic, i.e. the total amount of historical knowledge that the hyper-heuristic considers in order to make its decisions, can be varied. The weighted average assigns greater importance to the latest executions.
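The scoring strategy can be sketched as follows; the linearly decaying weights are an assumption, since Eq. (4) itself is not reproduced in this excerpt:

```python
def hh_score(improvements, k):
    """Score of one low-level configuration: a weighted average of its
    latest k improvements, with the most recent execution receiving
    the largest weight (linear weights are an assumption)."""
    recent = improvements[-k:]  # adaptation level k: last k executions
    if not recent:
        return 0.0
    weights = range(1, len(recent) + 1)  # oldest -> 1, newest -> len(recent)
    return sum(w * g for w, g in zip(weights, recent)) / sum(weights)
```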
The score s(conf) is used to calculate the probability of selecting a particular low-level configuration. However, the stochastic behaviour of the low-level meta-heuristics involved may lead to variations in the results they obtain. As a result, the probability calculation also enables a fraction of selections to be made at random. Specifically, the hyper-heuristic can be tuned by means of a parameter β, which represents the minimum selection probability that should be assigned to a low-level configuration. If n_h is the number of low-level configurations involved, a random selection following a uniform distribution is performed in a fraction β · n_h of the cases. The probability of selecting each configuration conf is therefore defined as shown in Eq. (5).
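Assuming Eq. (5) takes the common form in which each configuration receives at least β and the remaining probability mass is shared in proportion to the scores, the calculation can be sketched as:

```python
def selection_probs(scores, beta):
    """Selection probability per low-level configuration (sketch of a
    plausible form of Eq. (5)): each of the n_h configurations gets at
    least beta, and the remaining mass 1 - beta * n_h is distributed
    proportionally to the scores."""
    n_h = len(scores)
    total = sum(scores)
    if total == 0.0:  # no historical knowledge yet: uniform selection
        return [1.0 / n_h] * n_h
    return [beta + (1.0 - beta * n_h) * s / total for s in scores]
```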
Two different schemes based on this hyper-heuristic are applied in this paper.
- The first one is a probabilistic version (hh-prob) in which the selection probability is proportional to the score s(conf) (Eq. 5).
- The second one is an elitist version (hh-eli) which always selects the low-level configuration with the maximum score s(conf), in addition to the minimum random selections performed for each configuration.

Self-adaptation
In order to enable the parameter th to undergo self-adaptation, it is encoded within the chromosome, where it is subjected to mutation and crossover operators that change its value during the optimisation process. This relies on the premise that better values of the parameter th will produce better individuals, which in turn will have more opportunities to survive, and consequently propagate these improved parameter values. This is "the idea of the evolution of evolution" (Eiben et al. 2007). In the case of self-adaptive approaches to parameter control, the selection and variation operators of the ea are responsible for changes in the parameter values, i.e. the updating mechanism that adapts the parameters is implicit. This differentiates the method from the previously described flcs and hyper-heuristics, in which the adaptation mechanism is external to the ea used.
Figure 2 shows the chromosome of an individual that considers the self-adaptation of the parameter th. For an individual representing a solution to a problem with D decision variables, the values x[i], i ∈ [0, D − 1], represent these decision variables. Three novel versions of the self-adaptive approach are proposed herein:

- self-a: The value of the parameter th of the best individual in the population (the one with the lowest original objective value) is applied at each generation to calculate the auxiliary objective value of all individuals.
- self-b: The mean value of the parameter th over all individuals in the population is used in each generation to calculate the auxiliary objective value of every individual.
- self-c: The corresponding encoded value of the parameter th is applied to each individual in each generation in order to calculate its own auxiliary objective value.
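The three variants differ only in which th value is fed into the auxiliary objective computation. A minimal sketch, in which an individual is modelled as a (decision_vars, th, objective) tuple (an illustrative representation, not the paper's actual encoding) and the objective is minimised:

```python
def effective_th(population, variant):
    """Return the th value used for each individual's auxiliary
    objective under the three self-adaptive variants."""
    if variant == "self-a":  # th of the best (lowest-objective) individual
        best_th = min(population, key=lambda ind: ind[2])[1]
        return [best_th] * len(population)
    if variant == "self-b":  # population mean of the encoded th values
        mean_th = sum(ind[1] for ind in population) / len(population)
        return [mean_th] * len(population)
    if variant == "self-c":  # each individual uses its own encoded th
        return [ind[1] for ind in population]
    raise ValueError(variant)
```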
In regard to the taxonomy discussed in Sect. 2, which considers the scope affected by the change, self-a and self-b act at the population level, while self-c acts at the individual level. Thus, the encoding of the parameter th into the chromosome can be interpreted in different ways, supplying different algorithm variants in which the scope of the parameter varies.

Experimental evaluation
In this section, the experiments conducted with the diversity-based moea and the parameter control approaches presented in Sects. 4 and 5 are described.

Experimental method
Both the diversity-based moea and the parameter control approaches were implemented using metco (Meta-heuristic-based Extensible Tool for Cooperative Optimisation) (León et al. 2009). Tests were run on a debian gnu/linux computer with four amd® opteron™ processors (model 6164 he) at 1.7 ghz and 64 gb of ram. The compiler was gcc 4.6.3, while the flcs were implemented using the fuzzylite 3.1 library (Rada-Vilela 2013). As all experiments used stochastic algorithms, each execution was repeated 32 times. Comparisons were performed by applying the following statistical analysis. First, a Shapiro-Wilk test was conducted in order to check whether the values of the results followed a normal (Gaussian) distribution or not. If so, the Levene test checked for the homogeneity of the variances. If the samples had equal variances, an anova test was performed. Otherwise, a Welch test was performed. For non-Gaussian distributions, the non-parametric Kruskal-Wallis test was used to compare the medians of the algorithms. A significance level of 5 % was considered.
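The decision cascade above can be summarised as a small dispatcher that maps the preliminary p-values to the comparison test to run (the function name and the reduction of the Shapiro-Wilk results to a single minimum p-value are our own illustrative simplifications):

```python
def choose_test(p_shapiro, p_levene, alpha=0.05):
    """Select the comparison test following the cascade in the text.
    p_shapiro: smallest Shapiro-Wilk p-value over the compared samples;
    p_levene: Levene p-value (only meaningful for Gaussian samples)."""
    if p_shapiro < alpha:  # normality rejected -> non-parametric test
        return "Kruskal-Wallis"
    if p_levene < alpha:   # Gaussian but unequal variances
        return "Welch"
    return "ANOVA"         # Gaussian with equal variances
```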

Problem set
Experiments were carried out using a set of 19 single-objective benchmark problems, f1-f19, proposed by Lozano et al. (2011). The set defines a number of scalable continuous optimisation problems that combine different properties involving modality, separability, and the ease of optimisation dimension by dimension. In the current work, D, the number of variables of these problems, was fixed to 500. Additionally, another experiment was carried out using a set of nine rotated problems, f4-f6, f9-f11, and f14-f16, proposed by Tang et al. (2009). In this case, D was fixed to 1,000 decision variables. In what follows, we will refer to these problems as r4-r6, r9-r11, and r14-r16 to differentiate them from the above benchmarks.

Parameters
The experiments conducted used a common parameterisation for the diversity-based moea and the different parameter control schemes. Table 2 shows the parameterisation of the diversity-based moea described in Sect. 4. In the case of the f1-f19 benchmarks, the population size was fixed to five individuals since, in previous research carried out by the authors (Segura et al. 2013b), it was shown that applying diversity-based moeas with said population size to those problems provided the best results in terms of the error achieved at the end of the executions. Other research has also demonstrated the suitability of eas with small population sizes for solving certain high-dimensional benchmarks (Olguin-Carbajal et al. 2013). In the case of the rotated problems, the population size was fixed to 50 individuals in order to better deal with premature convergence issues. Finally, the stopping criterion was fixed in keeping with the suggestions given in Lozano et al. (2011) for the f1-f19 benchmarks, and in Tang et al. (2009) for the rotated problems.
The parameterisations of the different parameter control approaches are shown in Tables 3 and 4 for the flcs and the hyper-heuristics, respectively. The parameter values of the control methods (minimum selection rate, historical knowledge, number of low-level configurations, number of linguistic terms, etc.) were the same regardless of the problem in question. This means that the control approaches proposed herein are robust, since promising results can be obtained for a wide range of test cases without changing these parameter values. Thus, the parameters of the control methods do not place additional burdens on the configuration of the diversity-based moea. Note also that the hh-eli and hh-prob hyper-heuristics were applied using six low-level configurations (n_h = 6). In general, and based on previous work by the authors, a high number of low-level configurations involves a decrease in the quality of the solutions obtained, because the hyper-heuristic is not able to make the right decisions when a large set of candidate configurations is defined. That is why we selected six low-level configurations instead of assigning a larger value to the parameter n_h. The only difference among the low-level configurations is the value assigned to the parameter th. The values are distributed uniformly in the range [0, 1]. Thus, the low-level configurations are defined with values 0, 0.2, 0.4, 0.6, 0.8, and 1 for the parameter th.
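The uniform grid of th values for the low-level configurations can be generated directly from n_h:

```python
n_h = 6  # number of low-level configurations
# Uniformly spaced th values over [0, 1], one per configuration.
th_values = [i / (n_h - 1) for i in range(n_h)]
# -> [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```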
The parameterisation of the self-adaptive parameter control approaches does not need to be described separately: since no additional parameters are defined for them, their parameterisation is the same as for the diversity-based moea (Table 2).
Finally, a single-objective ea (Single-EA) was also applied to study whether the controlled diversity-based moea offers any benefits with respect to said single-objective optimiser when solving single-objective problems. It was selected because it is very similar to the diversity-based moea considered herein and because it provided the best results for the majority of the f1-f19 benchmark problems in previous research carried out by the authors (Segura et al. 2013b). Said single-objective approach applies an elitist generational survivor selection mechanism, i.e. all parents, except the fittest one, are discarded and replaced by the new offspring for the next generation. The remaining components and parameters were the same as those defined for the diversity-based moea (Sect. 4; Table 2).
In order to carry out a fair comparison among the control approaches, all of the methods are run for the same number of function evaluations, as shown in Table 2 (stopping criterion). One of the main goals of this work is to validate diversity-based moeas, as well as the different control approaches proposed herein to control their parameters, by applying them to a set of well-known single-objective benchmarks. It would be interesting, however, to also apply the aforementioned schemes to more complex real-world applications, a significant number of which have execution times that are highly correlated with the number of function evaluations. Since the evaluation of an individual is the most time-consuming part of the entire optimisation scheme, the remaining operations that it must perform are insignificant in terms of the time invested. As a result, the comparisons carried out in this paper only consider the number of function evaluations.

Analysis of parameter control schemes over a short evaluation time frame

In the first experiment, the various control approaches are applied to the parameter th of the diversity-based moea in order to solve each of the benchmark functions f1-f19. The main aim of this study was to analyse the performance of the different parameter control schemes over a relatively short period of 5 × 10^5 evaluations of the objective functions.
Table 5 shows, for each benchmark function, the mean of the original objective value achieved by the different parameter control approaches after 5 × 10^5 evaluations. In the case of the different versions of the self-adaptive approach, only the data for the scheme that reached the lowest mean of the original objective value after 5 × 10^5 evaluations are shown. Furthermore, the results obtained by the single-objective ea are also shown. The schemes whose data are shown in bold with an asterisk obtained the lowest mean of the original objective value. It is important to note that these approaches exhibited statistically significant differences when compared to the control methods whose data are not shown in bold, based on the statistical procedure described earlier in this section. If the data from several approaches are shown in bold without an asterisk, then those schemes did not exhibit statistically significant differences versus those which obtained the lowest mean of the original objective value. We note the following observations:

- One or both of the fuzzy-a and fuzzy-b flcs obtained a statistically significantly lower mean with respect to the original objective than the remaining approaches in four problems: f1 and f5-f7.
- One or both of the hh-eli and hh-prob hyper-heuristics obtained a statistically significantly lower mean for the original objective value than the remaining schemes in six problems: f2-f4, f12, f15, and f16.
- In five problems (f8-f10, f14, and f18), there were no statistically significant differences between the two types of external controllers.
- The self-adaptive approaches and the single-objective ea did not provide any statistically significant advantage on their own in any of the benchmarks.
Thus, following a short evaluation period, the hyper-heuristic-based schemes, and in particular the hh-eli approach, appear better able to adapt the parameter th than the parameter control approaches based on flcs and self-adaptation. This could be due to the fact that the hyper-heuristic-based methods only have to select the most suitable value for the parameter th from among a finite set of candidate values that coarsely span the parameter space. In contrast, the flcs are able to select from an infinite range of possible values. Thus, given only a short learning period, the hyper-heuristics outperform the other methods, as they are able to exploit existing values that may lie close to the optimal ones, rather than having to explore the space for suitable values. As a result, the methods based on hyper-heuristics are able to provide better solutions than the flcs in the short term for a larger number of problems.
With respect to the self-adaptive approaches, although they did not outperform either of the other control approaches, for some problems they were able to achieve the same quality level as the other parameter control methods. In the case of problem f19, they and the hyper-heuristics jointly outperformed the flcs, while in the case of problem f11, they and the flcs jointly outperformed the hyper-heuristics.
Finally, in regard to the single-objective ea, it was outperformed by the diversity-based moea adapted by the different control methods in the majority of test cases, except for the f13 and f17 benchmarks, where it did not exhibit statistically significant differences from the hh-eli hyper-heuristic.

Analysis of parameter control schemes over a long evaluation time frame
In this section, the parameter control schemes are analysed over a longer evaluation period of 2.5 × 10^6 evaluations, i.e. at the end of the executions, considering the f1-f19 benchmark functions. Hence, Table 6 shows the same information as Table 5, but for 2.5 × 10^6 evaluations. We observe the following:

- One or both of the fuzzy-a and fuzzy-b flcs obtained a statistically significantly lower mean with respect to the original objective value than the remaining approaches in four problems: f3, f7, f12, and f14.
- One or both of the hh-eli and hh-prob hyper-heuristics obtained a statistically significantly lower mean for the original objective value than the remaining schemes in two problems: f2 and f4.
- In 12 problems (f1, f5, f6, f8-f11, f13, f15, f16, f18, and f19), there were no statistically significant differences between the two types of external controllers.
- The self-adaptive approaches, as well as the single-objective ea, did not provide any statistically significant advantage on their own in any of the benchmarks.
It is clear that, given a long enough evaluation time, the parameter control methods become less distinguishable. In the long evaluation period, the two types of external controllers do not exhibit any significant differences in 12 cases, compared to five cases in the short evaluation period. It is also clear that, given a longer running time, the flcs are able to provide the best results in a higher number of problems than the hyper-heuristics. Regarding the self-adaptive approaches, as in the short term, they again did not provide any benefit for any benchmark in the long term, and in fact were outperformed by the flcs and hyper-heuristic-based control schemes in every problem except for the f8 function. Something similar happened in the case of the single-objective ea, which was outperformed by the diversity-based moea adapted by the different control schemes in almost every case, except for the f5 and f17 benchmark functions.

Comparison and analysis between short and long evaluation periods
It is illuminating to compare the change in performance of the methods applied to each of the problems under the two different evaluation scenarios, short term and long term. We define the winning method as the approach that outperforms each of the other schemes (with statistical significance). Thus, the winner is either self-adaptation, hyper-heuristic, flc, Single-EA, or none if no single method differs statistically from all of the others. If we then examine the change in the winning method as we switch from short-term to long-term evaluation, the following observations can be made:

- For 11 problems (f2, f4, f7-f11, f13, and f17-f19), changing the length of the evaluation period has no impact on the winning control method.
- For five problems (f1, f5, f6, f15, and f16), while there is a clear winner in the short-term evaluation experiment, there is no significant difference between methods when evaluated over a longer time period.
- For two problems (f3 and f12), the best method switches from the hyper-heuristic in the short-term evaluation to the flcs in the long-term evaluation experiment.
- For one problem (f14), while there is no significant difference between methods in the short term, the flc emerges as the best method in the long term.
These results suggest that the coarse-grained approach of the hyper-heuristic, which defines a fixed set of possible values for the parameter th spread uniformly across the full range of possible values, provides sufficient variety in this parameter to yield high-quality results in most problems. The results also suggest that the flcs select values similar to those already defined by the hyper-heuristics, and that the problems are relatively robust to the exact value of th over a certain interval. This is evidenced by the fact that, in five problems, the performance of the hyper-heuristics and flcs converged given a long enough evaluation time. In addition, for 12 test cases, the hyper-heuristics and flcs did not present statistically significant differences at the end of the executions.
For three problems (f3, f12, and f14), the flc emerged as the winner in the long term, while either the hyper-heuristic was the clear winner or no method dominated in the short term. This suggests that these problems are particularly sensitive to the parameter th, and that the flc is able to find a value that yields better results, one that is not present in any of the hyper-heuristic configurations.
To summarise, the most appropriate parameter control approach depends on the optimisation problem being solved. However, for the majority of test cases, flcs or hyper-heuristics can be applied to obtain promising results, whereas the self-adaptive approaches are not able to provide any advantage over the other control methods when adapting the parameter th. The results appear to substantiate the statement made in Eiben et al. (2007): "self-adaptive methods are efficient methods when applicable ... but are outperformed by clever adaptive methods". Finally, we should note that the advantages of using diversity-based moeas to solve single-objective problems are proven, since the diversity-based moea adapted by the different control approaches was able to statistically outperform the single-objective ea in most problems.

Comparison of parameter control methods to fixed parameters
The parameter control methods applied herein adapt the value of the parameter th during the course of an evolutionary run; thus, a single run of the optimisation algorithm may utilise many different values over the entire run. In this section, we assess the benefit of adapting the value of the parameter over the course of the run, as opposed to simply choosing a single value that remains fixed throughout. The latter approach requires a suitable value for th to be defined; we define 21 different configurations of the diversity-based moea with the parameterisation shown in Table 2 for the f1-f19 functions.
Every configuration differs only in the value of th.The 21 values of th tested are distributed uniformly in the range [0, 1].
Figure 3 shows the mean of the original objective value achieved after 2.5 × 10^6 evaluations by the parameter control approaches and by the diversity-based moea executed with a range of fixed values of the parameter th (fixed) for several of the benchmark functions. The conclusions drawn from the plots shown can be generalised as follows. Observing the plots, at least one of the parameter control methods was able to obtain either similar or better results than any configuration of the fixed approach in 12 of the 19 problems, namely in problems f3, f6, and f16 shown in Fig. 3, as well as in benchmarks f1, f4, f7, f10, f12-f15, and f19. Thus, it appears that the majority of benchmarks benefit from an approach in which th can be varied during the course of the run, and that a single optimal fixed value of th cannot be found.
On the other hand, in 7 of the 19 test cases (f2, f5, f8, f9, f11, f17, and f18), some configurations of the fixed scheme yielded better results than the parameter control approaches, suggesting that there exists some fixed value for the parameter th that is adequate during the whole optimisation process. An alternative explanation, however, might lie in the fact that adapting th may improve the algorithm, but that the changes in the values of the parameter th take place so quickly that the parameter control approaches are not able to detect them at the rate required. In this case, fixing the parameter to a suitable value produces more robust behaviour in the diversity-based moea. Despite this fact, the results obtained by the control techniques were close to those provided by the best configurations of the fixed approach for this set of seven problems.
It is crucial to note that finding a suitable fixed value for th required 21 separate runs of the optimisation algorithm.These results, however, are compared to a single run of the parameter control methods.Consequently, in addition to the fact that the parameter control methods obtain high-quality solutions for most of the problems, the savings in computational resources and time required to produce a good solution are significant across all problems.
In order to quantify these savings, we conducted an additional analysis that relied on Run-Length Distributions (rlds) (Hoos and Stützle 2005). rlds show the relationship between the success rate and the number of evaluations needed to achieve it, where the success rate of a particular approach is defined as its probability of achieving a certain quality level. In this case, we fix the quality level as the highest median of the original objective value achieved by the considered schemes at the end of the executions, i.e. at 2.5 × 10^6 evaluations.
We further calculate the percentage of evaluations saved by a certain scheme as compared to the approach that required the largest number of evaluations. This is calculated using Eq. (6), where the number of evaluations performed by the scheme considered is denoted by numEvals, and maxEvals is the largest number of evaluations performed by any approach for a particular benchmark function.
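Assuming Eq. (6) takes its natural form, the saved percentage is simply the relative reduction with respect to the most expensive approach:

```python
def pct_saved(num_evals, max_evals):
    """Percentage of evaluations saved by a scheme relative to the
    approach that required the most evaluations (assumed form of
    Eq. (6))."""
    return 100.0 * (max_evals - num_evals) / max_evals
```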
The results are shown in Table 7. For each benchmark problem, the table gives the actual number of evaluations needed by the method that requires the highest number to reach a 50 % success rate. In every other column, the percentage of evaluations saved by the corresponding method is shown. In order to calculate these values, in the case of the self-adaptive control scheme, the approach which obtained the lowest mean of the original objective at the end of the executions was used. The single value of th that was found to be optimal in the experiments with the fixed values was also used. Data in bold highlight the approaches that were able to save the highest number of evaluations for each problem. Finally, the last row shows the mean percentage of saved evaluations over all the test cases. Note that for the majority of benchmarks, the self-adaptive approach invested the highest number of evaluations to achieve a 50 % success rate. The fixed scheme provided the greatest savings in evaluations in 13 benchmark functions. However, it is important to note that in order to run this experiment, 21 separate configurations had to be executed in order to find the best value of th. This represents a significant hidden cost that is not apparent in the table. In contrast, the parameter control methods based on flcs and hyper-heuristics saved the largest number of evaluations in eight test cases and required only a single run of the algorithm. In six of these eight test cases, the fuzzy-b scheme provided the largest savings.
We should mention that the size of the savings achieved by the control approaches was also significant for the 13 test cases in which the fixed scheme provided the greatest savings. For instance, for the function f1, the fixed approach saved 42.86 % of the evaluations, while the fuzzy-b scheme saved 40.82 %. This fact is also evidenced by the mean percentages of saved evaluations: the flcs obtained mean percentages quite close to that of the fixed scheme. This demonstrates the advantages of using the proposed control approaches versus searching for a fixed value for the parameter th.

Evaluation of the control schemes with rotated problems
In the last experiment, the different control approaches are applied to the parameter th of the diversity-based moea in order to solve the rotated problems r4-r6, r9-r11, and r14-r16. Since the adapted scheme in this work is a diversity-based moea, it is interesting to evaluate how said approach preserves diversity through its application to this set of rotated problems. Table 8 shows the same information as Tables 5 and 6. In this case, the self-adaptive approach is not shown because it did not provide any advantage in the previous experiments.
Note that the flcs and hyper-heuristics did not present statistically significant differences between them in any of the problems except for the r4 benchmark, where the hyper-heuristics outperformed the flcs. Moreover, we should note that for most of the benchmarks, the diversity-based moea controlled by flcs and hyper-heuristics yielded better results than those given by the single-objective ea. Only in problems r9 and r15 did the latter not present statistically significant differences with the former. Hence, the ability of the controlled diversity-based moea to preserve proper diversity in a set of solutions is demonstrated once more when compared to a single-objective approach. In addition, we should mention that for some of the rotated problems, the adaptive diversity-based moea was able to provide better results than those given by certain schemes that were specifically designed for said problems, such as a Differential Evolution approach based on a neighbourhood search that promotes intensification with specialised mechanisms (Wang et al. 2010).

Conclusions and future work
Meta-heuristics are a set of approximation techniques that have shown promising performance when solving optimisation problems, although they often suffer from premature convergence, resulting in only local optima being found. In an effort to address this drawback, diversity-based moeas can be applied to single-objective problems, but this introduces a potential weakness by increasing the number of algorithm parameters that need to be tuned. Appropriate parameter setting is now recognised as a critical part of any meta-heuristic design. Parameter tuning approaches attempt to find an optimal set of parameters that remain fixed for the duration of the optimisation procedure. In contrast, parameter control approaches attempt to adapt the values of a parameter during the course of the optimisation, based on the assumption that different values are appropriate at different points in the search.
In this paper, we investigate the application of parameter control approaches to adapt the parameters of a diversity-based moea when it is applied to solve single-objective optimisation problems. Specifically, we attempt to control the parameter th of the auxiliary objective function dcn-thr. We present a novel parameter control method based on fuzzy logic and compare it to a previous method introduced by the authors that is based on hyper-heuristics. The main difference between the two methods lies in the fact that the hyper-heuristic approach requires a fixed set of potential values for the parameter to be pre-defined by the user, whereas the fuzzy logic approach is able to select any value within a range. These two methods of externally controlling the parameter are further compared to self-adaptation of the parameter through encoding in the chromosome. Additionally, the adaptive diversity-based moea is also compared to a single-objective ea of similar characteristics.
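The difference between the two external control interfaces can be sketched as follows. This is an illustrative sketch only: the candidate set, function names, and feedback scores are hypothetical and not taken from the paper.

```python
# Hypothetical sketch contrasting the two external control interfaces for th.
# The hyper-heuristic chooses from a user-defined discrete set, whereas the
# fuzzy logic controller outputs a continuous adjustment.

CANDIDATE_TH_VALUES = [0.1, 0.3, 0.5, 0.7, 0.9]  # pre-defined set (hyper-heuristic)

def select_th_hyper_heuristic(scores):
    """Pick the candidate value with the best observed score.

    scores: dict mapping each candidate th to a quality estimate gathered
    from earlier applications (the hyper-heuristic's feedback)."""
    return max(CANDIDATE_TH_VALUES, key=lambda th: scores.get(th, 0.0))

def select_th_fuzzy(current_th, delta_from_flc):
    """The fuzzy controller emits a continuous adjustment (delta), so any
    value in [0, 1] is reachable, not just the pre-defined candidates."""
    return min(1.0, max(0.0, current_th + delta_from_flc))
```

The contrast captures why the fuzzy approach needs no user-supplied candidate set: its output space is the whole interval rather than a finite menu.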
Extensive testing on a wide set of benchmark functions revealed that both the hyper-heuristic and fuzzy logic methods are able to obtain results that are similar to or better than those obtained using a fixed parameter. Moreover, the savings in computational resources and time provided by both types of control scheme are significant, as it is not necessary to search for suitable values of th over multiple experiments. The fact that better results are obtained for many of the problems compared to the fixed methods also highlights that there is an advantage in adapting the parameter over the course of the run, i.e. through parameter control rather than parameter tuning. In contrast, the self-adaptive approach did not provide any advantage over the other control strategies considered, in terms of either quality or the length of time invested. Finally, the benefits of using the adaptive diversity-based moea to solve single-objective problems were also proven, as this method was able to outperform the single-objective ea used as the comparison approach in the majority of test cases. In fact, the results attained by the controlled diversity-based moea for some of the rotated problems were better than those provided by certain algorithms specifically designed to deal with said rotated problems, even though we did not incorporate specialised components for these problems.
The proposed flc is novel in its use for parameter control, and furthermore in its use of multiple rule bases depending on feedback from the optimisation procedure. To the best of our knowledge, this is the first time an flc has been used to control the parameters of the auxiliary objective of a diversity-based moea. However, the method has more general applicability and could be used in the future to control other numeric parameters of other meta-heuristics, including those in the multi-objective field. If, instead of using the original objective value, the value of the input variable imp is given by some multi-objective performance metric, the flc can be applied to control the parameters of multi-objective algorithms. Furthermore, since the different rule bases were obtained through expert knowledge, it would be interesting to apply some kind of automatic learning mechanism to obtain the different fuzzy rule sets. This might improve the performance of the flcs.
Since our diversity-based approach does not make use of an intensification procedure, and since such intensification schemes usually provide significant benefits, the fact that our approach obtained better results than some specialised schemes is surprising and a clear indication of its promising behaviour. Consequently, the interactions between intensification procedures and diversity-based schemes should be analysed in the future. Finally, it would be interesting to apply our control proposals to adapt the parameters of state-of-the-art algorithms that have provided the best results for the benchmarks considered herein.
Parameter control approaches can be classified as follows:
- Deterministic parameter control. Parameter values are altered by a deterministic rule without using any feedback from the search process.
- Adaptive parameter control. Parameter values are updated by a mechanism that uses some feedback from the search process. This mechanism is externally supplied.
- Self-adaptive parameter control. Parameters are encoded into the chromosome, and their values are modified by the ea variation operators.
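As an illustration of the three categories, the following sketch shows one hypothetical update rule per category. The function names and the concrete rules (linear decay, fixed step size, Gaussian perturbation) are inventions for illustration only, not the mechanisms used in the paper.

```python
import random

def deterministic_control(generation, max_generations):
    """Deterministic: th follows a fixed schedule with no search feedback,
    e.g. a linear decay from 1 to 0 over the run."""
    return 1.0 - generation / max_generations

def adaptive_control(th, improvement):
    """Adaptive: an externally supplied rule updates th from search feedback
    (here, whether the best objective value improved recently)."""
    step = 0.05 if improvement else -0.05
    return min(1.0, max(0.0, th + step))

def self_adaptive_mutation(individual, sigma=0.1):
    """Self-adaptive: th travels inside the chromosome and is perturbed by
    the same variation operators as the decision variables."""
    genes, th = individual
    new_th = min(1.0, max(0.0, th + random.gauss(0.0, sigma)))
    return (genes, new_th)
```

The distinction is where the update logic lives: outside the ea with no feedback, outside the ea with feedback, or inside the evolving individuals themselves.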

are used to guide the adjustment of the parameter th. The pseudocode of the flc is shown in Algorithm 1. It operates as follows:

Algorithm 1 flc pseudocode
1: Initialisation: Generate sample values for the parameter th distributed uniformly in the range [0, 1], considering a certain value as the difference between two consecutive samples
2: for (each generated sample value of the parameter th) do
3:   Learning: Execute numGen generations of the diversity-based moea with said value for the parameter th in order to gather knowledge
4: end for
5: while (diversity-based moea stopping criterion is not satisfied) do
6:   Calculation of input variables: Set the values for the input variables imp, var, th-in, best-th-in
7:   Selection of the rule base: Select the most suitable rule base considering the last k decisions carried out by the flc and the scoring function shown in Equation 3
8:   Fuzzification: Transform the crisp values of the input variables into fuzzy sets using the fuzzification interface
9:   Mamdani's fuzzy inference: Apply the fuzzy and operator (min), the implication method (min) and the aggregation method (max) using the selected rule base to obtain the fuzzy set of the output variable th-out
10:  Defuzzification: Transform the fuzzy set of the output variable th-out into a crisp value Δth using the defuzzification interface (centroid method)
11:  Parameter update: th = th + Δth. The value of th is kept within the range [0, 1]
12:  Execution: Execute numGen generations of the diversity-based moea with the new value of th
13: end while
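The Mamdani inference steps in Algorithm 1 (min conjunction, min implication, max aggregation, centroid defuzzification) can be sketched as follows. This is a minimal one-input sketch: the triangular membership functions, the two-rule base, and the output range are illustrative stand-ins, not the paper's actual fuzzy-a/fuzzy-b rule bases or membership functions.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical fuzzy sets over imp in [0, 1] and the output delta in [-0.2, 0.2]
IMP_SETS = {"low": (0.0, 0.0, 0.5), "high": (0.5, 1.0, 1.0)}
OUT_SETS = {"decrease": (-0.2, -0.2, 0.0), "increase": (0.0, 0.2, 0.2)}

# Illustrative rule base: low improvement -> raise th (promote diversity),
# high improvement -> lower th. With several inputs, the firing strength
# would be the min of the antecedent memberships (the "and" operator).
RULES = [("low", "increase"), ("high", "decrease")]

def mamdani_step(imp, resolution=201):
    """One inference step: returns the crisp adjustment for th."""
    lo, hi = -0.2, 0.2
    universe = [lo + i * (hi - lo) / (resolution - 1) for i in range(resolution)]
    aggregated = [0.0] * resolution
    for antecedent, consequent in RULES:
        w = tri(imp, *IMP_SETS[antecedent])                   # firing strength
        for i, y in enumerate(universe):
            clipped = min(w, tri(y, *OUT_SETS[consequent]))   # min implication
            aggregated[i] = max(aggregated[i], clipped)       # max aggregation
    area = sum(aggregated)
    if area == 0.0:
        return 0.0
    # Centroid defuzzification over the discretised output universe
    return sum(y * m for y, m in zip(universe, aggregated)) / area

def update_th(th, imp):
    """Parameter update with the result kept inside [0, 1]."""
    return min(1.0, max(0.0, th + mamdani_step(imp)))
```

With no improvement (imp = 0) only the "increase" rule fires, so the centroid is positive and th grows, encouraging diversity; with full improvement the adjustment is negative.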

Fig. 1 Membership functions of the input and output variables

Fig. 3 Mean objective value achieved by the parameter control methods and by the diversity-based moea executed with fixed values of the parameter th

Table 1 Rule bases of the fuzzy-a (left-hand side) and fuzzy-b (right-hand side) schemes

Table 2 Parameterisation of the diversity-based moea

Table 5 Mean original objective value of best approaches after 5e5 evaluations

Table 6 Mean original objective value of best approaches after 2.5e6 evaluations

Table 7 Maximum number of evaluations needed to achieve the specified quality level and percentage of evaluations saved by each approach

Table 8 Mean original objective value of best approaches after 3e6 evaluations for the rotated problems