Genetic algorithm


An algorithm is an ordered series of steps that describes the process to be followed in order to solve a specific problem.

The 2006 antenna of NASA's ST5 spacecraft. This complicated shape was found by an evolutionary computer design program to create the best radiation pattern. It is known as an evolved antenna.

In the 1970s, through the work of John Henry Holland, one of the most promising lines of artificial intelligence emerged: genetic algorithms (GAs). They are so called because they are inspired by biological evolution and its genetic-molecular basis.

These algorithms make a population of individuals evolve by subjecting it to random actions similar to those that act in biological evolution (mutation and genetic recombination), as well as to selection. According to some criterion, it is decided which individuals are the fittest, and therefore survive, and which are the least fit, and are discarded.

Genetic algorithms belong to the family of evolutionary algorithms, which also includes evolution strategies, evolutionary programming and genetic programming.

Introduction

Function

Genetic algorithms (GAs) establish an analogy between the set of solutions to a problem, called the phenotype, and the set of individuals in a natural population, encoding the information of each solution in a string, usually binary, called a chromosome. The symbols that make up the string are called genes. When the chromosomes are represented as strings of binary digits, the representation is known as the genotype. The chromosomes evolve through iterations called generations. In each generation, the chromosomes are evaluated using some measure of fitness. The following generations (new chromosomes) are produced by repeatedly applying the genetic operators: selection, crossover, mutation and replacement.

When to use these algorithms

Genetic algorithms have proven effective for optimizing functions that are non-differentiable (or whose derivatives are very complex to compute), although they can be used with any function.
The following considerations should also be taken into account:

  • If the function to be optimized has many local maxima/minima, more iterations of the algorithm will be needed to "ensure" reaching the global maximum/minimum.
  • If the function to be optimized contains several points very close in value to the optimum, we can only "guarantee" that we will find one of them (not necessarily the optimum).

How a basic genetic algorithm works

A genetic algorithm can present several variations, depending on how the replacement of individuals to form the new population is decided. In general terms, the pseudocode consists of the following steps (a minimal sketch in Python follows the list):

Genetic algorithm flowchart. i: initialization, f(X): evaluation, ?: termination condition, Se: selection, Cr: crossover, Mu: mutation, Re: replacement, X*: best solution.
  • Initialization: The initial population, made up of a set of chromosomes representing possible solutions to the problem, is generated randomly. If it is not generated randomly, it is important to ensure structural diversity among these solutions in order to cover as much of the search space as possible, or at least to avoid premature convergence.
  • Evaluation: The fitness function is applied to each chromosome in the population to determine how "good" the encoded solution is.
  • Termination condition: The GA should stop when the optimal solution is reached, but this is generally unknown, so other stopping criteria must be used. Two criteria are common: run the GA for a maximum number of iterations (generations), or stop when the population no longer changes. While the termination condition is not met, the following steps are performed:
    • Selection: Once the fitness of each chromosome is known, the chromosomes that will be crossed in the next generation are chosen. Chromosomes with better fitness are more likely to be selected.
    • Recombination or crossover: Recombination is the main genetic operator; it represents sexual reproduction and operates on two chromosomes at a time to generate two offspring that combine characteristics of both parent chromosomes.
    • Mutation: Randomly modifies part of an individual's chromosome, allowing the algorithm to reach areas of the search space not covered by the individuals of the current population.
    • Replacement: Once the genetic operators have been applied, the best individuals are selected to form the population of the next generation.
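
The following is a minimal sketch of this loop in Python. The problem (a "one-max" toy task: maximize the number of 1-bits in a binary chromosome), the parameter values and all helper names are illustrative assumptions, not taken from the article:

```python
import random

CHROMOSOME_LENGTH = 20   # genes per chromosome (bits)
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01     # per-bit mutation probability

def fitness(chromosome):
    # Evaluation: for one-max, fitness is simply the number of 1-bits.
    return sum(chromosome)

def select(population):
    # Selection: binary tournament; fitter chromosomes are more likely
    # to become parents.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Recombination: single-point crossover yields two offspring.
    point = random.randint(1, CHROMOSOME_LENGTH - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chromosome):
    # Mutation: flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in chromosome]

# Initialization: a random population of binary chromosomes.
population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

# Termination condition: here, simply a fixed number of generations.
for _ in range(GENERATIONS):
    offspring = []
    while len(offspring) < POPULATION_SIZE:
        c1, c2 = crossover(select(population), select(population))
        offspring += [mutate(c1), mutate(c2)]
    # Replacement: keep the best individuals of parents + offspring.
    population = sorted(population + offspring, key=fitness,
                        reverse=True)[:POPULATION_SIZE]

print(max(fitness(c) for c in population))  # best fitness found
```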

Disadvantages and limitations

Among others, we can mention:

  • For high-complexity problems, the evaluation function can become too costly in terms of time and resources. For example, there are real-life cases where simulating the solution proposed by one iteration can take many days and consume considerable processing power and associated resources.
  • There may be cases in which, depending on the parameters used for evaluation, the algorithm does not converge to an optimal solution, or ends in premature convergence with unsatisfactory results (premature convergence may mean convergence to a local optimum or an arbitrary point, affecting long-term results).
  • It is said that they do not scale well with complexity: for systems composed of many variables, components or elements, the search space grows exponentially due, among other things, to the relationships that may arise between them. Therefore, a problem such as the design of an aircraft must be broken down into simple representations, such as aerodynamic profiles, bearing in mind that recombining elements can harm individual performance.
  • The "best" solution is only such in comparison to other solutions, so the criterion for when to stop is not entirely clear, since there is no single specific solution.
  • It is not advisable to use them for problems whose answers converge to simple solutions such as Correct/Incorrect, since the algorithm will hardly converge and the result will be no better than one chosen at random.
  • The design, the creation of the fitness function and the selection of mutation criteria, among other things, require some expertise and knowledge of the problem to obtain good results.

Applications

  • Automated design, including material design research and multi-objective design of automotive components: better crashworthiness, weight savings, aerodynamic improvement, etc.
  • Automated design of industrial equipment.
  • Automated design of trading systems in the financial sector.
  • Construction of phylogenetic trees.
  • Optimization of container loading.
  • Design of water distribution systems.
  • Design of printed circuit topologies.
  • Computer network topology design.
  • In game theory, equilibrium resolution.
  • Gene expression analysis.
  • Robot Behavior Learning.
  • Learning fuzzy logic rules.
  • Language analysis, including grammar induction and other aspects of natural language processing, such as word-sense disambiguation.
  • Mobile communications network infrastructure.
  • Optimization of molecular structures.
  • Multi-criteria production planning.
  • Prediction.
  • Applying genetic algorithms to the iterated prisoner's dilemma.
  • Optimization of data compression systems, for example, using wavelets.
  • Prediction of protein folding.
  • Layout optimization.
  • Network optimization.
  • Prediction of RNA structure.
  • In bioinformatics, multiple sequence alignment.
  • Applications in industrial process planning, including job-shop planning.
  • Optimal selection of mathematical models for the description of biological systems.
  • Solid waste management.
  • Software engineering.
  • Construction of timetables in large universities, avoiding class conflicts.
  • Travelling salesman problem.
  • Finding errors in programs.
  • Optimization of electricity production and distribution.
  • Geodetic network design (design tools).
  • Calibration and detection of damage to civil structures.
  • Calculation of stellar populations in complex stellar systems.

Methodology

Optimization problems

In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) that can be mutated and altered. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.

Evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population at each iteration called a generation. In each generation, the fitness of each individual in the population is evaluated. Fitness is usually the value of the objective function in the optimization problem being solved. The fittest individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when a maximum number of generations have occurred, or a satisfactory fitness level for the population has been reached.

A typical genetic algorithm requires:

  1. A genetic representation of the solution domain.
  2. A fitness function to evaluate the solution domain.

A standard representation of each candidate solution is an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable-length representations can also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.

Once the genetic representation and fitness function are defined, a GA proceeds to initialize a population of solutions and then improve it by iterative application of the mutation, chromosomal crossover, inversion, and selection operators.

Initialization

The size of the population depends on the nature of the problem, but typically contains several hundred or thousands of possible solutions. Often the initial population is randomly generated, allowing for the full range of possible solutions (the search space). Occasionally, solutions can be "seeded" in areas where optimal solutions are likely to be found.

Selection

During each successive generation, a portion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods evaluate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process can be very time-consuming.

The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function always depends on the problem. For example, in the knapsack problem, one wants to maximize the total value of the objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, since the size of the objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of the values of all the objects in the knapsack if the representation is valid, or 0 otherwise.
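
A minimal sketch of this knapsack fitness function in Python; the item values, weights and capacity are made-up assumptions used only for illustration:

```python
values = [60, 100, 120, 30, 90]   # value of each object (illustrative)
weights = [10, 20, 30, 5, 25]     # size/weight of each object
CAPACITY = 50                     # fixed capacity of the knapsack

def fitness(chromosome):
    # chromosome is a bit array: bit i == 1 means object i is packed.
    total_weight = sum(w for w, bit in zip(weights, chromosome) if bit)
    if total_weight > CAPACITY:
        return 0  # invalid representation: objects exceed capacity
    return sum(v for v, bit in zip(values, chromosome) if bit)

print(fitness([1, 1, 0, 1, 0]))  # weight 35 <= 50, so value 190
print(fitness([1, 1, 1, 0, 1]))  # weight 85 > 50, invalid, so 0
```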

In some problems, it is difficult or even impossible to define a fitness expression. In these cases, simulation may be used to determine the fitness value of a phenotype (for example, computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms may be used.

Genetic Operators

The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination) and mutation.

For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods based on the use of two parents are more "biology inspired", some research suggests that more than two "parents" can generate higher-quality chromosomes; a toy sketch of this idea follows.
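
The function below is a hypothetical illustration of multi-parent recombination (the name and setup are not from any particular GA library): each gene of the child is copied from one of the parents, chosen uniformly at random per position.

```python
import random

def multi_parent_crossover(parents):
    # Per-gene uniform recombination across an arbitrary number of parents.
    length = len(parents[0])
    return [random.choice(parents)[i] for i in range(length)]

parents = [[0] * 8, [1] * 8, [0, 1] * 4]
print(multi_parent_crossover(parents))  # e.g. [0, 1, 1, 0, 1, 0, 0, 1]
```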

These processes ultimately result in the next generation population of chromosomes, which is different from the initial generation. Generally, the average fitness will have increased by this procedure, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.

Opinion is divided on the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based searching.

Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction or migration in genetic algorithms.

It is worth adjusting parameters such as the mutation probability, the crossover probability and the population size to find reasonable settings for the class of problem being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to the loss of good solutions, unless elitist selection is used.

Termination

This generational process is repeated until a termination condition is reached. Common termination conditions, two of which are sketched in code after this list, are:

  • There is a solution that meets the minimum criteria.
  • A fixed number of generations is reached.
  • The allocated budget (computation time/money) is reached.
  • The fitness of the highest-ranking solution is reaching or has reached a plateau, such that successive iterations no longer produce better results.
  • Manual inspection.
  • Combinations of the previous ones.
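
A small Python sketch of how two of these criteria (a generation budget and a fitness plateau) might be combined; the function name and thresholds are illustrative assumptions:

```python
def should_terminate(generation, history, max_generations=500, plateau=50):
    """history: best fitness per generation, appended each iteration."""
    if generation >= max_generations:
        return True  # fixed number of generations reached
    # Plateau: no improvement over the last `plateau` generations.
    return (len(history) > plateau
            and max(history[-plateau:]) <= history[-plateau - 1])
```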

The Building Block Hypothesis

Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms often succeed in generating highly fit solutions when applied to practical problems. The building block hypothesis (BBH) consists of:

  • A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low-order, low defining-length schemata with above-average fitness.
  • A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.

Goldberg describes the heuristic as follows:

Short, low-order, highly fit schemata are sampled, recombined (crossed over), and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata (the building blocks), we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.

"Because highly-tuned, low-length, low-order schemes play such an important role in the action of genetic algorithms, we've already given them a special name: building blocks. As a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seeking to approach optimal performance through the juxtaposition of short, low-order, high-performance schemes, or building blocks..&#3. 4;

Despite the lack of consensus regarding the validity of the building block hypothesis, it has been tested and used as a reference over the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold. Although good results have been reported for some classes of problems, skepticism remains regarding the generality and/or practicality of the building block hypothesis as an explanation of GA efficiency. Indeed, there is a reasonable amount of work attempting to understand its limitations from the perspective of estimation of distribution algorithms.

Limitations

There are limitations in using a genetic algorithm compared to alternative optimization algorithms:

  • The repeated evaluation of the fitness function for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real-world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. It is apparent that the amalgamation of approximate models may be one of the most promising approaches to convincingly use GAs to solve complex real-life problems.
  • Genetic algorithms do not scale well with complexity. That is, where the number of elements exposed to mutation is large, there is often an exponential increase in the size of the search space. This makes it extremely difficult to use the technique on problems such as the design of an engine, a house or a plane. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.
  • The "best" solution is only such in comparison to other solutions. As a result, the stopping criterion is not clear in every problem.
  • In many problems, GAs may have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that the algorithm does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or using selection techniques that maintain a diverse population of solutions [12], although the no free lunch theorem [13] proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein any group of individuals of sufficient similarity (the niche radius) has a penalty added, which reduces the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique is simply to replace part of the population with randomly generated individuals when most of the population is too similar to each other. Diversity is important in genetic algorithms (and in genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of their greater reliance on mutation.
  • Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions that may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity in some way and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy", in which parents are not maintained and new parents are selected only from the offspring. This can be more effective on dynamic problems.
  • GAs cannot effectively solve problems in which the only fitness measure is a single right/wrong measure (such as decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.
  • For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g. ant colony optimization, particle swarm optimization), as well as methods based on integer linear programming. The suitability of genetic algorithms depends on the amount of knowledge of the problem; well-known problems often have better, more specialized approaches.

Variants

Chromosome Representation

The simplest algorithm represents each chromosome as a string of bits. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The floating point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered but is really a misnomer, because it does not really represent the building block theory that was proposed by John Henry Holland in the 1970s. This theory is not without support, though, on the basis of theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indices into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosome data types seem to work better or worse for different specific problem domains.

When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
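
A minimal sketch of the standard binary-reflected Gray code conversions (a textbook construction, not specific to any GA library): adjacent integers differ in exactly one bit, so a single mutation can move between neighboring values.

```python
def to_gray(n):
    # Binary-reflected Gray code of a non-negative integer.
    return n ^ (n >> 1)

def from_gray(g):
    # Inverse conversion: XOR-fold the shifted value back down.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

for i in range(8):
    print(i, format(to_gray(i), '03b'), from_gray(to_gray(i)))
```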

Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general, the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained as the set of real values in a finite population of chromosomes forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation. [14][15]

An expansion of the problem domain accessible to genetic algorithms can be obtained through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome. [16] This particular approach allows solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, while the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.

Elitism

A practical variant of the general process of constructing a new population is to allow the best organisms from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.
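
A sketch of elitist replacement in Python, assuming hypothetical `fitness` and `breed_offspring` helpers (not from the article):

```python
ELITE = 2  # number of best individuals copied unchanged (illustrative)

def next_generation(population, fitness, breed_offspring):
    # Carry the ELITE fittest individuals over untouched, then fill the
    # rest of the new population with freshly bred offspring.
    elites = sorted(population, key=fitness, reverse=True)[:ELITE]
    children = breed_offspring(population, len(population) - ELITE)
    return elites + children
```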

Parallel Implementations

Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computing nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction. Other variants, like genetic algorithms for online optimization problems, introduce time dependence or noise into the fitness function.

Adaptive Genetic Algorithm

Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The crossover probability (pc) and mutation probability (pm) largely determine the degree of solution accuracy and the speed of convergence that genetic algorithms can achieve. Instead of using fixed values of pc and pm, AGAs use the population information in each generation and adaptively adjust pc and pm in order to maintain the diversity of the population as well as to sustain the capacity for convergence. In AGA (adaptive genetic algorithm), the adjustment of pc and pm depends on the fitness values of the solutions. In CAGA (clustering-based adaptive genetic algorithm), clustering analysis is used to judge the optimization state of the population, and the adjustment of pc and pm depends on that state. It can be quite effective to combine GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations needed to reach the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding the absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA [citation needed] while overcoming the lack of robustness of hill climbing.
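
A minimal sketch of a fitness-dependent adaptation rule in the spirit of the AGA described above: fitter solutions are disturbed less, weaker ones more. The constants and the exact formula are illustrative assumptions, not a specific published scheme.

```python
PC_MAX, PC_MIN = 0.9, 0.5      # crossover probability bounds
PM_MAX, PM_MIN = 0.1, 0.001    # mutation probability bounds

def adaptive_rates(f, f_avg, f_max):
    """f: fitness of the solution being varied; f_avg, f_max: average
    and maximum fitness of the current population."""
    if f < f_avg or f_max == f_avg:
        return PC_MAX, PM_MAX  # below average (or uniform population): explore
    scale = (f_max - f) / (f_max - f_avg)  # 0 for the best, 1 at the average
    pc = PC_MIN + (PC_MAX - PC_MIN) * scale
    pm = PM_MIN + (PM_MAX - PM_MIN) * scale
    return pc, pm
```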

Genetic variation rules may also have a different meaning in the natural case. For example, provided that steps are stored in consecutive order, crossing over may sum a number of steps from maternal DNA, add a number of steps from paternal DNA, and so on. This is like adding vectors that are more likely to follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order, or any other suitable order, in favour of survival or efficiency.

A variation in which the population as a whole is evolved, rather than its individual members, is known as gene pool recombination.

A series of variations have been developed to attempt to improve the performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the building block hypothesis in adaptively reducing disruptive recombination.

Problem domains

Problems that appear to be particularly suitable for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems.

As a general rule, genetic algorithms might be useful in problem domains that have a complex fitness landscape, as mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima that a traditional algorithm might get stuck in. Observe that commonly used crossover operators cannot change a uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).

History

In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution. Computer simulation of evolution began as early as 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey. His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist Alex Fraser published a series of papers on the simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers have been reprinted by Fogel (1998).

Although Barricelli, in work he reported in 1963, had simulated the evolution of the ability to play a simple game, artificial evolution became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s: Rechenberg's group was able to solve complex engineering problems through evolution strategies. Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Henry Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's schema theorem. Research on GAs remained largely theoretical until the mid-1980s, when the First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.

Commercial products

In the late 1980s, General Electric began selling the world's first genetic algorithm product, a mainframe-based toolbox designed for industrial processes. In 1989, Axcelis, Inc. released Evolver, the first commercial GA product for desktop computers. New York Times technology writer John Markoff wrote about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995. Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its sixth version.

Example

Consider the problem of inserting plus or minus signs between the digits 9 8 7 6 5 4 3 2 1 to build an expression whose value is 100; adding, removing or reordering the digits is not allowed. One solution is 98+7-6+5-4+3-2-1 because evaluating it gives exactly 100.

There are 9 places (one before each digit) to put a plus sign (+), a minus sign (-), or nothing (which means that two or more digits join to make larger numbers, like 98 in the example above, where "nothing" was put before the 9 and before the 8). There are, therefore, 3⁹ = 19683 different expressions (by combinatorics) among which to search for those with the desired sum. It is possible to develop an exhaustive algorithm that generates them all and, given the relative smallness of this example, fully determines all possible configurations. However, this challenge lends itself to illustrating the steps of a genetic process.

Encoding

There is a bijection between the strings of "trits" (ternary digits: 0, 1, 2) of length 9 and the possible sign configurations for each sum. For example, if the convention is adopted that "0" represents "nothing", "1" represents "minus" and "2" represents "plus", then the string of trits 002121211 corresponds to the solution 98+7-6+5-4+3-2-1.
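
A sketch of decoding and evaluating these trit strings, following the convention above; the brute-force enumeration at the end and the fitness suggestion in the comment are illustrative assumptions, not part of the article:

```python
from itertools import product

SIGNS = {'0': '', '1': '-', '2': '+'}
DIGITS = '987654321'

def decode(trits):
    # Insert the sign encoded by each trit before the corresponding digit.
    expr = ''.join(SIGNS[t] + d for t, d in zip(trits, DIGITS))
    return expr.lstrip('+')  # drop a redundant leading '+'

print(decode('002121211'))        # 98+7-6+5-4+3-2-1
print(eval(decode('002121211')))  # 100

# The space is small enough (3**9 = 19683) to enumerate exhaustively; a
# GA could instead score a chromosome as -abs(eval(decode(trits)) - 100).
hits = sum(1 for t in product('012', repeat=9)
           if eval(decode(''.join(t))) == 100)
print(hits)  # number of sign configurations that evaluate to 100
```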

Related Techniques

Parent fields

Genetic algorithms are a subfield of:

  • Evolutionary algorithms
  • Evolutionary Computation
  • Metaheuristics
  • Stochastic optimization
  • Optimization

Related fields

Evolutionary algorithms

Evolutionary algorithms are a subfield of evolutionary computing.

  • Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain. They use self-adaptation to adjust the control parameters of the search. The derandomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
  • Evolutionary programming (EP) involves populations of solutions with mutation and selection and arbitrary representations. It uses self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
  • Estimation of distribution algorithms (EDA) replace traditional reproduction operators with model-guided operators. Such models are learned from the population by employing machine learning techniques, and are represented as probabilistic graphical models from which new solutions can be sampled [45][46] or generated from guided crossover [47].
  • Gene expression programming (GEP) also uses populations of computer programs. These complex computer programs are encoded in simpler linear chromosomes of fixed length, which are afterwards expressed as expression trees. The expression trees, or programs, evolve because the chromosomes undergo mutation and recombination in a manner similar to the canonical GA. But thanks to the special organization of GEP chromosomes, these genetic modifications always result in valid programs. [48]
  • Genetic programming (GP) is a related technique popularized by John Koza, in which computer programs, rather than function parameters, are optimized. Genetic programming often uses tree-based internal data structures to represent the programs for adaptation instead of the list structures typical of genetic algorithms.
  • The grouping genetic algorithm (GGA) is an evolution of the GA where the focus is shifted from individual items, as in classical GAs, to groups or subsets of items. [49] The idea behind this evolution of the GA, proposed by Emanuel Falkenauer, is that solving some complex problems, such as clustering or partitioning problems where a set of items must be split into disjoint groups in an optimal way, is better achieved by making characteristics of the groups of items equivalent to genes. These kinds of problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles, etc., on which classical GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth is arguably the best technique to date.
  • Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is difficult to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preferences.

Swarm Intelligence

  • Ant colony optimization (ACO) uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas. It is considered an estimation of distribution algorithm.
  • Particle swarm optimization (PSO) is a computational method for multi-parameter optimization that also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best-known position and by the swarm's global best-known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems the PSO is often more computationally efficient than GAs, especially in unconstrained problems with continuous variables.

Other evolutionary computing algorithms

  • The memetic algorithm (MA), often called hybrid genetic algorithm among other names, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from memes, which, unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms.
  • Bacteriologic algorithms (BA) are inspired by evolutionary ecology and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, no single individual fits the whole environment, so one needs to reason at the population level. It is also believed that BAs could be successfully applied to complex positioning problems (cell phone antennas, urban planning, and so on) or data mining. [52]
  • The cultural algorithm (CA) consists of a population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space.
  • The differential search algorithm (DS) is inspired by superorganism migration.
  • Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended to maximize the manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimization. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to obtain the information. Because NA maximizes mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adapting the moment matrix, because NA may maximize the disorder (average information) of the Gaussian while simultaneously keeping the mean fitness constant.

Other metaheuristic methods

  • Simulated annealing (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically, based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA by starting with a relatively high rate of mutation and decreasing it over time along a given schedule.
  • Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the one with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list of partial or complete solutions is maintained. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the search traverses the solution space.
  • Extremal optimization (EO), unlike GAs, which work with a population of candidate solutions, evolves a single solution and makes local modifications to its worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of emergent improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.

Other stochastic optimization methods

  • The cross-entropy (CE) method generates candidate solutions via a parameterized probability distribution. The parameters are updated by cross-entropy minimization, so as to generate better samples in the next iteration.
  • Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word "reactive" hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for reactive search include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and metaheuristics.
