<h1 align="center">Differential Evolution - A simple and efficient adaptive scheme for global optimization over continuous spaces</h1>
by Rainer Storn and Kenneth Price
TR-95-012, March 1995
Abstract
A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder&Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
For the De Jong test functions, see http://www.geatbx.com/docu/fcnindex-01.html
The simplest test function is De Jong's function 1, also known as the sphere model. It is continuous, convex, and unimodal.
function definition:
f1(x) = sum(x(i)^2), i = 1:n, -5.12 <= x(i) <= 5.12
global minimum:
f1(x) = 0 at x(i) = 0, i = 1:n
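As a quick illustration, here is a minimal Python sketch of De Jong's function 1 as defined above; the function name de_jong_f1 is our own choice, not from the paper.

```python
import numpy as np

def de_jong_f1(x):
    """De Jong's function 1 (sphere model): f1(x) = sum_i x_i^2."""
    x = np.asarray(x, dtype=float)
    assert np.all(np.abs(x) <= 5.12), "domain is -5.12 <= x_i <= 5.12"
    return np.sum(x ** 2)

# Global minimum f1(0) = 0 at x_i = 0 for all i.
print(de_jong_f1([0.0, 0.0, 0.0]))   # 0.0
print(de_jong_f1([1.0, -2.0, 0.5]))  # 5.25
```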
Introduction
Problems which involve global optimization over continuous spaces are ubiquitous throughout the scientific community. In general, the task is to optimize certain properties of a system by pertinently choosing the system parameters. For convenience, a system's parameters are usually represented as a vector. The standard approach to an optimization problem begins by designing an objective function that can model the problem's objectives while incorporating any constraints. Especially in the circuit design community, methods are in use which do not need an objective function [1], [2], [3]. Although these methods can make formulating a problem simpler, they are usually inferior to techniques which make full use of an objective function. Consequently, we restrict ourselves to optimization methods which fully use the objective function. In most cases, the objective function is designed to transform the optimization problem into a minimization task (a maximization problem can always be recast as minimizing the negated objective). Accordingly, we will limit the following investigation to minimization problems.
When the objective function is nonlinear and non-differentiable, direct search approaches are the methods of choice. The best known of these are the algorithms by Nelder&Mead [4], by Hooke&Jeeves [4], genetic algorithms [5], and evolutionary algorithms [6], [7], the latter being truly continuous counterparts of genetic algorithms. At the heart of every direct search method is a strategy that generates variations of the parameter vectors. Once a variation is generated, a decision must be made whether or not to accept the newly derived parameters. All basic direct search methods use the greedy criterion to make this decision. Under the greedy criterion, a new parameter vector is accepted if and only if it reduces the value of the objective function. Although the greedy decision process converges fairly fast, it runs the risk of becoming trapped in a local minimum. Inherently parallel search techniques like genetic and evolutionary algorithms have some built-in safeguards to forestall misconvergence.
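The greedy criterion is simple enough to state in a few lines. The following Python sketch is illustrative only; the names objective, current, and candidate are hypothetical, and the sphere model from above is reused as an example objective.

```python
import numpy as np

def greedy_step(objective, current, candidate):
    """Greedy criterion: accept the candidate parameter vector
    if and only if it reduces the objective function value."""
    if objective(candidate) < objective(current):
        return candidate   # downhill move: accept
    return current         # otherwise keep the old vector

# Example: one greedy step on the sphere model.
f = lambda x: np.sum(np.asarray(x) ** 2)
x = np.array([1.0, -2.0])
trial = x + np.random.uniform(-0.1, 0.1, size=x.shape)
x = greedy_step(f, x, trial)
```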
By running several vectors simultaneously, superior parameter configurations can help other vectors escape local minima. Another method which can extricate a parameter vector from a local minimum is Simulated Annealing [8], [9], [10]. Annealing relaxes the greedy criterion by occasionally permitting an uphill move. Such moves potentially allow a parameter vector to climb out of a local minimum. As the number of iterations increases, the probability of accepting an uphill move decreases. In the long run, this leads to the greedy criterion. While all direct search methods lend themselves to annealing, it has mostly been used just for the Random Walk, which itself is the simplest case of an evolutionary algorithm [6]. Nevertheless, attempts have been made to anneal other direct searches like the method of Nelder&Mead [10] and genetic algorithms [8], [11].
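The annealed relaxation of the greedy criterion can be sketched as a Metropolis-style acceptance rule. The Python fragment below is an illustration under our own assumptions, not the paper's implementation; in particular, the geometric cooling factor 0.99 is an arbitrary choice.

```python
import math
import random

def anneal_accept(delta, temperature):
    """Metropolis-style acceptance: downhill moves (delta <= 0) are always
    taken; uphill moves are accepted with probability exp(-delta / T),
    which shrinks toward zero as T is lowered, recovering the greedy rule."""
    if delta <= 0.0:
        return True
    return random.random() < math.exp(-delta / temperature)

# A geometric cooling schedule: as T -> 0 the rule becomes greedy.
T = 1.0
for k in range(1000):
    # ... propose a move here and compute delta = f(new) - f(old) ...
    T *= 0.99   # gradually lower the temperature
```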