Optimization techniques are particularly well suited to problems that involve many variables, admit a very large number of possible solutions, and whose outcomes change greatly depending on how those variables are combined, such as the group travel problem or the dormitory assignment problem.
An optimization problem first requires deciding which factors matter most and combining them into a single number: the cost function. With a cost function in hand, the following sections introduce several commonly used optimization algorithms.
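As a concrete, purely hypothetical illustration of the interface the algorithms below expect, a domain can be a list of (min, max) pairs, one per variable, and a cost function maps a candidate solution to a single number. The names domain and example_costf here are placeholders invented for this sketch, not taken from the original problems:

# Hypothetical setup: ten integer-valued variables, each between 0 and 9.
domain = [(0, 9)] * 10

def example_costf(sol):
    # Toy cost: distance from an arbitrary target vector, plus a penalty
    # whenever two adjacent variables take the same value.
    target = [5, 3, 8, 1, 4, 7, 2, 6, 0, 9]
    cost = sum(abs(v - t) for v, t in zip(sol, target))
    cost += sum(3 for a, b in zip(sol, sol[1:]) if a == b)
    return cost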
Random searching
Random searching is not a very good optimization algorithm, but it makes the other algorithms easier to understand and serves as a baseline for evaluating them. It is quite inefficient because it makes no use of the good solutions it has already found.
import random

def randomoptimize(domain, costf):
    best = 999999999
    bestr = None
    for i in range(1000):
        # Create a random solution
        r = [float(random.randint(domain[j][0], domain[j][1]))
             for j in range(len(domain))]
        # Get the cost
        cost = costf(r)
        # Compare it to the best one so far
        if cost < best:
            best = cost
            bestr = r
    # Return the best solution found, not the last one generated
    return bestr
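A call might look like this, using the hypothetical domain and example_costf sketched above; every optimizer below shares the same two-argument interface:

best = randomoptimize(domain, example_costf)
print(best, example_costf(best))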
Hill climbing
Hill climbing starts from a random solution and then looks among its neighboring solutions for a better one. It can easily get stuck in a local optimum; a common remedy is random-restart hill climbing, which runs the hill climber several times from different randomly generated starting points (a sketch of this variant follows the code below).
def hillclimb(domain, costf):
    # Create a random solution
    sol = [random.randint(domain[i][0], domain[i][1])
           for i in range(len(domain))]
    # Main loop
    while True:
        # Create list of neighboring solutions
        neighbors = []
        for j in range(len(domain)):
            # One step away in each direction, staying inside the bounds
            if sol[j] > domain[j][0]:
                neighbors.append(sol[0:j] + [sol[j] - 1] + sol[j + 1:])
            if sol[j] < domain[j][1]:
                neighbors.append(sol[0:j] + [sol[j] + 1] + sol[j + 1:])
        # See what the best solution amongst the neighbors is
        current = costf(sol)
        best = current
        for j in range(len(neighbors)):
            cost = costf(neighbors[j])
            if cost < best:
                best = cost
                sol = neighbors[j]
        # If there's no improvement, then we've reached the top
        if best == current:
            break
    return sol
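The random-restart variant mentioned above is not spelled out in the text; a minimal sketch, assuming the hillclimb function and the cost-function interface defined earlier, could look like this:

def random_restart_hillclimb(domain, costf, restarts=20):
    # Run plain hill climbing from several random starting points and
    # keep the cheapest result, reducing the chance of ending up in a
    # poor local optimum.
    best_sol, best_cost = None, float('inf')
    for _ in range(restarts):
        sol = hillclimb(domain, costf)
        cost = costf(sol)
        if cost < best_cost:
            best_sol, best_cost = sol, cost
    return best_sol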
Simulated annealing
Inspired by physics, simulated annealing accepts not only better solutions but sometimes worse ones as well. As the annealing process goes on, the algorithm becomes less and less likely to accept a worse solution, until in the end it only accepts improvements. A worse solution is accepted with probability p = e^(-(highcost - lowcost) / temperature).
At the start the temperature is high, so this probability is close to 1. As the temperature drops, the difference between the higher and lower cost matters more and more: the bigger the difference, the lower the probability, so the algorithm will tend to accept only slightly worse solutions rather than drastically worse ones.
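To make this concrete, the probability of accepting a solution that is, say, 10 cost units worse can be tabulated at a few temperatures (a small illustrative calculation, not from the original text):

import math

# p = e^(-delta / T) for a solution that is delta cost units worse
for T in (10000.0, 1000.0, 100.0, 10.0, 1.0):
    print("T=%8.1f  p=%.5f" % (T, math.exp(-10.0 / T)))
# At T=10000 the probability is about 0.999; by T=1 it has fallen to
# roughly 0.00005, so only better solutions are accepted in practice.

The full annealing loop below applies this acceptance rule while gradually lowering the temperature.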
import math

def annealingoptimize(domain, costf, T=10000.0, cool=0.95, step=1):
    # Initialize the values randomly
    vec = [float(random.randint(domain[i][0], domain[i][1]))
           for i in range(len(domain))]
    while T > 0.1:
        # Choose one of the indices
        i = random.randint(0, len(domain) - 1)
        # Choose a direction to change it
        direction = random.randint(-step, step)
        # Create a new list with one of the values changed
        vecb = vec[:]
        vecb[i] += direction
        if vecb[i] < domain[i][0]:
            vecb[i] = domain[i][0]
        elif vecb[i] > domain[i][1]:
            vecb[i] = domain[i][1]
        # Calculate the current cost and the new cost
        ea = costf(vec)
        eb = costf(vecb)
        # Acceptance probability for a worse solution: e^(-(eb - ea) / T)
        p = pow(math.e, -(eb - ea) / T)
        # Is it better, or does it make the probability cutoff?
        if eb < ea or random.random() < p:
            vec = vecb
        # Decrease the temperature
        T = T * cool
    return vec
This algorithm also does a good job of keeping execution time down.
Genetic algorithm
Inspired by nature, this algorithm starts by randomly generating a set of solutions, known as the population. In each generation the solutions are ranked, and the best ones at the top are copied directly into a new population; this is known as elitism. The rest of the new population is produced by mutation and crossover.
Mutation
A small, simple, random change made to an existing solution.
Crossover
Two solutions are selected from among the best ones and combined in some way to form a new one.
def geneticoptimize(domain, costf, popsize=50, step=1,
                    mutprob=0.2, elite=0.2, maxiter=100):
    # Mutation operation: shift one value by `step`, staying in range
    def mutate(vec):
        i = random.randint(0, len(domain) - 1)
        if random.random() < 0.5 and vec[i] > domain[i][0]:
            return vec[0:i] + [vec[i] - step] + vec[i + 1:]
        elif vec[i] < domain[i][1]:
            return vec[0:i] + [vec[i] + step] + vec[i + 1:]
        # No change was possible at this position
        return vec

    # Crossover operation: splice two parents at a random point
    def crossover(r1, r2):
        i = random.randint(1, len(domain) - 2)
        return r1[0:i] + r2[i:]

    # Build the initial population
    pop = []
    for i in range(popsize):
        vec = [random.randint(domain[j][0], domain[j][1])
               for j in range(len(domain))]
        pop.append(vec)

    # How many winners from each generation?
    topelite = int(elite * popsize)

    # Main loop
    for i in range(maxiter):
        scores = [(costf(v), v) for v in pop]
        scores.sort()
        ranked = [v for (s, v) in scores]
        # Start with the pure winners
        pop = ranked[0:topelite]
        # Add mutated and bred forms of the winners
        while len(pop) < popsize:
            if random.random() < mutprob:
                # Mutation
                c = random.randint(0, topelite - 1)
                pop.append(mutate(ranked[c]))
            else:
                # Crossover
                c1 = random.randint(0, topelite - 1)
                c2 = random.randint(0, topelite - 1)
                pop.append(crossover(ranked[c1], ranked[c2]))
        # Print current best score
        print(scores[0][0])
    return scores[0][1]
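Usage follows the same pattern as the other optimizers, again with the hypothetical domain and example_costf from the beginning of the section:

best = geneticoptimize(domain, example_costf, popsize=50, maxiter=100)
print(best, example_costf(best))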
For more details, see Programming Collective Intelligence.