Background: While writing code for an Adversarial Autoencoder, the encoder's weights receive gradients from two sources during adversarial training: the reconstruction loss and the discriminator. A problem appears when the optimizers for these different losses are initialized.
Problem code:
# initialize optimizers
self.loss_encoder_decoder, self.opt_encoder_decoder = self.optimizer_encoder_decoder()
self.loss_discriminator, self.opt_discriminator = self.optimizer_discriminator()
self.loss_encoder, self.opt_encoder = self.optimizer_encoder()
TensorFlow error output:
ValueError: Variable AAE/Encoder/layer_0/W/Adam/ already exists, disallowed. Did you mean to set reuse=True in VarScope?
The individual functions are shown below; the problem occurs at the third line of the code above.
def optimizer_encoder_decoder(self):
    # Note: if self.encoder.vars is a plain list, extend() mutates it in place;
    # copying first, e.g. vars = list(self.encoder.vars), is safer.
    vars = self.encoder.vars
    vars.extend(self.decoder.vars)
    loss = ...  # reconstruction loss; its computation is elided in the original
    optimizer = tf.train.AdamOptimizer(learning_rate=self.learn_rate)
    return loss, optimizer.minimize(loss, var_list=vars)
def optimizer_discriminator(self):
    loss = ...  # discriminator loss; its computation is elided in the original
    optimizer = tf.train.AdamOptimizer(learning_rate=self.learn_rate)
    return loss, optimizer.minimize(loss, var_list=self.disor.vars)
def optimizer_encoder(self):
    loss = ...  # encoder's adversarial loss; its computation is elided in the original
    optimizer = tf.train.AdamOptimizer(learning_rate=self.learn_rate)
    return loss, optimizer.minimize(loss, var_list=self.encoder.vars)
Hypothesis:
The failing call creates some "shadow variables" that the optimizer uses to update the weights. These are Adam's slot variables, the per-variable moment accumulators m and v (not the computed gradients themselves): minimize() creates them under each trainable variable's scope, suffixed with the optimizer's name. The encoder variables appear in the var_list of both optimizer_encoder_decoder and optimizer_encoder, and both AdamOptimizer instances use the default name 'Adam', so the third call tries to create a variable scope such as AAE/Encoder/layer_0/W/Adam that the first call already created, hence the ValueError. Moving these shadow variables into a different scope should therefore fix it.
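This can be verified by listing the graph's variables after building the first optimizer. The sketch below uses a hypothetical toy graph (the scope names merely mirror the error message; this is not the author's AAE code), and uses tf.compat.v1 so it also runs under TF 2.x:

import tensorflow.compat.v1 as tf  # the original code targets TF 1.x
tf.disable_v2_behavior()

with tf.variable_scope('AAE/Encoder/layer_0'):
    W = tf.get_variable('W', shape=[4, 4])
loss = tf.reduce_sum(tf.square(W))

tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss, var_list=[W])

# Among the global variables are the two slot variables Adam created for W,
# named after the optimizer: the m accumulator 'AAE/Encoder/layer_0/W/Adam:0'
# and the v accumulator 'AAE/Encoder/layer_0/W/Adam_1:0'.
print([v.name for v in tf.global_variables()])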
Solution:
Pass an explicit name argument when constructing the AdamOptimizer class instead of relying on the default. For example, for the third function, optimizer_encoder:
def optimizer_encoder(self):
    loss = ...  # encoder's adversarial loss; its computation is elided in the original
    optimizer = tf.train.AdamOptimizer(learning_rate=self.learn_rate, name='Adam_en')
    return loss, optimizer.minimize(loss, var_list=self.encoder.vars)
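With name='Adam_en', the encoder's slot variables land under AAE/Encoder/layer_0/W/Adam_en rather than .../W/Adam, so they no longer clash with the slots created by optimizer_encoder_decoder. More generally, any two optimizers whose var_lists overlap need distinct names. A minimal sketch of the clash and the fix, again on a hypothetical toy graph rather than the author's AAE code:

import tensorflow.compat.v1 as tf  # the original code targets TF 1.x
tf.disable_v2_behavior()

with tf.variable_scope('AAE/Encoder/layer_0'):
    W = tf.get_variable('W', shape=[4, 4])
loss = tf.reduce_sum(tf.square(W))

# First optimizer: creates slot variables under .../W/Adam.
opt_ae = tf.train.AdamOptimizer(1e-3).minimize(loss, var_list=[W])

# A second optimizer left at the default name 'Adam' tries to create the
# same slot scopes and, in the TF version used here, raises the ValueError
# shown above. An explicit name puts its slots under .../W/Adam_en instead:
opt_en = tf.train.AdamOptimizer(1e-3, name='Adam_en').minimize(loss, var_list=[W])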