Hello, xlvector. I read your implementation, and there are some parts I could not fully understand. Could you give a more detailed description of the code below?
```python
from operator import itemgetter

import mxnet as mx


class NceAuc(mx.metric.EvalMetric):
    """AUC metric for NCE training, computed via the rank-sum formula."""

    def __init__(self):
        super(NceAuc, self).__init__('nce-auc')

    def update(self, labels, preds):
        label_weight = labels[1].asnumpy()   # 1 for the true word, 0 for noise samples
        preds = preds[0].asnumpy()

        # Flatten into (label, score) pairs and sort by score, descending.
        tmp = []
        for i in range(preds.shape[0]):
            for j in range(preds.shape[1]):
                tmp.append((label_weight[i][j], preds[i][j]))
        tmp = sorted(tmp, key=itemgetter(1), reverse=True)

        m = 0.0   # number of positives
        n = 0.0   # number of negatives
        z = 0.0   # sum of positives' ranks, counted from the bottom of the list
        k = 0
        for a, b in tmp:
            if a > 0.5:
                m += 1.0
                z += len(tmp) - k   # bottom-up rank of this positive
            else:
                n += 1.0
            k += 1
        # Subtract the ranks positives occupy among themselves, leaving the
        # Mann-Whitney U statistic; dividing by m * n gives the AUC.
        z -= m * (m + 1.0) / 2.0
        z /= m
        z /= n
        self.sum_metric += z
        self.num_inst += 1
```
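In case it helps to see the intent: my reading is that `update` computes AUC with the rank-sum (Mann-Whitney) formula rather than by comparing every positive/negative pair directly. A minimal standalone sketch (my own illustration, not code from the repo) that checks the rank-sum version against a brute-force pairwise count:

```python
import random
from operator import itemgetter

def auc_ranksum(pairs):
    """AUC via the rank-sum formula, mirroring the loop in NceAuc.update.

    pairs: list of (label, score); a pair is positive when label > 0.5.
    """
    pairs = sorted(pairs, key=itemgetter(1), reverse=True)
    m = n = z = 0.0          # m: positives, n: negatives, z: rank accumulator
    for k, (label, _) in enumerate(pairs):
        if label > 0.5:
            m += 1.0
            z += len(pairs) - k   # rank counted from the bottom of the list
        else:
            n += 1.0
    # Remove the ranks positives occupy among themselves (Mann-Whitney U).
    z -= m * (m + 1.0) / 2.0
    return z / (m * n)       # fraction of (pos, neg) pairs ranked correctly

def auc_bruteforce(pairs):
    """Reference: directly count positive/negative pairs ordered correctly."""
    pos = [s for l, s in pairs if l > 0.5]
    neg = [s for l, s in pairs if l <= 0.5]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
pairs = ([(1.0, random.random()) for _ in range(20)]
         + [(0.0, random.random()) for _ in range(30)])
print(abs(auc_ranksum(pairs) - auc_bruteforce(pairs)) < 1e-12)
```

The two agree whenever scores are tie-free, which is why the single sorted pass in `NceAuc` is enough; it avoids the O(m·n) pairwise loop.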
For reference, the excerpt from "word2vec/lstm on mxnet with NCE loss" (translated from Chinese): Softmax is the loss function commonly used for multi-class classification, but when the number of classes is very large, its efficiency becomes a problem. In word2vec, for example, every word is its own class, so there may be 100...
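To make the efficiency argument concrete: a full softmax scores all V vocabulary words per position (O(V·d) dot products), while an NCE-style objective scores only the true word plus k sampled noise words (O((k+1)·d)). A toy sketch of the cost difference (my own illustration with made-up sizes, not xlvector's code):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, k = 100_000, 128, 5          # vocab size, embedding dim, noise samples

hidden = rng.standard_normal(d)    # hidden state for one output position
out_w = rng.standard_normal((V, d))  # output word embeddings

# Full softmax: one dot product per vocabulary word -> V * d multiplies.
full_logits = out_w @ hidden
probs = np.exp(full_logits - full_logits.max())
probs /= probs.sum()

# NCE-style: score only the target word plus k sampled noise words
# -> (k + 1) * d multiplies, independent of V.
target = 42                         # hypothetical true-word index
noise = rng.integers(0, V, size=k)  # noise samples (uniform here for brevity)
sampled = np.concatenate(([target], noise))
sampled_logits = out_w[sampled] @ hidden   # just k + 1 dot products
```

With V = 100,000 and k = 5, the sampled version does roughly 16,000× less work per position, which is the motivation the excerpt alludes to.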