Captcha recognition implemented with TensorFlow or NumPy. There are two versions: the TensorFlow version is straightforward because the framework does the heavy lifting, while the NumPy version is more involved since every layer has to be implemented from scratch.
import tensorflow as tf
from tensorflow.keras import Sequential, layers
from dataset.captcha.captcha import load_captcha
# Load the data; each image has shape (24, 72, 3)
(x_train, t_train), (x_test, t_test) = load_captcha()
x_validation, t_validation = x_test, t_test
# Hyperparameters
epochs = 100
batch_size = 128
learning_rate = 1e-1
network = Sequential([
    layers.Conv2D(12, 3, 1, activation=tf.nn.leaky_relu),
    layers.MaxPooling2D(strides=2),
    layers.BatchNormalization(),
    layers.Conv2D(36, 3, 3, activation=tf.nn.leaky_relu),
    layers.BatchNormalization(),
    layers.Conv2D(128, (3, 5), (1, 2), activation=tf.nn.leaky_relu),
    layers.Flatten(),
    layers.BatchNormalization(),
    layers.Dense(128 * 2),
    layers.BatchNormalization(),
    layers.Dense(4 * 36),       # 4 characters, 36 classes each
    layers.Reshape([4, 36])
])
network.build((None, 24, 72, 3))
network.summary()
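As a sanity check on the architecture above, the spatial dimensions can be traced by hand with the 'valid'-padding convolution formula, floor((n - k) / s) + 1. A standalone sketch (not part of the training script; the helper name conv_out is mine):

```python
def conv_out(size, kernel, stride):
    """Output length of a 'valid'-padding convolution along one axis."""
    return (size - kernel) // stride + 1

h, w = 24, 72
h, w = conv_out(h, 3, 1), conv_out(w, 3, 1)  # Conv2D(12, 3, 1)        -> 22 x 70
h, w = conv_out(h, 2, 2), conv_out(w, 2, 2)  # MaxPool 2x2, stride 2   -> 11 x 35
h, w = conv_out(h, 3, 3), conv_out(w, 3, 3)  # Conv2D(36, 3, 3)        ->  3 x 11
h, w = conv_out(h, 3, 1), conv_out(w, 5, 2)  # Conv2D(128, (3,5), (1,2)) -> 1 x 4
print(h, w, h * w * 128)  # 1 4 512 features reach Flatten
```

This confirms the Flatten layer feeds 512 features into the Dense(256) head, which network.summary() should also report.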
def loss_func(y_true, y_pred):
    # Despite the classification task, the original code uses MSE over the
    # (presumably one-hot) targets; tf.losses.MSE reduces over the last axis,
    # so the result is averaged once more across batch and character slots.
    loss = tf.losses.MSE(y_true, y_pred)
    return tf.reduce_mean(loss)
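Because the network emits a (4, 36) score grid per image, Keras's built-in 'accuracy' metric is a questionable fit here; a per-character accuracy can be computed by taking the argmax over the 36-way class axis. A minimal NumPy sketch (the function name and the assumption that targets are one-hot encoded are mine, not from the original):

```python
import numpy as np

def per_char_accuracy(y_true, y_pred):
    """Fraction of captcha characters predicted correctly.

    y_true, y_pred: arrays of shape (batch, 4, 36); the class of each of the
    4 character slots is the argmax over the last (36-way) axis.
    """
    true_idx = np.argmax(y_true, axis=-1)  # (batch, 4)
    pred_idx = np.argmax(y_pred, axis=-1)  # (batch, 4)
    return float(np.mean(true_idx == pred_idx))
```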
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
network.compile(optimizer, loss=loss_func, metrics=['accuracy'])
network.fit(x_train, t_train, epochs=epochs, batch_size=batch_size,
validation_data=(x_test, t_test))
network.evaluate(x_test, t_test)
network.save('model.h5')
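After loading the saved model, each (4, 36) prediction can be turned back into a 4-character string with an argmax over the class axis. A hedged sketch: the character set below (10 digits plus 26 lowercase letters) is an assumption about the Kaggle data and must match whatever encoding load_captcha actually uses:

```python
import numpy as np

# Assumed character set: 36 classes = 10 digits + 26 lowercase letters.
CHARSET = "0123456789abcdefghijklmnopqrstuvwxyz"

def decode_prediction(scores):
    """Map a (4, 36) network output to a 4-character string."""
    indices = np.argmax(scores, axis=-1)  # best class per character slot
    return "".join(CHARSET[i] for i in indices)

# Usage (assuming the model saved above; loss_func must be passed back in):
# model = tf.keras.models.load_model('model.h5',
#                                    custom_objects={'loss_func': loss_func})
# print(decode_prediction(model.predict(x_test[:1])[0]))
```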
Only the TensorFlow implementation is shown here; the NumPy version is available in the Gitee repository:
https://gitee.com/MIEAPP/study-ml
The dataset comes from https://www.kaggle.com/fanbyprinciple/captcha-images