Because of a project requirement, I needed to build a face recognition system. Computer vision is not a direction I particularly like, so I was eager for quick results and wanted to stand up a face recognition system as fast as possible. There are already plenty of related papers and pre-trained models on GitHub; if you want to study this field in depth, this article is not for you. Of course, the time still has to be put in: if you do like this direction, think it through carefully. Neural networks are applied very widely now, and I recommend the TensorFlow framework, which is remarkably simple and convenient to build with. Don't learn only the framework, though; make sure you understand the underlying principles. I recommend the Coursera Andrew Ng Machine Learning course: if you haven't taken it, it's as if you haven't studied the subject at all. It is the classic, it is easy to get into, so study it well and absorb it. All right, let me summarize these 10 days of work here.
[2019/3/11] Since this article seems to be getting a fair amount of attention, I will try to put the code on GitHub in a couple of days; I'm not sure whether I can upload such a large volume of images.
一见妒,材料準(zhǔn)備
Anaconda3-4.2.0
This is an integrated Python 3.5.2 environment. It is particularly good for managing the packages you need.
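For example, day-to-day package management is a single conda command away (just two common ones shown here):

conda list            # list every package installed in the current environment
conda install numpy   # install or update a package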
二盐股,學(xué)習(xí)教程
Python3
The specific tutorial doesn't really matter: you can read a book or find a website and teach yourself. Of course, you need basic Python 3 knowledge first.
Tensorflow
For this I recommend Morvan (莫烦)'s basic TensorFlow video tutorials. To be honest, that course is rather simple; it quickly walks you through the whole process and is really aimed at people who already have some background. So be sure to watch the Coursera Andrew Ng Machine Learning course. It's a classic! A classic! A classic! Important things get said three times. For the framework itself, it's best to go to the official TensorFlow site (hard to open from here) and study it carefully; the TensorFlow Chinese community also works. Pick whatever suits you best.
opencv3
Not much to say here: go straight to the official tutorials, type the code in bit by bit, and experiment bit by bit. The tutorials here are for 3.0; 2.0 differs in some respects, and I think you can start directly with 3.0. I also highly recommend Mao Xingyun (毛星云)'s blog, though you need to know C++ first; then just work through each principle. For everything beyond that, go back to the official documentation, which is authoritative.
三浪听,環(huán)境布置
1.安裝anaconda3
我剛剛給的鏈接是個(gè)exe文件螟碎,所以這我就不用說(shuō)了吧,就是傻瓜式的下一步迹栓,選擇安裝就好掉分。
注意:這里可能有人糾結(jié)選擇什么,我選擇的是這個(gè)。
安裝完成后酥郭,一般所有軟件都會(huì)在這里华坦。
2. Install OpenCV 3
Open the Start menu, find Anaconda Prompt, and run it as administrator; it is really just an ordinary terminal.
Enter the following command, then press y when prompted.
conda install -c https://conda.anaconda.org/menpo opencv3
After installation, you can check that it works with the following commands; no output is the best output here, it means everything imported fine.
python
import cv2
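If you want a more positive confirmation than silence, one extra line in the same session prints the version (the exact string depends on the build you installed):

print(cv2.__version__)   # any 3.x version string means the package is wired up correctly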
3. Install TensorFlow
Likewise, open the Start menu, find Anaconda Prompt, run it as administrator, and enter the commands below. The version used here is TensorFlow 1.3.0.
# After Anaconda is installed, open Anaconda Prompt and create a TensorFlow virtual environment:
conda create -n tensorflow python=3.5
# enter the TensorFlow virtual environment
activate tensorflow
# leave the TensorFlow virtual environment
deactivate tensorflow
# install TensorFlow inside the environment
pip install tensorflow
Tested and working. Once that finishes, you can likewise verify with the following commands.
python
import tensorflow
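For a more explicit check than a silent import, the classic TensorFlow 1.x smoke test works too (a minimal sketch; it just builds and runs a one-node graph):

import tensorflow as tf
# build a trivial graph and run it; seeing the string printed confirms the runtime works
hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))
print(tf.__version__)    # should report 1.3.0 in this setup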
This is why Anaconda3 is so convenient: almost everything is integrated in one place. If you ever want to uninstall, it's simple: remove Anaconda3 itself and everything is cleanly detached from your system. That's how handy it is.
四序仙,源代碼
ps:下面步驟如果沒(méi)提示到的文件和數(shù)據(jù)包突颊,不需要管,我會(huì)在每一個(gè)文件對(duì)應(yīng)需要下載和安裝什么東西诱桂,一步一步進(jìn)行講述
spyder工具
我們使用的編輯工具叫spyder洋丐,anaconda3自帶工具,在開(kāi)始輸入spyder即可找到挥等。
main.py
First go to the OpenCV website, download an OpenCV package, and put the two files below into an xml folder.
Then you can run the following code. Note our ----------------------------ps: markers: once we reach the corresponding section later, you can uncomment those lines and get the effect you are after. Take it slowly; haste makes waste. First see how plain face detection works. This is adapted from an OpenCV demo: it classifies with Haar-like cascades in two steps, first deciding whether a region is a face, and if it is, then checking whether the face has eyes; only then is the region accepted as a face. I won't go into the related papers in this article, you can look them up online; I just want to get you results in the simplest possible way.
Code
# -*- coding: utf-8 -*-
"""
Created on Tue Oct 17 10:14:19 2017
@author: Gavinjou
"""
import cv2
import numpy as np
import datetime

#----------------------------ps: uncomment when we get to age/gender recognition
#import age_sex as myahesex

# import my own emotion module
#----------------------------ps: uncomment when we get to emotion recognition
#import model as mymodel

# import my own head pose module
#----------------------------ps: uncomment when we get to head pose
#import headpose as myheadpose

# path to the Haar face detection cascade data
face_cascade_name = "xml/haarcascade_frontalface_alt2.xml"
# eye detection cascade, used to raise accuracy
eyes_cascade_name = "xml/haarcascade_eye.xml"
# window name
window_name = "Face detection"

# create the face cascade classifier
face_cascade = cv2.CascadeClassifier(face_cascade_name)
if face_cascade.empty():
    raise IOError('Unable to load the face cascade classifier xml file')

# create the eye cascade classifier
eyes_cascade = cv2.CascadeClassifier(eyes_cascade_name)
if eyes_cascade.empty():
    raise IOError('Unable to load the eye cascade classifier xml file')

# age brackets
age_list=['(0, 2)','(4, 6)','(8, 12)','(15, 20)','(25, 32)','(38, 43)','(48, 53)','(60, 100)']
# genders
gender_list=['Male','Female']

# get the age classifier
#----------------------------ps: uncomment when we get to age/gender recognition
#age_net=myahesex.get_age_net()
# get the gender classifier
#----------------------------ps: uncomment when we get to age/gender recognition
#gender_net = myahesex.get_gender_net()
# detect faces and draw boxes
def detectAndDisplay(frame, scale):
    # algorithm start time
    startTime = datetime.datetime.now()
    # convert the frame to grayscale
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # equalize the grayscale histogram
    frame_gray = cv2.equalizeHist(frame_gray)
    rows, cols = frame_gray.shape
    # shrink the grayscale image (bicubic interpolation) to speed up detection
    smallImage = cv2.resize(frame_gray, (int(round(cols/scale)), int(round(rows/scale))), interpolation=cv2.INTER_CUBIC)
    # face detection
    faces = face_cascade.detectMultiScale(smallImage, 1.1, 2, cv2.CASCADE_SCALE_IMAGE, (30, 30))
    index = 1
    for faceRect in faces:
        x, y, w, h = faceRect
        # top-left corner (mapped back to full resolution)
        LUpoint = (int(round(x * scale)), int(round(y * scale)))
        # bottom-right corner
        RDpoint = (int(round((x+w-1) * scale)), int(round((y+h-1) * scale)))
        # grayscale face ROI
        faceROI = frame_gray[int(round(y * scale)):int(round((y+h-1) * scale)), int(round(x * scale)):int(round((x+w-1) * scale))]
        # 3-channel face ROI
        faceROI2 = frame[int(round(y * scale)):int(round((y+h-1) * scale)), int(round(x * scale)):int(round((x+w-1) * scale))]
        # eye detection; require exactly two eyes before accepting the face
        eyes = eyes_cascade.detectMultiScale(faceROI, 1.1, 2, cv2.CASCADE_SCALE_IMAGE, (30, 30))
        if len(eyes) != 2:
            continue
        # get the head pose angles
        #----------------------------ps: uncomment when we get to head pose
        #pitch,yaw,roll = myheadpose.predict_head_pose(faceROI2)
        #print(pitch,yaw,roll)
        # get the gender
        #----------------------------ps: uncomment when we get to age/gender recognition
        #gender_prediction = gender_net.predict([faceROI2])
        #print(gender_list[gender_prediction[0].argmax()])
        # get the age
        #----------------------------ps: uncomment when we get to age/gender recognition
        #age_prediction = age_net.predict([faceROI2])
        #print(age_list[age_prediction[0].argmax()])
        # get all emotion scores
        #----------------------------ps: uncomment when we get to emotion recognition
        #facemodel = mymodel.predict_emotion(faceROI)
        # index of the highest score
        #----------------------------ps: uncomment when we get to emotion recognition
        #_positon = np.argmax(facemodel)
        # draw the rectangle
        cv2.rectangle(frame, LUpoint, RDpoint, (0, 0, 255), 2, 8)
        # label the face with its index
        cv2.putText(frame, str(index), LUpoint, cv2.FONT_HERSHEY_SIMPLEX, 2.0, (0, 0, 255))
        #----------------------------ps: uncomment when we get to emotion recognition
        #cv2.putText(frame,mymodel.emotion_labels[_positon],LUpoint,cv2.FONT_HERSHEY_SIMPLEX,1.0,(0, 0, 255))
        index += 1
    cv2.imshow(window_name, frame)
    # algorithm end time
    endTime = datetime.datetime.now()
    print(endTime - startTime)
#---detectAndDisplay

# main
# create the window
cv2.namedWindow(window_name, cv2.WINDOW_NORMAL)
capture = cv2.VideoCapture(0)
while capture.isOpened():
    ret, frame = capture.read()
    # check whether a frame was grabbed
    if ret:
        detectAndDisplay(frame, 2.0)
    # press q to quit
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break
# release the camera
capture.release()
cv2.destroyAllWindows()
Result
model.py
These are the articles I referenced: the first is the reference-code article, the second is the original author's article, and the third is a Keras installation tutorial. The model comes pre-trained, so it can be called directly; I won't write a separate test section here, since the code from the reference-code article can be used for testing as-is.
http://blog.csdn.net/sinat_26917383/article/details/72885715
https://github.com/JostineHo/mememoji
http://blog.csdn.net/shenziheng1/article/details/69664920
Now download the folder from the author's GitHub and place it into the directory structure we set up earlier. The code below can then be used for testing.
Install Keras
For the concrete steps, please follow the third article above.
Code
# -*- coding: utf-8 -*-
"""
Created on Wed Oct 18 15:36:48 2017
@author: Gavinjou
"""
import cv2
import sys
import json
import time
import numpy as np
from keras.models import model_from_json
root_model = "real-time_emotion_analyzer-master"
# emotion labels:
# angry, fear, happy, sad, surprise, neutral
emotion_labels = ['angry', 'fear', 'happy', 'sad', 'surprise', 'neutral']

# load json and create model arch
json_file = open(root_model+'/model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
print("Keras model loaded")
model = model_from_json(loaded_model_json)
# load weights into new model
model.load_weights(root_model+'/model.h5')
print("weights loaded")

# prediction function
def predict_emotion(face_image_gray):
    # the network expects a 48x48 grayscale face
    resized_img = cv2.resize(face_image_gray, (48, 48), interpolation=cv2.INTER_AREA)
    # cv2.imwrite(str(index)+'.png', resized_img)
    image = resized_img.reshape(1, 1, 48, 48)
    list_of_list = model.predict(image, batch_size=1, verbose=1)
    angry, fear, happy, sad, surprise, neutral = [prob for lst in list_of_list for prob in lst]
    return [angry, fear, happy, sad, surprise, neutral]

#img_gray = cv2.imread('C:/Users/Gavinjou/Desktop/FaceRecognation/real-time_emotion_analyzer-master/meme_faces/happy-fear.png')
#img_gray = cv2.cvtColor(img_gray, cv2.COLOR_BGR2GRAY)
#angry, fear, happy, sad, surprise, neutral = predict_emotion(img_gray)
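If you want to try model.py on its own before wiring it into main.py, here is a minimal sketch (test_face.png is a placeholder for any face image you have on disk; it assumes model.py sits next to the real-time_emotion_analyzer-master folder):

import cv2
import numpy as np
import model as mymodel

# load any face image and convert to grayscale, which is what predict_emotion expects
img = cv2.imread('test_face.png')   # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
probs = mymodel.predict_emotion(gray)
# the index of the largest probability maps straight into emotion_labels
print(mymodel.emotion_labels[int(np.argmax(probs))])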
Result
Remember the ----------------------------ps: markers in main.py? Uncomment all the statements under the markers that say "uncomment when we get to emotion recognition", then run main.py.
headpose.py
For head pose estimation I followed this article, which is by the original author. Honestly, it is thorough enough: every demo shows you how to use it, the models are already trained, and you simply call them.
Download the folder from the author's GitHub and place it into the directory structure from earlier; the code below can then be used for testing. Remember to rename the folder to match mine, although it hardly matters, it is only a path issue, and editing the paths in the code yourself works just as well. I won't walk through the test code; the author's article lays the demos out plainly, so write one yourself and try it.
Install dlib
Open Anaconda Prompt and enter the following command to install dlib:
conda install -c conda-forge dlib=19.4
Note: this was the part that cost me the most time. I don't know whether it will install cleanly for you; I followed several articles without success, and then some article I can no longer locate got dlib installed with this single command. If it doesn't work, try the articles I originally followed, linked below, although they never worked for me (they kept failing with some Unicode error), so I gave that route up. The versions I used were boost 1.57.0, cmake 3.8.2, and dlib 19.4.0.
http://blog.csdn.net/insanity666/article/details/72235275
http://www.reibang.com/p/004c99828af2
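However you end up installing it, a quick sanity check is worth running before moving on (a minimal sketch; the version print assumes the conda-forge 19.4 build):

import dlib
# if the native module loaded, both lines below run without error
print(dlib.__version__)                      # expect something like 19.4.0
detector = dlib.get_frontal_face_detector()  # constructing a detector exercises the compiled core
print(detector)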
Code
# -*- coding: utf-8 -*-
"""
Created on Wed Oct 18 20:10:15 2017
@author: Gavinjou
"""
import tensorflow as tf
from deepgazemaster.deepgaze.head_pose_estimation import CnnHeadPoseEstimator

# expected input size
width = 64
height = 64

sess = tf.Session()
my_head_pose_estimator = CnnHeadPoseEstimator(sess)
my_head_pose_estimator.load_pitch_variables("deepgazemaster/etc/tensorflow/head_pose/pitch/cnn_cccdd_30k.tf")
# note: if loading fails on the next line, check whether this file also carries the .tf extension on disk
my_head_pose_estimator.load_yaw_variables("deepgazemaster/etc/tensorflow/head_pose/yaw/cnn_cccdd_30k")
my_head_pose_estimator.load_roll_variables("deepgazemaster/etc/tensorflow/head_pose/roll/cnn_cccdd_30k.tf")

# the input image must be square, with shape at least (64, 64, 3)
def predict_head_pose(face_image):
    #resized_img = cv2.resize(face_image, (width,height), interpolation = cv2.INTER_AREA)
    pitch = my_head_pose_estimator.return_pitch(face_image)
    yaw = my_head_pose_estimator.return_yaw(face_image)
    roll = my_head_pose_estimator.return_roll(face_image)
    return [pitch, yaw, roll]
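A minimal standalone test of headpose.py might look like this (face.jpg is a placeholder; deepgaze wants a square BGR face crop of at least 64x64 pixels):

import cv2
import headpose as myheadpose

face = cv2.imread('face.jpg')   # placeholder: any square face crop, at least 64x64
pitch, yaw, roll = myheadpose.predict_head_pose(face)
# each value comes back as a small numpy array holding the angle in degrees
print(pitch, yaw, roll)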
Result
Remember the ----------------------------ps: markers in main.py? Uncomment all the statements under the markers that say "uncomment when we get to head pose", then run main.py.
Here I added one extra print statement, and the output then looks like the figure below.
age_sex.py
For the age and gender recognition part I followed the first author's article below, which uses the Caffe framework. Newer Caffe builds apparently still cause problems, so I will again walk you through running my entire codebase step by step; the second article covers the Caffe modifications. No need to worry, we'll keep going one step at a time.
https://github.com/GilLevi/AgeGenderDeepLearning
http://blog.csdn.net/gzljss/article/details/45849013
Download the author's entire codebase and put it in the project root directory.
That is still not everything: you also need the author's pre-trained models placed into that master folder. The linked article is also the reference article for the original author's models; read it properly when you have time.
I downloaded the author's original trained models. Once the download finishes, unzip it, create a folder named cnn_age_gender_models, and put all of the extracted files inside.
Install Caffe
There are very few guides for installing Caffe with Anaconda3 + Python 3.5, and the ones that exist are especially painful, full of build headaches, and not guaranteed to succeed. Thanks go to the two people who answered in the Zhihu thread (the second link below), who explain how to make Caffe callable from Anaconda3. And if you would rather not compile at all, count yourself lucky: the first link already provides a pre-built Python 3.5 version. Next I'll go through the installation step by step.
https://github.com/BVLC/caffe/tree/windows
https://www.zhihu.com/question/34119328
First open the first link and download Caffe into our program's root directory.
Because the code does not need Caffe added to the Anaconda3 environment, I won't demonstrate how to put it there; the code simply appends the Caffe path itself.
Modify Caffe
Reference article
Open caffe/python/caffe/io.py and change line 258 to the following code:
if ms != self.inputs[in_][1:]:
    # rescale the mean to the input shape instead of raising an error
    in_shape = self.inputs[in_][1:]
    m_min, m_max = mean.min(), mean.max()
    normal_mean = (mean - m_min) / (m_max - m_min)
    mean = resize_image(normal_mean.transpose((1,2,0)), in_shape[1:]).transpose((2,0,1)) * (m_max - m_min) + m_min
    #raise ValueError('Mean shape incompatible with input shape.')
Open caffe/python/caffe/classifier.py and modify line 96 in the same spirit; see the reference article above for the exact change.
Code
import os
import numpy as np
import matplotlib.pyplot as plt

caffe_root = './caffe/'
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe

plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# load the mean image
mean_filename = './AgeGenderDeepLearning-master/cnn_age_gender_models/mean.binaryproto'
proto_data = open(mean_filename, "rb").read()
a = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
mean = caffe.io.blobproto_to_array(a)[0]

"""
age_net_pretrained='./AgeGenderDeepLearning-master/cnn_age_gender_models/age_net.caffemodel'
age_net_model_file='./AgeGenderDeepLearning-master/cnn_age_gender_models/deploy_age.prototxt'
age_net = caffe.Classifier(age_net_model_file, age_net_pretrained,
                           mean=mean,
                           channel_swap=(2,1,0),
                           raw_scale=255,
                           image_dims=(256, 256))
gender_net_pretrained='./AgeGenderDeepLearning-master/cnn_age_gender_models/gender_net.caffemodel'
gender_net_model_file='./AgeGenderDeepLearning-master/cnn_age_gender_models/deploy_gender.prototxt'
gender_net = caffe.Classifier(gender_net_model_file, gender_net_pretrained,
                              mean=mean,
                              channel_swap=(2,1,0),
                              raw_scale=255,
                              image_dims=(256, 256))
"""

# build the age classifier
def get_age_net():
    age_net_pretrained = './AgeGenderDeepLearning-master/cnn_age_gender_models/age_net.caffemodel'
    age_net_model_file = './AgeGenderDeepLearning-master/cnn_age_gender_models/deploy_age.prototxt'
    age_net = caffe.Classifier(age_net_model_file, age_net_pretrained,
                               mean=mean,
                               channel_swap=(2,1,0),
                               raw_scale=255,
                               image_dims=(256, 256))
    return age_net

# build the gender classifier
def get_gender_net():
    gender_net_pretrained = './AgeGenderDeepLearning-master/cnn_age_gender_models/gender_net.caffemodel'
    gender_net_model_file = './AgeGenderDeepLearning-master/cnn_age_gender_models/deploy_gender.prototxt'
    gender_net = caffe.Classifier(gender_net_model_file, gender_net_pretrained,
                                  mean=mean,
                                  channel_swap=(2,1,0),
                                  raw_scale=255,
                                  image_dims=(256, 256))
    return gender_net

"""
# standalone test of the gender net
gender_net = get_gender_net()
age_list=['(0, 2)','(4, 6)','(8, 12)','(15, 20)','(25, 32)','(38, 43)','(48, 53)','(60, 100)']
gender_list=['Male','Female']
example_image = './AgeGenderDeepLearning-master/cnn_age_gender_models/example_image.jpg'
input_image = caffe.io.load_image(example_image)
print(input_image.shape)
_ = plt.imshow(input_image)
prediction = gender_net.predict([input_image])
print('predicted gender:', gender_list[prediction[0].argmax()])
"""
Result
Remember the ----------------------------ps: markers in main.py? Uncomment all the statements under the markers that say "uncomment when we get to age/gender recognition", then run main.py.
You can see the predicted gender is Female and the age lands in the (38, 43) bracket... Well, damn. Apparently I'm a woman, and an old one at that...
V. Summary
Everything above is quite basic, really just stitching existing pieces together, but you get a working model very quickly, which is satisfying. Of course, if you want to research this area properly, doing it yourself is best; I have no real intention of pursuing this direction, so I just wanted something I could call quickly. The program does run somewhat slowly; if you have suggestions, leave me a comment, and if you run into configuration problems you can leave a comment too. I check Jianshu pretty much every day.
Finally, here is one more piece of age/gender code. I based it on the code from this article, but the model it trains is seriously problematic: the loss never went down, and calling it kept failing in various ways. I ran the data for two days with nothing to show for it. Once I have studied TensorFlow more carefully I will tidy it up; for now I am just keeping it here.
import os
import glob
import tensorflow as tf
from tensorflow.contrib.layers import *
from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3_base
import numpy as np
from random import shuffle
import datetime

# age brackets
age_table=['(0, 2)','(4, 6)','(8, 12)','(15, 20)','(25, 32)','(38, 43)','(48, 53)','(60, 100)']
# genders
sex_table=['f','m']  # f: female; m: male

# AGE == True trains the age model; False trains the gender model
AGE = False
if AGE == True:
    # number of labels
    lables_size = len(age_table)  # age
else:
    # number of labels
    lables_size = len(sex_table)  # gender

face_set_fold = 'AdienceBenchmarkOfUnfilteredFacesForGenderAndAgeClassification'
# build the metadata file paths
fold_0_data = os.path.join(face_set_fold, 'fold_0_data.txt')
fold_1_data = os.path.join(face_set_fold, 'fold_1_data.txt')
fold_2_data = os.path.join(face_set_fold, 'fold_2_data.txt')
fold_3_data = os.path.join(face_set_fold, 'fold_3_data.txt')
fold_4_data = os.path.join(face_set_fold, 'fold_4_data.txt')
face_image_set = os.path.join(face_set_fold, 'aligned')
# parse one fold_x_data.txt metadata file into [filename, label] pairs
def parse_data(fold_x_data):
    # accumulated data set
    data_set = []
    with open(fold_x_data, 'r') as f:
        # flag for the first line, which is a header row and is skipped entirely
        line_one = True
        for line in f:
            tmp = []
            # skip the header line
            if line_one == True:
                line_one = False
                continue
            # folder number the image lives in
            tmp.append(line.split('\t')[0])
            # image file name
            tmp.append(line.split('\t')[1])
            # age bracket
            tmp.append(line.split('\t')[3])
            # gender
            tmp.append(line.split('\t')[4])
            # check whether the corresponding folder exists
            file_path = os.path.join(face_image_set, tmp[0])
            if os.path.exists(file_path):
                # all images in that folder
                filenames = glob.glob(file_path + "/*.jpg")
                # find the image among those files
                for filename in filenames:
                    if tmp[1] in filename:
                        break
                # keep the sample in memory
                if AGE == True:
                    if tmp[2] in age_table:
                        data_set.append([filename, age_table.index(tmp[2])])
                else:
                    if tmp[3] in sex_table:
                        data_set.append([filename, sex_table.index(tmp[3])])
    # return the data set
    return data_set
"""
#------讀取數(shù)據(jù)
startTime = datetime.datetime.now()
#讀取所有文件的數(shù)據(jù)集
data_set_0 = parse_data(fold_0_data)
data_set_1 = parse_data(fold_1_data)
data_set_2 = parse_data(fold_2_data)
data_set_3 = parse_data(fold_3_data)
data_set_4 = parse_data(fold_4_data)
#合并所有數(shù)據(jù)
data_set = data_set_0 + data_set_1 + data_set_2 + data_set_3 + data_set_4
#打亂數(shù)據(jù)
shuffle(data_set)
endTime = datetime.datetime.now()
print ("完成讀取數(shù)據(jù)時(shí)間:"+str(endTime - startTime))
#------讀取數(shù)據(jù)
"""
# target image size
IMAGE_HEIGHT = 227
IMAGE_WIDTH = 227

# small graph for reading and resizing one image
# placeholder for the raw file bytes
jpg_data = tf.placeholder(dtype=tf.string)
# decode the jpg
decode_jpg = tf.image.decode_jpeg(jpg_data, channels=3)
# resize the decoded image
resize = tf.image.resize_images(decode_jpg, [IMAGE_HEIGHT, IMAGE_WIDTH])
# cast, then scale into [0, 1]
resize = tf.cast(resize, tf.uint8) / 255

# read an image file and resize it
def resize_image(file_name):
    # read the raw bytes
    with tf.gfile.FastGFile(file_name, 'rb') as f:
        image_data = f.read()
    # run the resize graph
    with tf.Session() as sess:
        image = sess.run(resize, feed_dict={jpg_data: image_data})
        return image
# batch fetching
pointer = 0
def get_next_batch(data_set, batch_size=128):
    global pointer
    batch_x = []
    batch_y = []
    for i in range(batch_size):
        batch_x.append(resize_image(data_set[pointer][0]))
        batch_y.append(data_set[pointer][1])
        pointer += 1
    return batch_x, batch_y

# batch size
#batch_size = 128
batch_size = 1
# total number of batches
#num_batch = len(data_set) // batch_size
num_batch = 1
print("total number of batches --- " + str(num_batch))

# input placeholder shape
X = tf.placeholder(dtype=tf.float32, shape=[batch_size, IMAGE_HEIGHT, IMAGE_WIDTH, 3])
# label placeholder shape
Y = tf.placeholder(dtype=tf.int32, shape=[batch_size])
def conv_net(nlabels, images, pkeep=1.0):
    weights_regularizer = tf.contrib.layers.l2_regularizer(0.0005)
    with tf.variable_scope("conv_net", "conv_net", [images], reuse=True) as scope:
        with tf.contrib.slim.arg_scope([convolution2d, fully_connected], weights_regularizer=weights_regularizer, biases_initializer=tf.constant_initializer(1.), weights_initializer=tf.random_normal_initializer(stddev=0.005), trainable=True):
            with tf.contrib.slim.arg_scope([convolution2d], weights_initializer=tf.random_normal_initializer(stddev=0.01)):
                conv1 = convolution2d(images, 96, [7,7], [4, 4], padding='VALID', biases_initializer=tf.constant_initializer(0.), scope='conv1')
                pool1 = max_pool2d(conv1, 3, 2, padding='VALID', scope='pool1')
                norm1 = tf.nn.local_response_normalization(pool1, 5, alpha=0.0001, beta=0.75, name='norm1')
                conv2 = convolution2d(norm1, 256, [5, 5], [1, 1], padding='SAME', scope='conv2')
                pool2 = max_pool2d(conv2, 3, 2, padding='VALID', scope='pool2')
                norm2 = tf.nn.local_response_normalization(pool2, 5, alpha=0.0001, beta=0.75, name='norm2')
                conv3 = convolution2d(norm2, 384, [3, 3], [1, 1], biases_initializer=tf.constant_initializer(0.), padding='SAME', scope='conv3')
                pool3 = max_pool2d(conv3, 3, 2, padding='VALID', scope='pool3')
                flat = tf.reshape(pool3, [-1, 384*6*6], name='reshape')
                full1 = fully_connected(flat, 512, scope='full1')
                drop1 = tf.nn.dropout(full1, pkeep, name='drop1')
                full2 = fully_connected(drop1, 512, scope='full2')
                drop2 = tf.nn.dropout(full2, pkeep, name='drop2')
    with tf.variable_scope('output', reuse=True) as scope:
        weights = tf.Variable(tf.random_normal([512, nlabels], mean=0.0, stddev=0.01), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[nlabels], dtype=tf.float32), name='biases')
        output = tf.add(tf.matmul(drop2, weights), biases, name=scope.name)
    return output
"""
def training():
logits = conv_net(lables_size, X)
def optimizer(eta, loss_fn):
global_step = tf.Variable(0, trainable=False)
optz = lambda lr: tf.train.MomentumOptimizer(lr, 0.9)
lr_decay_fn = lambda lr,global_step : tf.train.exponential_decay(lr, global_step, 100, 0.97, staircase=True)
return tf.contrib.layers.optimize_loss(loss_fn, global_step, eta, optz, clip_gradients=4., learning_rate_decay_fn=lr_decay_fn)
def loss(logits, labels):
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits = logits, labels = labels)
cross_entropy_mean = tf.reduce_mean(cross_entropy)
regularization_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_loss = cross_entropy_mean + 0.01 * sum(regularization_losses)
loss_averages = tf.train.ExponentialMovingAverage(0.9)
loss_averages_op = loss_averages.apply([cross_entropy_mean] + [total_loss])
with tf.control_dependencies([loss_averages_op]):
total_loss = tf.identity(total_loss)
return total_loss
# loss
total_loss = loss(logits, Y)
# optimizer
train_op = optimizer(0.001, total_loss)
saver = tf.train.Saver(tf.global_variables())
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
global pointer
epoch = 0
while True:
print("start-----"+str(epoch))
pointer = 0
for batch in range(num_batch):
startTime = datetime.datetime.now()
batch_x, batch_y = get_next_batch(data_set, batch_size)
_, loss_value = sess.run([train_op, total_loss], feed_dict={X:batch_x, Y:batch_y})
print(epoch, batch, loss_value)
endTime = datetime.datetime.now()
print ("一次batch時(shí)間訓(xùn)練:"+str(endTime - startTime))
saver.save(sess, './age.ckpt' if AGE == True else './sex.ckpt')
epoch += 1
print("end-----"+str(epoch))
training()
"""
# predict gender or age for a single image
# batch_size must be set to 1 above
def detect_age_or_sex(image_path):
    logits = conv_net(lables_size, X)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, './age.ckpt' if AGE == True else './sex.ckpt')
        softmax_output = tf.nn.softmax(logits)
        res = sess.run(softmax_output, feed_dict={X: [resize_image(image_path)]})
        res = np.argmax(res)
        if AGE == True:
            return age_table[res]
        else:
            return sex_table[res]
print(detect_age_or_sex("1.jpg"))