Tensorflow Object Detection API is an open-source framework officially released by TensorFlow and built on top of TensorFlow that makes it easy to construct, train, and deploy object detection models. Using the TensorFlow Slim framework, the TensorFlow team has implemented many of the strongest deep convolutional network architectures proposed in recent years.
Models available in the Tensorflow Object Detection API:
- Single Shot Multibox Detector (SSD) with MobileNet
- SSD with Inception V2
- Region-Based Fully Convolutional Networks (R-FCN) with Resnet 101
- Faster RCNN with Resnet 101
- Faster RCNN with Inception Resnet v2
Github:https://github.com/tensorflow/models/tree/master/object_detection
This article walks through running the framework on Windows. Previously, to use one of these convolutional models you had to compile the specific Caffe fork designated by its author, and different models often required different Caffe versions; implementations built on other deep learning frameworks also vary in usability and efficiency depending on the author's skill. The TOD API instead provides a standardized way of writing detection models on top of TensorFlow, which both makes the models easy to use and serves as a template for implementing new ones.
Environment
- Windows 10
- Python 3.6
- Tensorflow-gpu 1.2
- CUDA Toolkit 8 and cuDNN v5
First, install TensorFlow; the latest version at the time of writing is 1.2. On Python 3.5+ this is very simple: there is no elaborate build process, just install it with pip and all of its dependencies are pulled in automatically.
# For CPU
pip install tensorflow
# For GPU
pip install tensorflow-gpu
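To confirm that the installation works (and, for the GPU build, that CUDA 8 and cuDNN v5 are picked up), a minimal sanity check is to import TensorFlow and run a trivial graph. This snippet is only a quick check and not part of the official setup:
# Quick sanity check: print the installed version and run one op in a session.
import tensorflow as tf

print(tf.__version__)  # expected to print 1.2.x
with tf.Session() as sess:
    print(sess.run(tf.constant('Hello, TensorFlow')))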
The official documentation also requires the following packages, which we install with pip as well.
pip install pillow
pip install lxml
pip install jupyter
pip install matplotlib
The Tensorflow Object Detection API uses Protobufs to configure models and training parameters, so the Protobuf libraries must be compiled before the framework can be used. On Linux protobuf can be installed with apt-get; on Windows we can simply download a prebuilt release. Here we pick protoc-3.3.0-win32.zip from the download list.
Github:https://github.com/google/protobuf/releases
Add the bin folder to the PATH environment variable, then run the protoc command from CMD; protoc will complain that an input file is required, which confirms it is on the path.
Next, switch to the models directory and compile the .proto files with protoc:
# From tensorflow/models/
protoc object_detection/protos/*.proto --python_out=.
You can see that the .proto files have now been compiled into .py files.
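A quick way to verify the compilation is to try importing one of the generated modules from the models directory. string_int_label_map.proto is one of the stock .proto files shipped with the API, so the check below should succeed, but treat the exact module name as an assumption:
# Run from the tensorflow/models directory.
# An ImportError here means protoc has not generated the _pb2.py files yet.
from object_detection.protos import string_int_label_map_pb2
print(string_int_label_map_pb2.StringIntLabelMap)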
The repository ships with an object_detection_tutorial.ipynb demo, which automatically downloads and runs the smallest and fastest model, Single Shot Multibox Detector (SSD) with MobileNet. The detection results look like this:
To make the detector easier to use inside a project, we rewrote the demo as a plain Python file. The pretrained models can be downloaded from the address below; every model archive contains a frozen_inference_graph.pb file. The code and the running results follow:
Tensorflow detection model:
https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md
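Before running the script below, download one of the archives from the model zoo and extract its frozen_inference_graph.pb next to the script. This can also be done programmatically, roughly the way the official demo does it; the model name and download URL here are examples taken from the model zoo at the time of writing and may change:
# Sketch: download a model archive and extract the frozen inference graph.
import tarfile
import urllib.request

MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

urllib.request.urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
with tarfile.open(MODEL_FILE) as tar:
    tar.extractall()  # the frozen graph ends up in MODEL_NAME/frozen_inference_graph.pb
After extraction, copy MODEL_NAME/frozen_inference_graph.pb next to the detection script (or adjust PATH_TO_CKPT accordingly).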
# coding:utf8
import os
import sys
import cv2
import numpy as np
import tensorflow as tf
sys.path.append("..")
from utils import label_map_util
from utils import visualization_utils as vis_util
class TOD(object):
    def __init__(self):
        # Path to frozen detection graph. This is the actual model that is used for the object detection.
        self.PATH_TO_CKPT = 'frozen_inference_graph.pb'
        # List of the strings that is used to add correct label for each box.
        self.PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
        self.NUM_CLASSES = 90
        self.detection_graph = self._load_model()
        self.category_index = self._load_label_map()

    def _load_model(self):
        # Load the frozen GraphDef from disk and import it into a new graph.
        detection_graph = tf.Graph()
        with detection_graph.as_default():
            od_graph_def = tf.GraphDef()
            with tf.gfile.GFile(self.PATH_TO_CKPT, 'rb') as fid:
                serialized_graph = fid.read()
                od_graph_def.ParseFromString(serialized_graph)
                tf.import_graph_def(od_graph_def, name='')
        return detection_graph

    def _load_label_map(self):
        # Map the numeric class ids predicted by the model to human-readable names.
        label_map = label_map_util.load_labelmap(self.PATH_TO_LABELS)
        categories = label_map_util.convert_label_map_to_categories(
            label_map,
            max_num_classes=self.NUM_CLASSES,
            use_display_name=True)
        category_index = label_map_util.create_category_index(categories)
        return category_index

    def detect(self, image):
        with self.detection_graph.as_default():
            with tf.Session(graph=self.detection_graph) as sess:
                # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
                image_np_expanded = np.expand_dims(image, axis=0)
                image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
                # Each box represents a part of the image where a particular object was detected.
                boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
                # Each score represents the level of confidence for each of the objects.
                # Score is shown on the result image, together with the class label.
                scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
                classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
                num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')
                # Actual detection.
                (boxes, scores, classes, num_detections) = sess.run(
                    [boxes, scores, classes, num_detections],
                    feed_dict={image_tensor: image_np_expanded})
                # Visualization of the results of a detection (draws boxes and labels in place).
                vis_util.visualize_boxes_and_labels_on_image_array(
                    image,
                    np.squeeze(boxes),
                    np.squeeze(classes).astype(np.int32),
                    np.squeeze(scores),
                    self.category_index,
                    use_normalized_coordinates=True,
                    line_thickness=8)

        # Show the annotated image until Esc is pressed.
        while True:
            cv2.namedWindow("detection", cv2.WINDOW_NORMAL)
            cv2.imshow("detection", image)
            if cv2.waitKey(110) & 0xff == 27:
                break


if __name__ == '__main__':
    image = cv2.imread('dog.jpg')
    detector = TOD()
    detector.detect(image)
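Since the script imports utils from the Object Detection API via sys.path, it is assumed to live inside the object_detection directory, with the frozen graph and test image placed next to it; the label map already ships in object_detection/data. This layout is inferred from the hard-coded paths rather than stated explicitly, so adjust PATH_TO_CKPT and PATH_TO_LABELS if your files live elsewhere:
models/object_detection/
├── tod_demo.py                 # the script above (file name is arbitrary)
├── frozen_inference_graph.pb   # extracted from the downloaded model archive
├── dog.jpg                     # any test image
├── data/
│   └── mscoco_label_map.pbtxt  # shipped with the API
└── utils/                      # shipped with the API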