Background
Recently, the iOS tech-sharing sessions at my company have all touched on Vision in one way or another, so I decided to study it. After a quick look, I found that Vision is a very powerful framework.
Use cases for Vision
- Face detection
- Image alignment analysis
- QR code / barcode detection
- Text detection
- Object tracking
Speaking of recognition and detection, besides Vision, Apple provides two other frameworks that can do this:
CoreImage
Core Image, one of the powerful libraries added in iOS 5, provides near-real-time processing of still images and video frames.
For details, see "Static face detection on iOS with CIDetector".
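As a point of comparison, a minimal CIDetector face-detection sketch might look like this (the function name and the UIImage input are illustrative, not from the original article):

```swift
import CoreImage
import UIKit

// Detect faces in a UIImage with Core Image's CIDetector.
func detectFaces(in image: UIImage) -> [CIFaceFeature] {
    guard let ciImage = CIImage(image: image) else { return [] }
    // CIDetectorAccuracyHigh trades speed for precision.
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    return detector?.features(in: ciImage) as? [CIFaceFeature] ?? []
}
```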
Apple also provides a performance comparison chart:
From Apple's chart we can see that, compared with the existing Core Image and AVFoundation frameworks, Vision has the best accuracy and supports the same number of platforms as Core Image, but it needs more processing time and power. Vision is also a framework Apple has wrapped for us; compared with a lower-level library like Core Image, its API is far friendlier and cuts down the amount of code we have to write.
Key members of the Vision architecture
1. RequestHandler
- VNImageRequestHandler
An object that processes one or more image analysis requests pertaining to a single image.
- VNSequenceRequestHandler
An object that processes image analysis requests for each frame in a sequence.
2. VNRequest
- VNImageBasedRequest
The abstract superclass for image analysis requests that focus on a specific part of an image.
3. VNObservation
The abstract superclass for analysis results.
How Vision is used
1. Create the request(s) you need and a matching RequestHandler.
2. The RequestHandler holds the image data to be analyzed and dispatches the results to each request's completionHandler.
3. The request's results property then yields an array of Observations.
4. Which Observation subclass that array contains depends on the type of request.
5. Each Observation has properties such as boundingBox that store the coordinates of the detected feature.
6. Once we have the coordinates, we can do whatever we like with them.
Roughly, the flow can be summarized in a diagram (not reproduced here).
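The steps above can be sketched end to end; this is a minimal example, assuming cgImage holds the image to analyze:

```swift
import Vision

func detectFaceRects(in cgImage: CGImage) {
    // 1. Create a request; its completionHandler receives the results.
    let request = VNDetectFaceRectanglesRequest { request, error in
        // 3./4. results is an array of observations; for this request
        // they are VNFaceObservations.
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // 5. boundingBox is in normalized coordinates (origin bottom-left).
            print(face.boundingBox)
        }
    }
    // 2. The handler holds the image and dispatches results to the request.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```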
Suppose we need to mark up the faces, rectangles, QR codes, and text in a single image (see the official demo).
The recognition methods provided by VNImageRequestHandler and VNSequenceRequestHandler each accept a [VNRequest] array:
//VNImageRequestHandler
public init(cvPixelBuffer pixelBuffer: CVPixelBuffer, options: [VNImageOption : Any] = [:])
public init(cvPixelBuffer pixelBuffer: CVPixelBuffer, orientation: CGImagePropertyOrientation, options: [VNImageOption : Any] = [:])
public init(cgImage image: CGImage, options: [VNImageOption : Any] = [:])
public init(cgImage image: CGImage, orientation: CGImagePropertyOrientation, options: [VNImageOption : Any] = [:])
public init(ciImage image: CIImage, options: [VNImageOption : Any] = [:])
public init(ciImage image: CIImage, orientation: CGImagePropertyOrientation, options: [VNImageOption : Any] = [:])
public init(url imageURL: URL, options: [VNImageOption : Any] = [:])
public init(url imageURL: URL, orientation: CGImagePropertyOrientation, options: [VNImageOption : Any] = [:])
public init(data imageData: Data, options: [VNImageOption : Any] = [:])
public init(data imageData: Data, orientation: CGImagePropertyOrientation, options: [VNImageOption : Any] = [:])
open func perform(_ requests: [VNRequest]) throws
//VNSequenceRequestHandler
open func perform(_ requests: [VNRequest], on pixelBuffer: CVPixelBuffer) throws
open func perform(_ requests: [VNRequest], on pixelBuffer: CVPixelBuffer, orientation: CGImagePropertyOrientation) throws
open func perform(_ requests: [VNRequest], on image: CGImage) throws
open func perform(_ requests: [VNRequest], on image: CGImage, orientation: CGImagePropertyOrientation) throws
open func perform(_ requests: [VNRequest], on image: CIImage) throws
open func perform(_ requests: [VNRequest], on image: CIImage, orientation: CGImagePropertyOrientation) throws
open func perform(_ requests: [VNRequest], onImageURL imageURL: URL) throws
open func perform(_ requests: [VNRequest], onImageURL imageURL: URL, orientation: CGImagePropertyOrientation) throws
open func perform(_ requests: [VNRequest], onImageData imageData: Data) throws
open func perform(_ requests: [VNRequest], onImageData imageData: Data, orientation: CGImagePropertyOrientation) throws
So we create all our requests before querying Vision, bundle them into a request array, and submit that array in a single call; Vision runs each request and executes its completion handler on its own thread. With Core Image we would instead have to create four CIDetector instances, with the detector type set to CIDetectorTypeFace, CIDetectorTypeRectangle, CIDetectorTypeQRCode, and CIDetectorTypeText respectively, which amounts to processing the same image four times and wastes resources.
The signatures above also show that Vision detection is orientation-aware: if the orientation we pass in does not match the actual orientation of the image, Vision may fail to detect the features we want.
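Because of this, a common helper maps UIKit's UIImage.Orientation onto the CGImagePropertyOrientation that Vision expects; a sketch of that mapping:

```swift
import UIKit
import ImageIO

extension CGImagePropertyOrientation {
    // Map UIKit's orientation onto the EXIF-style orientation Vision expects.
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up:            self = .up
        case .down:          self = .down
        case .left:          self = .left
        case .right:         self = .right
        case .upMirrored:    self = .upMirrored
        case .downMirrored:  self = .downMirrored
        case .leftMirrored:  self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default:    self = .up
        }
    }
}
```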
Also, from the initializers of these two classes, we can see the image data types Vision supports:
- CVPixelBuffer
- CGImage
- CIImage
- URL
- Data
For face detection, Core Image's CIFaceFeature can recognize the following:
open class CIFaceFeature : CIFeature {
open var bounds: CGRect { get }
open var hasLeftEyePosition: Bool { get }
open var leftEyePosition: CGPoint { get }
open var hasRightEyePosition: Bool { get }
open var rightEyePosition: CGPoint { get }
open var hasMouthPosition: Bool { get }
open var mouthPosition: CGPoint { get }
open var hasTrackingID: Bool { get }
open var trackingID: Int32 { get }
open var hasTrackingFrameCount: Bool { get }
open var trackingFrameCount: Int32 { get }
open var hasFaceAngle: Bool { get }
open var faceAngle: Float { get }
open var hasSmile: Bool { get }
open var leftEyeClosed: Bool { get }
open var rightEyeClosed: Bool { get }
}
Vision's VNFaceObservation can recognize:
open var boundingBox: CGRect { get }
open var landmarks: VNFaceLandmarks2D? { get }
open var roll: NSNumber? { get }
open var yaw: NSNumber? { get }
open class VNFaceLandmarks2D : VNFaceLandmarks {
open var allPoints: VNFaceLandmarkRegion2D? { get }
open var faceContour: VNFaceLandmarkRegion2D? { get } // points tracing the face outline from the left cheek, over the chin, to the right cheek
open var leftEye: VNFaceLandmarkRegion2D? { get } // left eye outline
open var rightEye: VNFaceLandmarkRegion2D? { get } // right eye outline
open var leftEyebrow: VNFaceLandmarkRegion2D? { get } // left eyebrow outline
open var rightEyebrow: VNFaceLandmarkRegion2D? { get } // right eyebrow outline
open var nose: VNFaceLandmarkRegion2D? { get } // nose outline
open var noseCrest: VNFaceLandmarkRegion2D? { get } // points tracing the center crest of the nose
open var medianLine: VNFaceLandmarkRegion2D? { get } // points tracing the midline of the face
open var outerLips: VNFaceLandmarkRegion2D? { get } // outer lip outline
open var innerLips: VNFaceLandmarkRegion2D? { get } // inner lip outline
open var leftPupil: VNFaceLandmarkRegion2D? { get } // left pupil outline
open var rightPupil: VNFaceLandmarkRegion2D? { get } // right pupil outline
}
open class VNFaceLandmarkRegion2D : VNFaceLandmarkRegion {
open var __normalizedPoints: UnsafePointer<CGPoint> { get } // the region's landmark points, in normalized coordinates
open func __pointsInImage(imageSize: CGSize) -> UnsafePointer<CGPoint> // the region's landmark points converted into image coordinates for the given image size
}
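From Swift these are normally read through the refined pointsInImage(imageSize:) API, which returns [CGPoint]. As a sketch, tracing one landmark region as a UIBezierPath in UIKit coordinates (note that Vision's origin is the bottom-left corner, so y is flipped for drawing):

```swift
import Vision
import UIKit

// Build a UIBezierPath tracing one landmark region (e.g. faceContour)
// in UIKit image coordinates.
func path(for region: VNFaceLandmarkRegion2D, imageSize: CGSize) -> UIBezierPath {
    let path = UIBezierPath()
    // pointsInImage(imageSize:) does the normalized-to-image conversion for us.
    let points = region.pointsInImage(imageSize: imageSize)
    for (index, point) in points.enumerated() {
        // Flip y: Vision uses a bottom-left origin, UIKit a top-left origin.
        let flipped = CGPoint(x: point.x, y: imageSize.height - point.y)
        if index == 0 {
            path.move(to: flipped)
        } else {
            path.addLine(to: flipped)
        }
    }
    return path
}
```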
Example: detect faces in an image and draw a rectangle to mark each face.
lazy var faceDetectionRequest = VNDetectFaceRectanglesRequest(completionHandler: self.handleDetectedFaces)
fileprivate func handleDetectedFaces(request: VNRequest?, error: Error?) {
if let nsError = error as NSError? {
self.presentAlert("Face Detection Error", error: nsError)
return
}
// Perform drawing on the main thread.
DispatchQueue.main.async {
guard let drawLayer = self.pathLayer,
let results = request?.results as? [VNFaceObservation] else {
return
}
self.draw(faces: results, onImageWithBounds: drawLayer.bounds)
drawLayer.setNeedsDisplay()
}
}
Detect faces and draw curves marking the facial landmarks (face contour, eyes, eyebrows, and so on):
lazy var faceLandmarkRequest = VNDetectFaceLandmarksRequest(completionHandler: self.handleDetectedFaceLandmarks)
fileprivate func handleDetectedFaceLandmarks(request: VNRequest?, error: Error?) {
if let nsError = error as NSError? {
self.presentAlert("Face Landmark Detection Error", error: nsError)
return
}
// Perform drawing on the main thread.
DispatchQueue.main.async {
guard let drawLayer = self.pathLayer,
let results = request?.results as? [VNFaceObservation] else {
return
}
self.drawFeatures(onFaces: results, onImageWithBounds: drawLayer.bounds)
drawLayer.setNeedsDisplay()
}
}
(A screenshot of the result would be attached here.)
As the screenshot shows, both the face contour in the image and the text below it are detected.
Detecting text and QR codes works much the same way, so the code is omitted; just note the following:
1. For text observations, locate individual characters by inspecting the characterBoxes property:
// Tell Vision to report bounding box around each character.
textDetectRequest.reportCharacterBoxes = true
2泳挥、對于條形碼觀察,symbologies包含屬性中的有效負(fù)載信息
// Restrict detection to most common symbologies.
barcodeDetectRequest.symbologies = [.QR, .Aztec, .UPCE]
3至朗、對于矩形觀察,通過設(shè)置一些屬性可以起到過濾檢測結(jié)果的需求:
// Customize & configure the request to detect only certain rectangles.
rectDetectRequest.maximumObservations = 8 // Vision currently supports up to 16.
rectDetectRequest.minimumConfidence = 0.6 // Be confident.
rectDetectRequest.minimumAspectRatio = 0.3 // height / width
4屉符、對于地平線角度的觀察,demo里面沒有,但是也比較容易理解.
lazy var horizonRequest = VNDetectHorizonRequest(completionHandler: self.handleDetectedHorizon)
fileprivate func handleDetectedHorizon(request: VNRequest?, error: Error?) {
if let nsError = error as NSError? {
self.presentAlert("Horizon Detection Error", error: nsError)
return
}
guard let results = request?.results as? [VNHorizonObservation] else {
return
}
results.forEach({ observation in
print(observation.angle) // the angle of the observed horizon
print(observation.transform) // the transform to apply to the detected horizon (e.g. to straighten the image)
})
}
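As a sketch of point 2 above, a barcode request that restricts the symbologies and reads each payload (a minimal example, not the demo's code):

```swift
import Vision

// Detect barcodes and print each decoded payload.
let barcodeRequest = VNDetectBarcodesRequest { request, error in
    guard let barcodes = request.results as? [VNBarcodeObservation] else { return }
    for barcode in barcodes {
        // symbology says which kind of code was found;
        // payloadStringValue carries the decoded content (if any).
        print(barcode.symbology, barcode.payloadStringValue ?? "<no payload>")
    }
}
// Restrict detection to QR codes only.
barcodeRequest.symbologies = [.QR]
```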
After creating all the requests, we add them to an array:
/// - Tag: CreateRequests
fileprivate func createVisionRequests() -> [VNRequest] {
// Create an array to collect all desired requests.
var requests: [VNRequest] = []
// Create & include a request if and only if switch is ON.
if self.rectSwitch.isOn {
requests.append(self.rectangleDetectionRequest)
}
if self.faceSwitch.isOn {
// Break rectangle & face landmark detection into 2 stages to have more fluid feedback in UI.
requests.append(self.faceDetectionRequest)
requests.append(self.faceLandmarkRequest)
}
if self.textSwitch.isOn {
requests.append(self.textDetectionRequest)
}
if self.barcodeSwitch.isOn {
requests.append(self.barcodeDetectionRequest)
}
requests.append(self.horizonRequest)
// Return grouped requests as a single array.
return requests
}
Then call perform to run the detection. Note that the orientation we pass in must match the actual orientation of the image (the demo code handles this). Also, because detection is resource-intensive, use a background queue so the main queue is not blocked while it runs.
Back in the completionHandler we wrote above, once we have the VNObservations we refresh and redraw the UI on the main thread.
/// - Tag: PerformRequests
fileprivate func performVisionRequest(image: CGImage, orientation: CGImagePropertyOrientation) {
// Fetch desired requests based on switch status.
let requests = createVisionRequests()
// Create a request handler.
let imageRequestHandler = VNImageRequestHandler(cgImage: image,
orientation: orientation,
options: [:])
// Send the requests to the request handler.
DispatchQueue.global(qos: .userInitiated).async {
do {
try imageRequestHandler.perform(requests)
} catch let error as NSError {
print("Failed to perform image request: \(error)")
self.presentAlert("Image Request Failed", error: error)
return
}
}
}
The material and demo above all deal with detecting objects in still images; another important capability of Vision is real-time detection, for example:
tracking the user's face in real time (demo link)
The rough implementation:
1. Configure the camera to capture video.
2. Detect the faces in the video and extract the corresponding features.
One thing to note: VNImageRequestHandler can detect objects in a static frame, but it cannot carry information from one frame to the next, so for real-time tracking we need VNSequenceRequestHandler, and for the request we need VNTrackObjectRequest.
open class VNTrackObjectRequest : VNTrackingRequest {
// Creates a new object-tracking request from a detected-object observation.
public init(detectedObjectObservation observation: VNDetectedObjectObservation)
public init(detectedObjectObservation observation: VNDetectedObjectObservation, completionHandler: VNRequestCompletionHandler? = nil)
}
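A sketch of seeding tracking requests from detected faces (the function name is illustrative):

```swift
import Vision

// Once faces have been detected, promote each one into a tracking request.
func makeTrackingRequests(for faces: [VNFaceObservation]) -> [VNTrackObjectRequest] {
    return faces.map { face in
        // VNFaceObservation is a VNDetectedObjectObservation subclass,
        // so it can seed the tracker directly.
        let request = VNTrackObjectRequest(detectedObjectObservation: face)
        // .accurate favors precision over speed; .fast is the alternative.
        request.trackingLevel = .accurate
        return request
    }
}
```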
The most involved part of the whole pipeline is the processing after video capture. Once we get a sampleBuffer from the camera, if the detector has not yet found a face, we create a VNImageRequestHandler to detect one; as soon as a face is detected, we create a VNTrackObjectRequest to track it.
The core code:
guard let requests = self.trackingRequests, !requests.isEmpty else {
// If no face has been detected yet, create a VNImageRequestHandler to detect one.
let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
orientation: exifOrientation,
options: requestHandlerOptions)
do {
guard let detectRequests = self.detectionRequests else {
return
}
try imageRequestHandler.perform(detectRequests)
} catch let error as NSError {
NSLog("Failed to perform FaceRectangleRequest: %@", error)
}
return
}
// Once a face is detected, create a VNTrackObjectRequest to track it.
do {
try self.sequenceRequestHandler.perform(requests,
on: pixelBuffer,
orientation: exifOrientation)
} catch let error as NSError {
NSLog("Failed to perform SequenceRequest: %@", error)
}
var newTrackingRequests = [VNTrackObjectRequest]()
//... (a fair amount of code omitted here)
do {
try imageRequestHandler.perform(faceLandmarkRequests)
} catch let error as NSError {
NSLog("Failed to perform FaceLandmarkRequest: %@", error)
}
Beyond this, Vision can also be combined with Core ML for classification requests, image labeling, and more; all in all, it is a framework well worth learning.
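As a hedged sketch of that Core ML integration, wrapping an arbitrary compiled model in a VNCoreMLRequest (the model parameter stands in for any MLModel; nothing here comes from the demo):

```swift
import Vision
import CoreML

// Wrap a Core ML model in a Vision classification request.
func makeClassificationRequest(model: MLModel) throws -> VNCoreMLRequest {
    let visionModel = try VNCoreMLModel(for: model)
    let request = VNCoreMLRequest(model: visionModel) { request, error in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        // Print the top label and its confidence.
        if let best = results.first {
            print(best.identifier, best.confidence)
        }
    }
    // Center-crop and scale the image to the model's expected input size.
    request.imageCropAndScaleOption = .centerCrop
    return request
}
```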
Interested readers can download the demo and play with it.
Another framework that is easy to confuse with Vision is VisionKit: it uses the iOS camera to scan documents the way the Notes app captures them. At the time of writing it is still in beta.
Learning resources:
"The Vision image recognition framework in Swift"
"iOS black tech: dynamic face recognition with AVFoundation (part 2)"
"AV Foundation framework features on iOS 8 and later: AVCaptureDevice"