The face detection API can not only detect faces; it can also pick out specific facial details, such as a smile or even a blink.
Setting up the starter project
The original tutorial ships with a prepared starter project; here I build one from scratch instead.
- Create a new project named Detector.
- Delete the default View Controller Scene in Interface Builder.
- Drag a `UITabBarController` into IB, which produces three scenes. Check `Is Initial View Controller` on the `UITabBarController` so it becomes the initial controller.
- Change the title of Item 1 and of its Bar Item to Photo, and set its `Class` to `ViewController`.
- Add a few pictures of people to Assets.
- Add an Image View to the Photo scene, set its `Content Mode` to Aspect Fit, and pick one of the images. Add the matching `@IBOutlet` in `ViewController`:
@IBOutlet var personPic: UIImageView!
- Select Item 2 and choose Editor > Embed In > Navigation Controller from the menu bar; a new scene linked to it is created.
- Create a new `CameraViewController` class inheriting from `UIViewController`, and set the `Class` of the scene generated above to `CameraViewController`.
- Drag a `UIBarButtonItem` onto the right side of the `UINavigationItem` in the Camera View Controller scene and set its `System Item` to Camera.
- Create the outlet and action in `CameraViewController` (a minimal skeleton is sketched right after this list).
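The outlet and action names below are the ones the later `CameraViewController` code expects (`imageView` and `takePhoto`); this is just a bare skeleton to wire up in Interface Builder:
import UIKit

class CameraViewController: UIViewController {

    // Shows the photo that was just taken
    @IBOutlet var imageView: UIImageView!

    // Wired to the Camera bar button item added above
    @IBAction func takePhoto(_ sender: AnyObject) {
        // Filled in later, when the image picker is added
    }
}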
Detecting faces in a photo
- Import `CoreImage` in `ViewController.swift`:
import CoreImage
- Add a `detect()` function to `ViewController.swift`:
func detect() {
// 1
guard let personciImage = CIImage(image: personPic.image!) else {
return
}
// 2
let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
let faces = faceDetector?.features(in: personciImage)
// 3
for face in faces as! [CIFaceFeature] {
print("Found bounds are \(face.bounds)")
let faceBox = UIView(frame: face.bounds)
faceBox.layer.borderWidth = 3
faceBox.layer.borderColor = UIColor.red.cgColor
faceBox.backgroundColor = UIColor.clear
personPic.addSubview(faceBox)
// 4
if face.hasLeftEyePosition {
print("Left eye bounds are \(face.leftEyePosition)")
}
if face.hasRightEyePosition {
print("Right eye bounds are \(face.rightEyePosition)")
}
}
}
- 1 Creates a Core Image image object from the `UIImage`. `guard` works much like `if`; for the difference see section "6 guard 與 if" of 以擼代碼的形式學(xué)習(xí)Swift-5:Control Flow.
- 2 Initializes the detector, `CIDetector`. `accuracy` is a detector configuration option specifying the accuracy; because `CIDetector` can perform several kinds of detection, `CIDetectorTypeFace` selects face detection. The `features` method returns the actual detection results.
- 3 Adds a red box around every detected face.
- 4 Checks whether a left-eye (and right-eye) position was found.
- Call `detect()` in `viewDidLoad` (see the sketch below) and run the app.
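A minimal call site in `ViewController` could look like this:
override func viewDidLoad() {
    super.viewDidLoad()
    detect()
}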
The printed output, however, shows that the detected face position is off:
Found bounds are (177.0, 416.0, 380.0, 380.0)
This is because UIKit's coordinate system differs from Core Image's: Core Image puts the origin at the bottom-left corner of the image, while UIKit puts it at the top-left.
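As a concrete illustration (the image height of 1000 points is just an assumed example value, not from the tutorial), flipping the rectangle printed above works out like this:
import CoreGraphics

// Core Image: origin at the bottom-left; UIKit: origin at the top-left.
// For a rect, the flip amounts to: uikitY = imageHeight - ciY - rectHeight.
let imageHeight: CGFloat = 1000   // assumed example value
let ciBounds = CGRect(x: 177, y: 416, width: 380, height: 380)

var transform = CGAffineTransform(scaleX: 1, y: -1)
transform = transform.translatedBy(x: 0, y: -imageHeight)
let uikitBounds = ciBounds.applying(transform)
// uikitBounds.origin.y == 1000 - 416 - 380 == 204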
- To convert the Core Image coordinates to UIKit coordinates, change `detect()` to:
func detect() {
guard let personciImage = CIImage(image: personPic.image!) else {
return
}
let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
let faces = faceDetector?.features(in: personciImage)
// Convert the Core Image coordinates to UIKit coordinates
let ciImageSize = personciImage.extent.size
var transform = CGAffineTransform(scaleX: 1, y: -1)
transform = transform.translatedBy(x: 0, y: -ciImageSize.height)
for face in faces as! [CIFaceFeature] {
print("Found bounds are \(face.bounds)")
// Apply the transform to convert the coordinates
var faceViewBounds = face.bounds.applying(transform)
// Calculate the actual position and size of the rectangle in the image view
let viewSize = personPic.bounds.size
let scale = min(viewSize.width / ciImageSize.width,
viewSize.height / ciImageSize.height)
let offsetX = (viewSize.width - ciImageSize.width * scale) / 2
let offsetY = (viewSize.height - ciImageSize.height * scale) / 2
faceViewBounds = faceViewBounds.applying(CGAffineTransform(scaleX: scale, y: scale))
faceViewBounds.origin.x += offsetX
faceViewBounds.origin.y += offsetY
let faceBox = UIView(frame: faceViewBounds)
faceBox.layer.borderWidth = 3
faceBox.layer.borderColor = UIColor.red.cgColor
faceBox.backgroundColor = UIColor.clear
personPic.addSubview(faceBox)
if face.hasLeftEyePosition {
print("Left eye bounds are \(face.leftEyePosition)")
}
if face.hasRightEyePosition {
print("Right eye bounds are \(face.rightEyePosition)")
}
}
}
Running it now draws the box in the correct position.
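One small caveat: each `detect()` pass adds new box views to `personPic`, so if it ever runs more than once the old boxes would pile up. A hypothetical helper like this could clear them first:
func removeFaceBoxes() {
    // Remove the red boxes added by a previous detect() pass
    for box in personPic.subviews {
        box.removeFromSuperview()
    }
}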
Detecting faces in a photo taken with the camera
So far the face came from a photo bundled with the project; now we detect faces in a photo taken with the camera. The principle is the same, there is just one extra step: take the photo and fetch it back.
- Update the `CameraViewController` class:
// 1
class CameraViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
@IBOutlet var imageView: UIImageView!
// 2
let imagePicker = UIImagePickerController()
override func viewDidLoad() {
super.viewDidLoad()
imagePicker.delegate = self
}
@IBAction func takePhoto(_ sender: AnyObject) {
// 3
if !UIImagePickerController.isSourceTypeAvailable(.camera) {
return
}
imagePicker.allowsEditing = false
imagePicker.sourceType = .camera
present(imagePicker, animated: true, completion: nil)
}
// 4
// MARK: - UIImagePickerControllerDelegate
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
if let pickedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
imageView.contentMode = .scaleAspectFit
imageView.image = pickedImage
}
dismiss(animated: true, completion: nil)
self.detect()
}
// 5
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
dismiss(animated: true, completion: nil)
}
}
- 1 Adopts the `UIImagePickerControllerDelegate` protocol (together with `UINavigationControllerDelegate`), which provides the delegate callbacks for taking a photo.
- 2 Initializes a `UIImagePickerController`. `UIImagePickerController` is the class that manages the system interface for taking photos and videos.
- 3 Checks whether the device camera is available. (On iOS 10 and later, also add an `NSCameraUsageDescription` entry to Info.plist before presenting the camera.)
- 4 Implements the `UIImagePickerControllerDelegate` method that is called when the photo has been taken and the user confirms using it.
- 5 Another `UIImagePickerControllerDelegate` method, called when the user cancels.
- Add the `detect()` code. Unlike the version in `ViewController`, it does not draw a red box around the detected face; instead it reads the facial details and shows them in a `UIAlertController`:
func detect() {
    // Options passed to features(in:options:).
    // CIDetectorImageOrientation hints at how the camera photo is oriented (the original tutorial hard-codes 5).
    // CIDetectorSmile and CIDetectorEyeBlink must be enabled, otherwise hasSmile and the eye-closed flags are never reported.
    let imageOptions: [String: Any] = [CIDetectorImageOrientation: NSNumber(value: 5),
                                       CIDetectorSmile: true,
                                       CIDetectorEyeBlink: true]
    let personciImage = CIImage(cgImage: imageView.image!.cgImage!)
    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: personciImage, options: imageOptions)
    if let face = faces?.first as? CIFaceFeature {
        print("found bounds are \(face.bounds)")
        var message = "Found a face"
        if face.hasSmile {
            print("The face is smiling")
            message += ", it is smiling"
        }
        if face.hasMouthPosition {
            print("Found a mouth")
            message += ", it has a mouth"
        }
        if face.hasLeftEyePosition {
            print("Left eye position is \(face.leftEyePosition)")
            message += ", the left eye is at \(face.leftEyePosition)"
        }
        if face.hasRightEyePosition {
            print("Right eye position is \(face.rightEyePosition)")
            message += ", the right eye is at \(face.rightEyePosition)"
        }
        let alert = UIAlertController(title: "Face detected", message: message, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        self.present(alert, animated: true, completion: nil)
    } else {
        let alert = UIAlertController(title: "No face", message: "No face was detected", preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        self.present(alert, animated: true, completion: nil)
    }
}
Run it, and the specific facial details in the photo are reported.
`CIFaceFeature` exposes many other facial details as well:
open var hasLeftEyePosition: Bool { get }
open var leftEyePosition: CGPoint { get }
open var hasRightEyePosition: Bool { get }
open var rightEyePosition: CGPoint { get }
open var hasMouthPosition: Bool { get }
open var mouthPosition: CGPoint { get }
open var hasTrackingID: Bool { get }
open var trackingID: Int32 { get }
open var hasTrackingFrameCount: Bool { get }
open var trackingFrameCount: Int32 { get }
open var hasFaceAngle: Bool { get }
open var faceAngle: Float { get }
open var hasSmile: Bool { get }
open var leftEyeClosed: Bool { get }
open var rightEyeClosed: Bool { get }
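For example, a sketch of reading a few more of these (assuming `face` is a `CIFaceFeature` returned by `features(in:options:)`, and that the `CIDetectorSmile` and `CIDetectorEyeBlink` options were passed so the smile and eye-closed flags are filled in):
if face.hasFaceAngle {
    // Rotation of the face, in degrees
    print("Face angle: \(face.faceAngle)")
}
if face.leftEyeClosed || face.rightEyeClosed {
    print("At least one eye is closed")
}
if face.hasTrackingID {
    // Only meaningful when the detector runs on video frames with the CIDetectorTracking option
    print("Tracking ID: \(face.trackingID)")
}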