Version History

Version | Date |
---|---|
V1.0 | 2017.10.29 |
Preface

Tech leaders around the world broadly agree that artificial intelligence will drive the next technological revolution, and Apple, as one of the industry's giants, is keeping pace: iOS now ships a new framework, `Core ML`. ML is short for Machine Learning, one of today's hottest technologies and the core of artificial intelligence. If you are interested, you can read my earlier articles in this series:
1. Detailed Analysis of the Core ML Framework (Part 1): A Basic Overview of Core ML
2. Detailed Analysis of the Core ML Framework (Part 2): Obtaining a Model and Integrating It into an App
A Simple Example

With `Vision` and `Core ML` you can classify images. Let's walk through a simple example, starting with the code.
1. ImageClassificationViewController.swift
```swift
import UIKit
import CoreML
import Vision
import ImageIO

class ImageClassificationViewController: UIViewController {
    // MARK: - IBOutlets

    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var cameraButton: UIBarButtonItem!
    @IBOutlet weak var classificationLabel: UILabel!

    // MARK: - Image Classification

    /// - Tag: MLModelSetup
    lazy var classificationRequest: VNCoreMLRequest = {
        do {
            /*
             Use the Swift class `MobileNet` Core ML generates from the model.
             To use a different Core ML classifier model, add it to the project
             and replace `MobileNet` with that model's generated Swift class.
             */
            let model = try VNCoreMLModel(for: MobileNet().model)

            let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processClassifications(for: request, error: error)
            })
            request.imageCropAndScaleOption = .centerCrop
            return request
        } catch {
            fatalError("Failed to load Vision ML model: \(error)")
        }
    }()

    /// - Tag: PerformRequests
    func updateClassifications(for image: UIImage) {
        classificationLabel.text = "Classifying..."

        let orientation = CGImagePropertyOrientation(image.imageOrientation)
        guard let ciImage = CIImage(image: image) else { fatalError("Unable to create \(CIImage.self) from \(image).") }

        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
            do {
                try handler.perform([self.classificationRequest])
            } catch {
                /*
                 This handler catches general image processing errors. The `classificationRequest`'s
                 completion handler `processClassifications(_:error:)` catches errors specific
                 to processing that request.
                 */
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
    }

    /// Updates the UI with the results of the classification.
    /// - Tag: ProcessClassifications
    func processClassifications(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                self.classificationLabel.text = "Unable to classify image.\n\(error!.localizedDescription)"
                return
            }
            // The `results` will always be `VNClassificationObservation`s, as specified by the Core ML model in this project.
            let classifications = results as! [VNClassificationObservation]

            if classifications.isEmpty {
                self.classificationLabel.text = "Nothing recognized."
            } else {
                // Display top classifications ranked by confidence in the UI.
                let topClassifications = classifications.prefix(2)
                let descriptions = topClassifications.map { classification in
                    // Formats the classification for display; e.g. "(0.37) cliff, drop, drop-off".
                    return String(format: " (%.2f) %@", classification.confidence, classification.identifier)
                }
                self.classificationLabel.text = "Classification:\n" + descriptions.joined(separator: "\n")
            }
        }
    }

    // MARK: - Photo Actions

    @IBAction func takePicture() {
        // Show options for the source picker only if the camera is available.
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else {
            presentPhotoPicker(sourceType: .photoLibrary)
            return
        }

        let photoSourcePicker = UIAlertController()
        let takePhoto = UIAlertAction(title: "Take Photo", style: .default) { [unowned self] _ in
            self.presentPhotoPicker(sourceType: .camera)
        }
        let choosePhoto = UIAlertAction(title: "Choose Photo", style: .default) { [unowned self] _ in
            self.presentPhotoPicker(sourceType: .photoLibrary)
        }

        photoSourcePicker.addAction(takePhoto)
        photoSourcePicker.addAction(choosePhoto)
        photoSourcePicker.addAction(UIAlertAction(title: "Cancel", style: .cancel, handler: nil))

        present(photoSourcePicker, animated: true)
    }

    func presentPhotoPicker(sourceType: UIImagePickerControllerSourceType) {
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = sourceType
        present(picker, animated: true)
    }
}

extension ImageClassificationViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    // MARK: - Handling Image Picker Selection

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String: Any]) {
        picker.dismiss(animated: true)

        // We always expect `imagePickerController(:didFinishPickingMediaWithInfo:)` to supply the original image.
        let image = info[UIImagePickerControllerOriginalImage] as! UIImage
        imageView.image = image
        updateClassifications(for: image)
    }
}
```
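Note that opening the camera on a real device also requires an `NSCameraUsageDescription` entry in the app's `Info.plist`; without it, the photo action will crash when the camera is requested. If you want to exercise the classification pipeline without the picker, a minimal sketch is to classify a bundled image at launch; the asset name `test` below is a hypothetical placeholder, and the method would be added inside `ImageClassificationViewController`:

```swift
// Hypothetical smoke test: classify a bundled image as soon as the view loads.
// Assumes an image named "test" has been added to the asset catalog.
override func viewDidLoad() {
    super.viewDidLoad()
    if let testImage = UIImage(named: "test") {
        updateClassifications(for: testImage)
    }
}
```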
2. CGImagePropertyOrientation+UIImageOrientation.swift
```swift
import UIKit
import ImageIO

extension CGImagePropertyOrientation {
    /**
     Converts a `UIImageOrientation` to a corresponding
     `CGImagePropertyOrientation`. The cases for each
     orientation are represented by different raw values.

     - Tag: ConvertOrientation
     */
    init(_ orientation: UIImageOrientation) {
        switch orientation {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        }
    }
}
```
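The explicit `switch` matters here: the two enums use the same case names but different raw values (`UIImageOrientation` counts from 0, while `CGImagePropertyOrientation` uses the EXIF orientation values starting at 1), so converting through `rawValue` would silently produce the wrong orientation. A quick illustration:

```swift
import UIKit
import ImageIO

// Same case names, different raw values, so never convert via rawValue.
print(UIImageOrientation.up.rawValue)             // 0
print(CGImagePropertyOrientation.up.rawValue)     // 1 (EXIF "top-left")
print(UIImageOrientation.right.rawValue)          // 3
print(CGImagePropertyOrientation.right.rawValue)  // 6 (EXIF "right-top")
```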
The model used here is `MobileNet.mlmodel`. Below are its description and its address on GitHub.
MobileNets are based on a streamlined architecture that have depth-wise separable convolutions to build light weight deep neural networks. Trained on ImageNet with categories such as trees, animals, food, vehicles, person etc. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications https://github.com/shicai/MobileNet-Caffe
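Once `MobileNet.mlmodel` has been added to the project and Xcode has generated the `MobileNet` class, you can sanity-check what the model expects. This is a minimal sketch using `MLModel`'s `modelDescription` property; the comments describe MobileNet's published interface (a 224x224 image input, a class label, and a label-to-probability dictionary):

```swift
import CoreML

// Inspect the generated model's declared inputs and outputs.
let description = MobileNet().model.modelDescription
print(description.inputDescriptionsByName)   // an image input (224x224 for MobileNet)
print(description.outputDescriptionsByName)  // class label plus label-to-probability dictionary
```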
I took two photos and ran them through the classifier: it correctly recognized the keyboard, but failed to recognize the roll of paper, as shown in the screenshots below.
Detailed Explanation

1. Overview

With the `Core ML` framework, you can use a trained machine learning model to classify input data. The Vision framework works with Core ML to apply classification models to images and to preprocess those images, making machine learning tasks easier and more reliable.

This sample app uses the open source MobileNet model, one of several available classification models, to identify an image using 1000 classification categories, as shown in the sample screenshots.
2. Preview the Sample App

To see this sample app in action, build and run the project, then use the buttons in the sample app's toolbar to take a photo or choose an image from your photo library. The sample app then uses Vision to apply the Core ML model to the chosen image, and shows the resulting classification labels along with numbers indicating each classification's confidence level. It displays the top two classifications, ordered by the confidence score the model assigns to each.
3. Set Up Vision with a Core ML Model

Core ML automatically generates a Swift class (here, the `MobileNet` class) that provides easy access to the ML model. To set up a Vision request using the model, create an instance of that class and use its `model` property to create a `VNCoreMLRequest` object. Use the request object's completion handler to specify a method that receives results from the model after you run the request.
```swift
// Listing 1
let model = try VNCoreMLModel(for: MobileNet().model)

let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
    self?.processClassifications(for: request, error: error)
})
request.imageCropAndScaleOption = .centerCrop
return request
```
An ML model processes input images at a fixed aspect ratio, but input images can have arbitrary aspect ratios, so Vision must scale or crop the image to fit. For best results, set the request's `imageCropAndScaleOption` property to match the image layout the model was trained on. For the available classification models, the `VNImageCropAndScaleOption.centerCrop` option is appropriate unless otherwise noted.
4. Run the Vision Request

Create a `VNImageRequestHandler` object with the image to be processed, and pass your requests to that object's `perform(_:)` method. This method runs synchronously; use a background queue so that the main queue isn't blocked while your requests execute.
```swift
// Listing 2
DispatchQueue.global(qos: .userInitiated).async {
    let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
    do {
        try handler.perform([self.classificationRequest])
    } catch {
        /*
         This handler catches general image processing errors. The `classificationRequest`'s
         completion handler `processClassifications(_:error:)` catches errors specific
         to processing that request.
         */
        print("Failed to perform classification.\n\(error.localizedDescription)")
    }
}
```
Most models are trained on images that are already oriented correctly for display. To make sure input images are handled correctly regardless of orientation, pass the image's orientation to the image request handler. (This sample app adds an initializer, `init(_:)`, to the `CGImagePropertyOrientation` type for converting from `UIImageOrientation` values.)
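Mirroring `updateClassifications(for:)` above, the conversion happens just before the handler is created; `someImage` below is a hypothetical stand-in for whatever `UIImage` you are classifying:

```swift
// Convert the UIKit orientation to the EXIF-style orientation Vision expects.
let orientation = CGImagePropertyOrientation(someImage.imageOrientation)
guard let ciImage = CIImage(image: someImage) else { fatalError("Unable to create CIImage.") }

// The handler applies the orientation before the model sees the pixels.
let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
do {
    try handler.perform([classificationRequest])
} catch {
    print("Vision request failed: \(error)")
}
```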
5. Handle Image Classification Results

The Vision request's completion handler indicates whether the request succeeded or resulted in an error. If it succeeded, the request's `results` property contains `VNClassificationObservation` objects describing the possible classifications identified by the ML model.
```swift
// Listing 3
func processClassifications(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results else {
            self.classificationLabel.text = "Unable to classify image.\n\(error!.localizedDescription)"
            return
        }
        // The `results` will always be `VNClassificationObservation`s, as specified by the Core ML model in this project.
        let classifications = results as! [VNClassificationObservation]
        // ... (the rest of the method formats and displays the top results;
        // see the full implementation in ImageClassificationViewController.swift above.)
    }
}
```
后記
未完哪廓,待續(xù)~~~