Apple machine learning blog
Apple's coremltools quantization
A deep dive into Apple's coremltools quantization: how to shrink a Core ML model without sacrificing accuracy or performance.
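To make that concrete, here is a minimal sketch of weight quantization using coremltools' quantization_utils module (available since coremltools 2.0); the model file names are placeholders:

import coremltools
from coremltools.models.neural_network import quantization_utils

# Load a full-precision (float32) Core ML model; the path is hypothetical.
model = coremltools.models.MLModel('MyModel.mlmodel')

# Quantize the weights to 8 bits using linear quantization. Weights shrink
# roughly 4x versus float32; activations still run at full precision.
quantized = quantization_utils.quantize_weights(model, nbits=8,
                                                quantization_mode='linear')
quantized.save('MyModel_8bit.mlmodel')

Smaller nbits values (4, 2, 1) shrink the model further at a growing cost in accuracy, so the quantized model should always be validated against the original.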
1. An introduction to Core ML: real-time object detection and Caffe/TensorFlow model conversion with coremltools
2. Learning Core ML: converting a Caffe model and using it in an iOS app
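The conversion step itself happens in Python. Below is a minimal sketch using coremltools' legacy Caffe converter (removed from recent coremltools releases); all file names are hypothetical:

import coremltools

# Convert a Caffe model (weights + network definition) to Core ML.
coreml_model = coremltools.converters.caffe.convert(
    ('googlenet_places205.caffemodel', 'deploy_places205.prototxt'),
    image_input_names='data',             # treat the 'data' blob as an image input
    class_labels='places205_labels.txt',  # text file with one class name per line
)
coreml_model.save('GoogLeNetPlaces.mlmodel')

Dragging the resulting .mlmodel into Xcode generates the GoogLeNetPlaces classes used in the prediction code below: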
- (NSString *)predictImageScene:(UIImage *)image {
    GoogLeNetPlaces *model = [[GoogLeNetPlaces alloc] init];
    NSError *error = nil;
    // scaleToSize: and pixelBufferFromCGImage: are UIImage helper categories
    // (not shown) that resize the image to the model's 224x224 input and wrap
    // it in a CVPixelBuffer.
    UIImage *scaledImage = [image scaleToSize:CGSizeMake(224, 224)];
    CVPixelBufferRef buffer = [image pixelBufferFromCGImage:scaledImage];
    GoogLeNetPlacesInput *input = [[GoogLeNetPlacesInput alloc] initWithSceneImage:buffer];
    GoogLeNetPlacesOutput *output = [model predictionFromFeatures:input error:&error];
    if (error) {
        NSLog(@"%@", error.localizedDescription);
        return nil;
    }
    return output.sceneLabel;
}
- (void)prediction {
    Resnet50 *resnetModel = [[Resnet50 alloc] init];
    UIImage *image = showImg.image;
    // Load the ML model through the generated class. VNCoreMLModel is just a
    // container for a Core ML model used with Vision requests; any
    // image-analysis Core ML model can be wrapped this way.
    // The standard Vision workflow: create the model, create one or more
    // requests, then create and run a request handler.
    VNCoreMLModel *vnCoreModel = [VNCoreMLModel modelForMLModel:resnetModel.model error:nil];
    // VNCoreMLRequest is an image-analysis request that uses a Core ML model
    // to do the work; its completion handler receives the request and error objects.
    VNCoreMLRequest *vnCoreMlRequest = [[VNCoreMLRequest alloc] initWithModel:vnCoreModel completionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
        // Pick the observation with the highest confidence.
        CGFloat confidence = 0.0f;
        VNClassificationObservation *tempClassification = nil;
        for (VNClassificationObservation *classification in request.results) {
            if (classification.confidence > confidence) {
                confidence = classification.confidence;
                tempClassification = classification;
            }
        }
        recognitionResultLabel.text = [NSString stringWithFormat:@"Result: %@", tempClassification.identifier];
        confidenceResult.text = [NSString stringWithFormat:@"Confidence: %@", @(tempClassification.confidence)];
    }];
    // Create and run the request handler.
    VNImageRequestHandler *vnImageRequestHandler = [[VNImageRequestHandler alloc] initWithCGImage:image.CGImage options:@{}];
    NSError *error = nil;
    [vnImageRequestHandler performRequests:@[vnCoreMlRequest] error:&error];
    if (error) {
        NSLog(@"%@", error.localizedDescription);
    }
}
3. Custom Layers in Core ML
In this post I’ll show how to convert a Keras model with a custom layer to Core ML.
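As a sketch of what that conversion looks like with the old Keras converter (coremltools 3 and earlier): the Swish activation, file names, and helper names below are illustrative assumptions, not the post's exact code.

import coremltools
from coremltools.proto import NeuralNetwork_pb2

# Called for each Keras Lambda layer so it can be replaced by a
# Core ML custom layer in the converted model.
def convert_lambda(layer):
    params = NeuralNetwork_pb2.CustomLayerParams()
    params.className = 'Swish'  # name of the iOS class that will implement the layer
    params.description = 'A custom swish activation'
    return params

coreml_model = coremltools.converters.keras.convert(
    'model_with_swish.h5',
    input_names='image',
    output_names='output',
    add_custom_layers=True,
    custom_conversion_functions={'Lambda': convert_lambda},
)
coreml_model.save('ModelWithSwish.mlmodel')

At runtime, a class named Swish conforming to the MLCustomLayer protocol must implement the layer's actual computation on the device.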
4. Real-time object detection with YOLO
In this blog post I’ll describe what it took to get the “tiny” version of YOLOv2 running on iOS using Metal Performance Shaders.
Of course I used Forge to build the iOS app. You can find the code in the YOLO folder. To try it out: download or clone Forge, open Forge.xcworkspace in Xcode 8.3 or later, and run the YOLO target on an iPhone 6 or newer.
On my iPhone 6s it takes about 0.15 seconds to process a single image. That is only about 6 FPS, barely fast enough to call it real-time.
Deep learning in practice on iOS: real-time object detection with YOLO
The recently released Caffe2 framework also runs on iOS through Metal. The Caffe2-iOS project is based on a version of Tiny YOLO; it appears to run slower than the pure Metal version, at about 0.17 seconds per frame.
YAD2K: Yet Another Darknet 2 Keras
5. How to implement fast GPU-powered CNN computation with the iOS 10 MPS framework
6. A step-by-step guide to object recognition on iPhone with Apple's Core ML
7. A peek inside Core ML
Running realtime Inception-v3 on Core ML
8. Forge: a neural network toolkit for Metal
Forge is a collection of helper code that makes it a little easier to construct deep neural networks using Apple's MPSCNN framework.
Forge: neural network toolkit for Metal
The MPS workflow
iOS 9 introduced the Metal Performance Shaders (MPS) classes alongside MetalKit; they use the GPU for efficient image computation such as Gaussian blur, image histograms, Sobel edge detection, and even deep learning.
An introduction to Metal and its basic usage
mps
inception-v3_demo
metal
Tips
Metal debugger tools: https://developer.apple.com/library/content/documentation/Miscellaneous/Conceptual/MetalProgrammingGuide/Dev-Technique/Dev-Technique.html
https://developer.apple.com/videos/play/wwdc2015/610/