Preface
Our product currently needs object detection, so I had to integrate and use Google's TensorFlow Lite machine-learning framework. The official demo is written in Swift; the Objective-C samples are old versions, lean heavily on the C++ API, and are neither complete nor clear. Handling the output of a non-quantized (float) model is a particular headache. So, following the Swift demo, I wrote an Objective-C version, and I hope it helps anyone who is unfamiliar with the framework or simply needs an OC implementation...
I. What TensorFlow Lite can do
TensorFlow Lite is a lightweight machine-learning framework aimed mainly at mobile devices. The main features it supports are shown in the image below:
我們業(yè)務(wù)目前需求的是對(duì)象檢測(cè)功能。
1.什么是物體檢測(cè)
對(duì)于給定的圖片或者視頻流朽基,對(duì)象檢測(cè)模塊可以識(shí)別出已知的物體和該物體在圖片中的位置褒纲。例如下圖(圖片來(lái)自官網(wǎng))
識(shí)別示例 -
2.物體檢測(cè)模塊輸出
當(dāng)我們?yōu)槟P吞峁﹫D片炊汤,模型將會(huì)返回一個(gè)列表,其中包含檢測(cè)到的對(duì)象活翩,包含對(duì)象矩形框的坐標(biāo)和代表檢測(cè)可信度的分?jǐn)?shù)戳玫。坐標(biāo),輸出數(shù)據(jù)在第0個(gè)數(shù)組,會(huì)根據(jù)每個(gè)檢測(cè)到的物體返回一個(gè)[top,left,bottom,right]的float浮點(diǎn)數(shù)組。該四個(gè)數(shù)字代表了圍繞物體的一個(gè)矩形框(官方的說(shuō)法是坐標(biāo),但是實(shí)際使用是距離比例,需要自己換算成對(duì)應(yīng)的尺寸)匪补。
類別,也就是index,需要自己根據(jù)labels的定義轉(zhuǎn)換具體的類名,index返回后需要+1操作,針對(duì)的是官方的訓(xùn)練模型。
信任分?jǐn)?shù),我們使用信任分?jǐn)?shù)和所檢測(cè)到對(duì)象的坐標(biāo)來(lái)表示檢測(cè)結(jié)果烂翰。分?jǐn)?shù)反應(yīng)了被檢測(cè)到物體的可信度夯缺,范圍在 0 和 1 之間。最大值為1刽酱,數(shù)值越大可信度越高喳逛。
檢測(cè)到的數(shù)量,物體檢測(cè)模塊最多能夠在一張圖中識(shí)別和定位10個(gè)物體.所以一般返回小于10的數(shù)值
- 輸入
模塊使用單個(gè)圖片作為輸入瞧捌。理想的圖片尺寸是 300x300 像素棵里,每個(gè)像素有3個(gè)通道(紅,藍(lán)姐呐,和綠)殿怜。這將反饋給模塊一個(gè) 27000 字節(jié)( 300 x 300 x 3 )的扁平化緩存。由于該模塊經(jīng)過(guò)標(biāo)準(zhǔn)化處理曙砂,每一個(gè)字節(jié)代表了 0 到 255 之間的一個(gè)值头谜。
- 輸入
-
4.輸出
該模型輸出四個(gè)數(shù)組,分別對(duì)應(yīng)索引的 0-4鸠澈。前三個(gè)數(shù)組描述10個(gè)被檢測(cè)到的物體柱告,每個(gè)數(shù)組的最后一個(gè)元素匹配每個(gè)對(duì)象。檢測(cè)到的物體數(shù)量總是10笑陈。
官方截圖
二.實(shí)操,實(shí)實(shí)在在的操作一遍,光蹭蹭不進(jìn)去就是耍流氓
1.初始化識(shí)別器
- (void)setupInterpreter
{
NSError *error;
NSString *path = [[NSBundle mainBundle] pathForResource:@"detect" ofType:@"tflite"];
//初始化識(shí)別器,需要傳入訓(xùn)練模型的路徑,還可以傳options
self.interpreter = [[TFLInterpreter alloc] initWithModelPath:path error:&error];
if (![self.interpreter allocateTensorsWithError:&error]) {
NSLog(@"Create interpreter error: %@", error);
}
}
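If you need to tune threading, the interpreter can also be created with a TFLInterpreterOptions object. A minimal sketch (the thread count here is just an example value):
NSError *error;
NSString *path = [[NSBundle mainBundle] pathForResource:@"detect" ofType:@"tflite"];
TFLInterpreterOptions *options = [[TFLInterpreterOptions alloc] init];
options.numberOfThreads = 2;//example value, tune for your devices
self.interpreter = [[TFLInterpreter alloc] initWithModelPath:path options:options error:&error];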
2.初始化攝像頭
- (void)setupCamera
{
self.session = [[AVCaptureSession alloc] init];
[self.session setSessionPreset:AVCaptureSessionPresetHigh];//High-quality preset, usually 16:9
// [self.session setSessionPreset:AVCaptureSessionPreset640x480];//Better to set this if you need 4:3, so you don't have to crop the frames yourself
self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];//default camera
// self.inputDevice = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionBack];//specify the wide-angle camera and lens position
NSError *error;
self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice error:&error];
if ([self.session canAddInput:self.deviceInput]) {
[self.session addInput:self.deviceInput];
}
self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
// [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];//scale while preserving the aspect ratio
CALayer *rootLayer = [[self view] layer];
[rootLayer setMasksToBounds:YES];
CGRect frame = self.view.frame;
[self.previewLayer setFrame:frame];
// [self.previewLayer setFrame:CGRectMake(0, 0, frame.size.width, frame.size.width * 4 / 3)];
[rootLayer insertSublayer:self.previewLayer atIndex:0];
//添加繪制圖層
self.overlayView = [[OverlayView alloc] initWithFrame:self.previewLayer.bounds];
[self.view addSubview:self.overlayView];
self.overlayView.clearsContextBeforeDrawing = YES;//clear the drawing context before each redraw
AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
NSDictionary *rgbOutputSettings = [NSDictionary
dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[videoDataOutput setVideoSettings:rgbOutputSettings];
[videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
dispatch_queue_t videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
if ([self.session canAddOutput:videoDataOutput])
[self.session addOutput:videoDataOutput];
// [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];
[videoDataOutput connectionWithMediaType:AVMediaTypeVideo].videoOrientation = AVCaptureVideoOrientationPortrait;//set the video orientation
[self.session startRunning];
}
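The snippet assumes camera permission has already been granted (and that NSCameraUsageDescription is present in the Info.plist). A minimal sketch of requesting access before starting the session:
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
dispatch_async(dispatch_get_main_queue(), ^{
if (granted) {
[self setupCamera];//only start the capture session once access is granted
} else {
NSLog(@"Camera permission denied");
}
});
}];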
3.在視頻流回調(diào)代理里面,執(zhí)行我們的一系列旋轉(zhuǎn),跳躍,我閉著眼的操作!弱弱的說(shuō)一句,內(nèi)存問(wèn)題還有壓縮變形變換等問(wèn)題已經(jīng)讓我本來(lái)就不富裕的頭發(fā)雪上加霜,掉發(fā)嚴(yán)重了許多,更慘的是白發(fā)叢生...
#pragma mark------ AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
NSTimeInterval currentInterval = [[NSDate date] timeIntervalSince1970] * 1000;
if (currentInterval - self.previousTime < self.delayBetweenMs) {
return;
}
/*
if (connection.videoOrientation != self.videoOrientation) {
//切換鏡頭方向,如果是官方訓(xùn)練模型不必要切換,可以注釋掉
connection.videoOrientation = self.videoOrientation;
}
*/
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t imageWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t imageHeight = CVPixelBufferGetHeight(pixelBuffer);
//如果需要旋轉(zhuǎn)識(shí)別圖像,可以用下面的方法,但是在iOS13.4上,內(nèi)存釋放有問(wèn)題
/*
CVPixelBufferRef rotatePixel = pixelBuffer;
switch (self.videoOrientation) {
case 1:
rotatePixel = [self rotateBuffer:pixelBuffer withConstant:0];
break;
case 2:
rotatePixel = [self rotateBuffer:pixelBuffer withConstant:2];
break;
case 3:
rotatePixel = [self rotateBuffer:pixelBuffer withConstant:1];
break;
case 4:
rotatePixel = [self rotateBuffer:pixelBuffer withConstant:3];
break;
default:
break;
}
*/
//如果需要裁剪并且縮放識(shí)別圖像,可以用下面方法,需要自己設(shè)定裁剪范圍,并且計(jì)算仿射變換
/*
CGRect videoRect = CGRectMake(0, 0, imageWidth, imageHeight);
CGSize scaledSize = CGSizeMake(300, 300);
// Create a rectangle that meets the output size's aspect ratio, centered in the original video frame
CGSize cropSize = CGSizeZero;
if (imageWidth > imageHeight) {
cropSize = CGSizeMake(imageWidth, imageWidth * 3 /4);
}
else
{
cropSize = CGSizeMake(imageWidth, imageWidth * 4 /3);
}
CGRect centerCroppingRect = AVMakeRectWithAspectRatioInsideRect(cropSize, videoRect);
CVPixelBufferRef croppedAndScaled = [self createCroppedPixelBufferRef:pixelBuffer cropRect:centerCroppingRect scaleSize:scaledSize context:self.context];
*/
//這里用的官方的訓(xùn)練模型,識(shí)別大小為300 * 300,所以直接縮放
CVPixelBufferRef scaledPixelBuffer = [self resized:CGSizeMake(300, 300) cvpixelBuffer:pixelBuffer];
//如果想看看縮放之后的圖像是否滿足要求,可以保存到相冊(cè)
/*
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
UIImage *image = [self imageFromSampleBuffer:scaledPixelBuffer];
UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), (__bridge void *)self);
});
*/
//TensorFlow 輸入和輸出數(shù)據(jù)處理
NSError *error;
TFLTensor *inputTensor = [self.interpreter inputTensorAtIndex:0 error:&error];
NSData *imageData = [self rgbDataFromBuffer:scaledPixelBuffer isModelQuantized:inputTensor.dataType == TFLTensorDataTypeUInt8];
[inputTensor copyData:imageData error:&error];
[self.interpreter invokeWithError:&error];
if (error) {
NSLog(@"Error++: %@", error);
}
//輸出坐標(biāo),按照top,left,bottom,right的占比
TFLTensor *outputTensor = [self.interpreter outputTensorAtIndex:0 error:&error];
//輸出index
TFLTensor *outputClasses = [self.interpreter outputTensorAtIndex:1 error:nil];
//輸出分?jǐn)?shù)
TFLTensor *outputScores = [self.interpreter outputTensorAtIndex:2 error:nil];
//輸出識(shí)別物體個(gè)數(shù)
TFLTensor *outputCount = [self.interpreter outputTensorAtIndex:3 error:nil];
//格式化輸出的數(shù)據(jù)
NSArray<HFInference *> *inferences = [self formatTensorResultWith:[self transTFLTensorOutputData:outputTensor] indexs:[self transTFLTensorOutputData:outputClasses] scores:[self transTFLTensorOutputData:outputScores] count:[[self transTFLTensorOutputData:outputCount].firstObject integerValue] width:imageWidth height:imageHeight];
NSLog(@"+++++++++++++");
for (HFInference *inference in inferences) {
NSLog(@"rect: %@ index %ld score: %f className: %@\n",NSStringFromCGRect(inference.boundingRect),inference.index,inference.confidence,inference.className);
}
NSLog(@"+++++++++++++");
//切換到主線程繪制
dispatch_async(dispatch_get_main_queue(), ^{
[self drawOverLayWithInferences:inferences width:imageWidth height:imageHeight];
});
}
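Note that the throttle at the top of the callback reads self.previousTime but never updates it in the snippet above. In practice you would refresh it once a frame has actually been processed, for example (a sketch, assuming the property names used above):
//Record when the last frame was processed so the delayBetweenMs throttle takes effect
self.previousTime = [[NSDate date] timeIntervalSince1970] * 1000;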
4 .投喂給識(shí)別器的數(shù)據(jù)處理
因?yàn)楣俜降挠?xùn)練模型只能接受300 *300 *3的圖片數(shù)據(jù),所以我們視頻流把CMSampleBufferRef
縮放成對(duì)應(yīng)的大小
- 1.縮放
//縮放CVPixelBufferRef
- (CVPixelBufferRef)resized:(CGSize)size cvpixelBuffer:(CVPixelBufferRef)pixelBuffer
{
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
size_t imageWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t imageHeight = CVPixelBufferGetHeight(pixelBuffer);
OSType pixelBufferType = CVPixelBufferGetPixelFormatType(pixelBuffer);
assert(pixelBufferType == kCVPixelFormatType_32BGRA);
size_t sourceRowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);
NSInteger imageChannels = 4;
unsigned char* sourceBaseAddr = (unsigned char*)(CVPixelBufferGetBaseAddress(pixelBuffer));
vImage_Buffer inbuff = {sourceBaseAddr, (NSUInteger)imageHeight,(NSUInteger)imageWidth, sourceRowBytes};
// NSInteger scaledImageRowBytes = ceil(size.width/4) * 4 * imageChannels;
NSInteger scaledImageRowBytes = vImageByteAlign(size.width * imageChannels , 64);
unsigned char *scaledVImageBuffer = malloc((NSInteger)size.height * scaledImageRowBytes);
if (scaledVImageBuffer == nil) {
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return nil;
}
vImage_Buffer outbuff = {scaledVImageBuffer,(NSUInteger)size.height,(NSUInteger)size.width,scaledImageRowBytes};
vImage_Error scaleError = vImageScale_ARGB8888(&inbuff, &outbuff, nil, kvImageHighQualityResampling);
if(scaleError != kvImageNoError){
free(scaledVImageBuffer);
scaledVImageBuffer = NULL;
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return nil;
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CVPixelBufferRef scaledPixelBuffer = NULL;
// CVReturn status = CVPixelBufferCreateWithBytes(nil, (NSInteger)size.width, (NSInteger)size.height, pixelBufferType, scaledVImageBuffer, scaledImageRowBytes, releaseCallback, nil, nil, &scaledPixelBuffer);
NSDictionary *options =@{(NSString *)kCVPixelBufferCGImageCompatibilityKey:@YES,(NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey:@YES,(NSString *)kCVPixelBufferMetalCompatibilityKey:@YES,(NSString *)kCVPixelBufferWidthKey :[NSNumber numberWithInt: size.width],(NSString *)kCVPixelBufferHeightKey: [NSNumber numberWithInt : size.height],(id)kCVPixelBufferBytesPerRowAlignmentKey:@(32)
};
CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, size.width, size.height,pixelBufferType, scaledVImageBuffer, scaledImageRowBytes,releaseCallback,nil, (__bridge CFDictionaryRef)options, &scaledPixelBuffer);
options = NULL;
if (status != kCVReturnSuccess)
{
free(scaledVImageBuffer);
return nil;
}
return scaledPixelBuffer;
}
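CVPixelBufferCreateWithBytes wraps the malloc'ed buffer rather than copying it, so the releaseCallback passed above has to free that memory when the pixel buffer is destroyed. A minimal sketch of such a callback (matching the name used in the call):
void releaseCallback(void *releaseRefCon, const void *baseAddress)
{
//Free the scaled vImage buffer allocated in resized:cvpixelBuffer:
if (baseAddress != NULL) {
free((void *)baseAddress);
}
}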
- 2.轉(zhuǎn)化成輸入數(shù)據(jù)
- (NSData *)rgbDataFromBuffer:(CVPixelBufferRef)pixelBuffer isModelQuantized:(BOOL)isQuantized
{
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char* sourceData = (unsigned char*)(CVPixelBufferGetBaseAddress(pixelBuffer));
if (!sourceData) {
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return nil;
}
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t sourceRowBytes = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
int destinationChannelCount = 3;
size_t destinationBytesPerRow = destinationChannelCount * width;
vImage_Buffer inbuff = {sourceData, height, width, sourceRowBytes};
unsigned char *destinationData = malloc(height * destinationBytesPerRow);
if (destinationData == nil) {
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return nil;
}
vImage_Buffer outbuff = {destinationData,height,width,destinationBytesPerRow};
if (CVPixelBufferGetPixelFormatType(pixelBuffer) == kCVPixelFormatType_32BGRA)
{
vImageConvert_BGRA8888toRGB888(&inbuff, &outbuff, kvImageNoFlags);
}
else if (CVPixelBufferGetPixelFormatType(pixelBuffer) == kCVPixelFormatType_32ARGB)
{
vImageConvert_ARGB8888toRGB888(&inbuff, &outbuff, kvImageNoFlags);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CVPixelBufferRelease(pixelBuffer);//remember to release the scaled buffer created in resized:
NSData *data = [[NSData alloc] initWithBytes:outbuff.data length:outbuff.rowBytes *height];
if (destinationData != NULL) {
free(destinationData);
destinationData = NULL;
}
if (isQuantized) {
return data;
}
Byte *bytesPtr = (Byte *)[data bytes];
//For a non-quantized (float) model, each byte has to be converted to a normalized float
NSMutableData *rgbData = [[NSMutableData alloc] initWithCapacity:0];
for (int i = 0; i < data.length; i++) {
Byte byte = (Byte)bytesPtr[i];
float bytf = (float)byte / 255.0;
[rgbData appendBytes:&bytf length:sizeof(float)];
}
return rgbData;
}
5.處理識(shí)別返回?cái)?shù)據(jù)
- 識(shí)別結(jié)果返回的是四個(gè)數(shù)組,都需要分別處理,代碼在視頻流回調(diào)代理里
//輸出坐標(biāo),按照top,left,bottom,right的占比
TFLTensor *outputTensor = [self.interpreter outputTensorAtIndex:0 error:&error];
//輸出index
TFLTensor *outputClasses = [self.interpreter outputTensorAtIndex:1 error:nil];
//輸出分?jǐn)?shù)
TFLTensor *outputScores = [self.interpreter outputTensorAtIndex:2 error:nil];
//輸出識(shí)別物體個(gè)數(shù)
TFLTensor *outputCount = [self.interpreter outputTensorAtIndex:3 error:nil];
//格式化輸出的數(shù)據(jù)
NSArray<HFInference *> *inferences = [self formatTensorResultWith:[self transTFLTensorOutputData:outputTensor] indexs:[self transTFLTensorOutputData:outputClasses] scores:[self transTFLTensorOutputData:outputScores] count:[[self transTFLTensorOutputData:outputCount].firstObject integerValue] width:imageWidth height:imageHeight];
- (NSArray<HFInference *> *)formatTensorResultWith:(NSArray *)outputBoundingBox indexs:(NSArray *)indexs scores:(NSArray *)scores count:(NSInteger)count width:(CGFloat)width height:(CGFloat)height
{
NSMutableArray<HFInference *> *arry = [NSMutableArray arrayWithCapacity:count];
for (NSInteger i = 0; i < count; i++) {
CGFloat confidence = [scores[i] floatValue];
if (confidence < 0.5) {
continue;
}
NSInteger index = [indexs[i] integerValue] + 1;//the official model's label indices need +1
CGRect rect = CGRectZero;
UIEdgeInsets inset;
[outputBoundingBox[i] getValue:&inset];
rect.origin.y = inset.top;
rect.origin.x = inset.left;
rect.size.height = inset.bottom - rect.origin.y;
rect.size.width = inset.right - rect.origin.x;
CGRect newRect = CGRectApplyAffineTransform(rect, CGAffineTransformMakeScale(width, height));
//如果是自定義并且圖片識(shí)別有方向的話,就用下面的方法
// CGRect newRect = [self fixOriginSizeWithInset:inset videoOrientation:self.videoOrientation width:width height:height];
HFInference *inference = [HFInference new];
inference.confidence = confidence;
inference.index = index;
inference.boundingRect = newRect;
inference.className = [self loadLabels:@"labelmap"][index];
[arry addObject:inference];
}
return arry;
}
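formatTensorResultWith: calls a loadLabels: helper that is not shown in this post. A minimal sketch of it, assuming the official labelmap.txt is bundled with one class name per line:
- (NSArray<NSString *> *)loadLabels:(NSString *)fileName
{
//Read the bundled label file, one class name per line
NSString *path = [[NSBundle mainBundle] pathForResource:fileName ofType:@"txt"];
NSString *content = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil];
NSMutableArray<NSString *> *labels = [NSMutableArray array];
for (NSString *line in [content componentsSeparatedByString:@"\n"]) {
NSString *trimmed = [line stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]];
if (trimmed.length > 0) {
[labels addObject:trimmed];
}
}
return labels;
}
Since the loop above calls it for every detection of every frame, in a real app you would want to load the labels once and cache the array.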
- (NSArray *)transTFLTensorOutputData:(TFLTensor *)outpuTensor
{
NSMutableArray * arry = [NSMutableArray array];
float output[40U];
[[outpuTensor dataWithError:nil] getBytes:output length:(sizeof(float) *40U)];
if ([outpuTensor.name isEqualToString:@"TFLite_Detection_PostProcess"]) {
for (NSInteger i = 0; i < 10U; i++) {
// top left bottom right
UIEdgeInsets inset = UIEdgeInsetsMake(output[4* i + 0], output[4* i + 1], output[4* i + 2], output[4* i + 3]);
[arry addObject:[NSValue valueWithUIEdgeInsets:inset]];
}
}
else if ([outpuTensor.name isEqualToString:@"TFLite_Detection_PostProcess:1"] ||[outpuTensor.name isEqualToString:@"TFLite_Detection_PostProcess:2"])
{
for (NSInteger i = 0; i < 10U; i++) {
[arry addObject:[NSNumber numberWithFloat:output[i]]];
}
}
else if ([outpuTensor.name isEqualToString:@"TFLite_Detection_PostProcess:3"])
{
// NSNumber *count = output[0] ? [NSNumber numberWithFloat:output[0]] : [NSNumber numberWithFloat:0.0];
NSNumber *count = @10;
[arry addObject:count];
}
return arry;
}
6.渲染和繪制識(shí)別框
根據(jù)處理好返回的數(shù)據(jù),我們需要轉(zhuǎn)化成繪制的數(shù)據(jù)
- (void)drawOverLayWithInferences:(NSArray<HFInference *> *)inferences width:(CGFloat)width height:(CGFloat)height
{
[self.overlayView.overlays removeAllObjects];
[self.overlayView setNeedsDisplay];
if (inferences.count == 0) {
return;
}
NSMutableArray<Overlayer *> * overlays = @[].mutableCopy;
for (HFInference *inference in inferences) {
CGRect convertedRect = CGRectApplyAffineTransform(inference.boundingRect , CGAffineTransformMakeScale(self.overlayView.bounds.size.width/width, self.overlayView.bounds.size.height / height));
if (convertedRect.origin.x < 0) {
convertedRect.origin.x = 5;
}
if (convertedRect.origin.y <0) {
convertedRect.origin.y = 5;
}
if (CGRectGetMaxY(convertedRect) > CGRectGetMaxY(self.overlayView.bounds)) {
convertedRect.size.height = CGRectGetMaxY(self.overlayView.bounds) - convertedRect.origin.y - 5;
}
if (CGRectGetMaxX(convertedRect) > CGRectGetMaxX(self.overlayView.bounds)) {
convertedRect.size.width = CGRectGetMaxX(self.overlayView.bounds) - convertedRect.origin.x - 5;
}
Overlayer *layer = [Overlayer new];
layer.borderRect = convertedRect;
layer.color = UIColor.redColor;
layer.name = [NSString stringWithFormat:@"%@ %.2f%%",inference.className,inference.confidence *100];
NSDictionary *dic = @{NSFontAttributeName:[UIFont systemFontOfSize:14]};
layer.nameStringSize = [layer.name boundingRectWithSize:CGSizeMake(MAXFLOAT, 20) options:(NSStringDrawingUsesLineFragmentOrigin) attributes:dic context:nil].size;
layer.font = [UIFont systemFontOfSize:14];
layer.nameDirection = self.videoOrientation;
[overlays addObject:layer];
}
self.overlayView.overlays = overlays;
[self.overlayView setNeedsDisplay];
}
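OverlayView and Overlayer are small helper classes from the demo that are not shown here. A rough sketch of what the drawing side could look like (the property names are taken from how they are used above; the drawRect: body itself is an assumption, not the demo's exact implementation):
//Inside OverlayView
- (void)drawRect:(CGRect)rect
{
[super drawRect:rect];
for (Overlayer *overlay in self.overlays) {
//Bounding box
UIBezierPath *path = [UIBezierPath bezierPathWithRect:overlay.borderRect];
path.lineWidth = 2;
[overlay.color setStroke];
[path stroke];
//Class name and confidence above the box
NSDictionary *attributes = @{NSFontAttributeName : overlay.font, NSForegroundColorAttributeName : overlay.color};
CGPoint origin = CGPointMake(CGRectGetMinX(overlay.borderRect), MAX(0, CGRectGetMinY(overlay.borderRect) - overlay.nameStringSize.height));
[overlay.name drawAtPoint:origin withAttributes:attributes];
}
}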
總結(jié)
其實(shí)整個(gè)庫(kù)使用起來(lái)還是比較簡(jiǎn)單的(特別指訓(xùn)練全面并且成熟的模型,因?yàn)槿魏畏较蚝徒嵌榷伎梢宰R(shí)別出來(lái),會(huì)超級(jí)省事,而我們自己的模型只支持橫屏的識(shí)別,處理起來(lái)超級(jí)煩!),唯一要注意的點(diǎn)就是內(nèi)存問(wèn)題,float點(diǎn)精度轉(zhuǎn)換問(wèn)題,還有就是坐標(biāo)變換映射問(wèn)題!當(dāng)然什么都不說(shuō)了(說(shuō)多了都是淚,這只是我抽出來(lái)的簡(jiǎn)單demo,還有很多更苦逼的要做)放上demo的 傳送門