Preface
A project requirement called for face detection, so I took the opportunity to tidy this feature up and write a simple demo. The code is a bit messy, and I don't really want to spend more time polishing it, but the layering should be reasonably clear. Let's get into it.
Part 1: Import the frameworks and implement a custom camera
1. Import the frameworks
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>
2. Implement a custom camera
2.1 Initialize the camera
#pragma mark - Initialize the camera
- (void)getCameraSession
{
    // Create the capture session
    _captureSession = [[AVCaptureSession alloc] init];
    if ([_captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) { // set the resolution
        _captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    }
    // Get the input device (the front camera)
    AVCaptureDevice *captureDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionFront];
    if (!captureDevice) {
        NSLog(@"Failed to get the front camera.");
        return;
    }
    NSError *error = nil;
    // Create the device input, which supplies the session with data
    _captureDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
    if (error) {
        NSLog(@"Failed to create the device input. Error: %@", error.localizedDescription);
        return;
    }
    // Create the still-image output and configure it for JPEG
    _captureStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG};
    [_captureStillImageOutput setOutputSettings:outputSettings];
    // Add the input to the session (only once — adding the same input
    // twice, as the original code did, raises an exception)
    if ([_captureSession canAddInput:_captureDeviceInput]) {
        [_captureSession addInput:_captureDeviceInput];
    }
    // Add the output to the session
    if ([_captureSession canAddOutput:_captureStillImageOutput]) {
        [_captureSession addOutput:_captureStillImageOutput];
    }
    // Create the video preview layer to show the live camera feed
    _captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    CALayer *layer = self.videoMainView.layer;
    layer.masksToBounds = YES;
    _captureVideoPreviewLayer.frame = layer.bounds;
    _captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill mode
    // Insert the preview layer below the focus cursor
    [layer insertSublayer:_captureVideoPreviewLayer below:self.focusCursor.layer];
}
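The `getCameraDeviceWithPosition:` helper called above is not shown in the original post. A minimal sketch, assuming the pre-iOS-10 `devicesWithMediaType:` API that matches the era of `AVCaptureStillImageOutput` used here:

```objectivec
// Sketch of the missing helper: returns the camera at the given position, or nil.
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position
{
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}
```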
三蛆挫、獲取相機(jī)數(shù)據(jù)流
因為我需要動態(tài)進(jìn)行人臉識別,所以需要啟用數(shù)據(jù)流妙黍,在這里需要設(shè)置并遵守代理
// 遵守代理
<AVCaptureVideoDataOutputSampleBufferDelegate>
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue;
queue = dispatch_queue_create("myQueue", DISPATCH_QUEUE_SERIAL);
[captureOutput setSampleBufferDelegate:self queue:queue];
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *settings = @{key:value};
[captureOutput setVideoSettings:settings];
[self.captureSession addOutput:captureOutput];
四悴侵、實現(xiàn)相機(jī)數(shù)據(jù)流的代理方法
#pragma mark - Samle Buffer Delegate
// 抽樣緩存寫入時所調(diào)用的委托程序
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
}
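The delegate body is left empty above. A sketch of how it might tie the pieces together, calling the conversion, orientation, and detection methods defined in the later sections (running detection on every frame is expensive, so in practice you would likely throttle this):

```objectivec
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Convert the frame into a UIImage, then straighten it
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    UIImage *fixed = [self fixOrientation:image];
    // Run face detection on the straightened frame
    NSArray *features = [self detectFaceWithImage:fixed];
    // Any UI work must go back to the main queue,
    // since the delegate runs on the serial capture queue
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"Detected %lu face(s)", (unsigned long)features.count);
    });
}
```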
// Converts a frame from the data stream into an image.
// In the delegate method, sampleBuffer is a Core Media object; Core Video is used to access its pixel buffer.
// Creates a UIImage from the sample buffer:
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:ciImage
                                                   fromRect:CGRectMake(0, 0,
                                                                       CVPixelBufferGetWidth(imageBuffer),
                                                                       CVPixelBufferGetHeight(imageBuffer))];
    UIImage *result = [[UIImage alloc] initWithCGImage:videoImage
                                                 scale:1.0
                                           orientation:UIImageOrientationLeftMirrored];
    CGImageRelease(videoImage);
    return result;
}
五可免、對圖片進(jìn)行處理
在這里需要說明一下,因為上面的方法轉(zhuǎn)換出來的圖片都是反過來的做粤,所以需要再轉(zhuǎn)一下
/**
 *  Normalizes an image's orientation, rotating and flipping as needed
 *
 *  @param aImage the source image
 *
 *  @return an image redrawn with orientation Up
 */
- (UIImage *)fixOrientation:(UIImage *)aImage
{
    // No-op if the orientation is already correct
    if (aImage.imageOrientation == UIImageOrientationUp)
        return aImage;

    CGAffineTransform transform = CGAffineTransformIdentity;
    switch (aImage.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, aImage.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, aImage.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        default:
            break;
    }
    switch (aImage.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        default:
            break;
    }
    // Now we draw the underlying CGImage into a new context, applying the transform
    // calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, aImage.size.width, aImage.size.height,
                                             CGImageGetBitsPerComponent(aImage.CGImage), 0,
                                             CGImageGetColorSpace(aImage.CGImage),
                                             CGImageGetBitmapInfo(aImage.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (aImage.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // Width and height are swapped for rotated orientations
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.height, aImage.size.width), aImage.CGImage);
            break;
        default:
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.width, aImage.size.height), aImage.CGImage);
            break;
    }
    // And now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;
}
六浇借、利用CoreImage中的detectFace進(jìn)行人臉檢測
/**識別臉部*/
- (NSArray *)detectFaceWithImage:(UIImage *)faceImage
{
    // CIDetectorAccuracyHigh is used here; for real-time detection,
    // CIDetectorAccuracyLow is faster. (Creating a CIDetector is expensive,
    // so for per-frame use it is better to create one and reuse it.)
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:nil
                                                  options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    CIImage *ciimg = [CIImage imageWithCGImage:faceImage.CGImage];
    NSArray *features = [faceDetector featuresInImage:ciimg];
    return features;
}
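The returned array holds CIFaceFeature objects. A sketch of reading them; note that Core Image coordinates have their origin at the bottom-left, so `bounds` must be flipped before drawing in UIKit:

```objectivec
// Inspect each detected face and its landmark positions
for (CIFaceFeature *face in features) {
    NSLog(@"face bounds: %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition) {
        NSLog(@"left eye at: %@", NSStringFromCGPoint(face.leftEyePosition));
    }
    if (face.hasRightEyePosition) {
        NSLog(@"right eye at: %@", NSStringFromCGPoint(face.rightEyePosition));
    }
    if (face.hasMouthPosition) {
        NSLog(@"mouth at: %@", NSStringFromCGPoint(face.mouthPosition));
    }
}
```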
Part 6: Summary
My approach is to take the camera's data stream, convert each frame into an image via the delegate method, and run face detection on that image. It works, but it is quite heavy on performance, and for now I don't know a better way to implement it. If you have a better approach, please leave a comment and let me know — thanks! Likewise, if you have any questions about what I wrote, leave a comment and I'll reply as soon as I see it, or email me at gzd1214@163.com. Thank you!