Implementing QR Code Scanning on iOS
Using the AVFoundation framework.
I have wrapped this up into a class; the demo is here.
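A minimal sketch of what such a wrapper's public interface might look like (the class name QRScannerView and the block property are illustrative placeholders, not the actual demo code):
#import <UIKit/UIKit.h>

// Hypothetical interface of a QR-scanning view; all names are illustrative.
@interface QRScannerView : UIView

// Called on the main queue whenever a QR code string has been decoded.
@property (nonatomic, copy) void (^didReadQRCode)(NSString *codeString);

// Start / stop the underlying AVCaptureSession.
- (void)startScanning;
- (void)stopScanning;

@end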
- Instantiate the camera (capture) device
AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
Note: the type here must be AVMediaTypeVideo. It looks a lot like AVMediaTypeAudio, so don't mix them up; the mistake is very hard to track down afterwards. I wrote the wrong one here myself and spent ages before realizing this was the problem...
- Set the camera as the input device
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (!input) {  // a nil return value, not the error pointer, is the failure indicator
    NSLog(@"There is no capture device. %@", error);
    // terminate the App, no camera!!!
    abort();
}
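One thing the snippet above does not cover: the user may also have to grant camera access (iOS 7 and later expose an authorization API), otherwise the capture session just produces black frames. A minimal check might look like this; where you place it in your own setup code is up to you:
// Check / request camera permission before configuring the session.
AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
if (status == AVAuthorizationStatusNotDetermined) {
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        // granted == NO means the user refused; scanning will not work.
    }];
} else if (status == AVAuthorizationStatusDenied || status == AVAuthorizationStatusRestricted) {
    // Point the user to Settings, or disable the scanning UI.
}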
- Set up the output
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
// WIDTH and HEIGHT are macros defined elsewhere in the demo (not shown here);
// the scanning area is a centered square.
CGSize outputViewSize = CGSizeMake(WIDTH - 100, WIDTH - 100);
CGRect outputViewFrame = CGRectMake(WIDTH / 2 - outputViewSize.width / 2,
                                    HEIGHT / 2 - outputViewSize.height / 2,
                                    outputViewSize.width,
                                    outputViewSize.height);
// rectOfInterest is normalized (0..1) and expressed in the capture device's
// landscape coordinate space, which is why x/y and width/height are swapped here.
output.rectOfInterest = CGRectMake(outputViewFrame.origin.y / self.frame.size.height,
                                   outputViewFrame.origin.x / self.frame.size.width,
                                   outputViewFrame.size.height / self.frame.size.height,
                                   outputViewFrame.size.width / self.frame.size.width);
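Working out that normalized, rotated rect by hand is easy to get wrong. As an alternative sketch (it assumes the previewLayer created further down already exists, and the conversion only returns sensible values once the session is up and running), the preview layer can do the mapping for you:
// Alternative: let the preview layer convert from view coordinates to the
// normalized rectOfInterest space (do this after the session has started).
output.rectOfInterest = [previewLayer metadataOutputRectOfInterestForRect:outputViewFrame];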
- Set the metadata delegate on the output
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
- Set up the capture session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPreset640x480;
[session addInput:input];
[session addOutput:output];
[output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
Note: the MetadataObjectTypes must be set right here, after both the input and the output have been added to the session. The reason:
Try adding both the input and output to the session before setting the metadata object types. When you don't have the camera attached to the session yet, availableMetadataObjectTypes will be empty.
Problem: even when setting it at this point, `availableMetadataObjectTypes` was still empty, so setting the types failed...
Cause: unknown (to be investigated).
Solved: see the Note near the top; the capture device had been created with the wrong media type (AVMediaTypeAudio instead of AVMediaTypeVideo).
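If you want this failure to be loud rather than silent, a small defensive check (just a sketch, not part of the original demo) is to look at availableMetadataObjectTypes before setting the types:
// QR code support only appears in availableMetadataObjectTypes after the
// camera input has been added to the session.
if ([output.availableMetadataObjectTypes containsObject:AVMetadataObjectTypeQRCode]) {
    [output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
} else {
    NSLog(@"AVMetadataObjectTypeQRCode is unavailable - check the session's input.");
}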
- Set up the video preview layer
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:self.bounds];
- Add it to the view's layer
[self.layer addSublayer:previewLayer];
At this point you can already see the live camera feed in the view... pretty fun, isn't it?
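Strictly speaking, nothing shows up until the session is started; the write-up skips that line, so here it is (the demo presumably calls it once everything is wired up):
// Without this, neither the preview nor the metadata callbacks will ever fire.
[session startRunning];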
Meanwhile, the system is also analyzing the frames; as soon as a QR code shows up, it is handled in the following delegate method:
#pragma mark AVCaptureMetadataOutputObjectsDelegate
// This method is called once a QR code has been recognized and decoded; the bigger the code's payload, the longer the decoding takes.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    if (metadataObjects.count > 0) {
        // The array contains AVMetadataMachineReadableCodeObject instances;
        // stringValue holds the decoded QR code payload.
        AVMetadataMachineReadableCodeObject *obj = metadataObjects[0];
        NSLog(@"QRCode: %@", obj.stringValue);
    } else {
        NSLog(@"failed");
    }
}
To implement this method you need to adopt the AVCaptureMetadataOutputObjectsDelegate protocol.
You can then do whatever you like with the result inside this method...
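One practical detail: the delegate keeps firing for every frame while the code stays in view, so a real scanner usually stops the session after the first successful read. A minimal sketch, assuming the AVCaptureSession is kept in a property named session:
// Stop scanning after the first successful read
// (assumes the session is stored in a property called `session`).
[self.session stopRunning];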