Project Background
I recently moved from Chengdu back to Dalian and joined a company doing development for Japanese clients. My task is to maintain the existing business logic, update APIs, and upgrade third-party libraries.
The core feature is QR code scanning implemented with ZXing; scan results are saved locally and then uploaded in batches.
Because the project dates back to 2015, the old ZXing code no longer runs on iOS 10 and 11.
Required Features
- Scan and recognize various code types (barcodes, QR codes, color codes, etc.)
- Toggle the flashlight
- Recognize codes in local images
Approaches
- Update the ZXing library to ZXingObjC
- Implement with the system's native APIs
(Rumor has it that some QR codes cannot be recognized by the native APIs but can be by ZXingObjC. Whether that is true remains to be verified; I have not run into such a case yet.)
Below we implement these approaches.
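For the ZXingObjC approach, decoding a still image can be sketched roughly as below. This is a minimal sketch assuming ZXingObjC is integrated (e.g. via the 'ZXingObjC' pod); the method name decodeImage: is my own, not part of the original project.

```objc
#import <ZXingObjC/ZXingObjC.h>

// Minimal ZXingObjC decoding sketch (assumes the ZXingObjC pod is installed).
- (NSString *)decodeImage:(UIImage *)image {
    // Wrap the CGImage as a luminance source and binarize it for the reader
    ZXLuminanceSource *source = [[ZXCGImageLuminanceSource alloc] initWithCGImage:image.CGImage];
    ZXBinaryBitmap *bitmap = [ZXBinaryBitmap binaryBitmapWithBinarizer:[ZXHybridBinarizer binarizerWithSource:source]];
    NSError *error = nil;
    // ZXMultiFormatReader tries all supported formats (QR, EAN, Code 128, ...)
    ZXMultiFormatReader *reader = [ZXMultiFormatReader reader];
    ZXResult *result = [reader decode:bitmap hints:[ZXDecodeHints hints] error:&error];
    return result ? result.text : nil;
}
```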
Native System APIs
1. Scanning and Recognition
First, import the framework header <AVFoundation/AVFoundation.h>:
#import <AVFoundation/AVFoundation.h>
Next, adopt the required delegate protocols and declare the needed properties (only the key code is shown here):
AVCaptureMetadataOutputObjectsDelegate: callback invoked once scan data is obtained (metadataObjects: the scanned code's metadata)
AVCaptureVideoDataOutputSampleBufferDelegate: used to turn on the torch based on ambient light (brightnessValue: the light intensity)
@interface LYCodeScanManager () <AVCaptureMetadataOutputObjectsDelegate, AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *videoPreviewLayer;
@end
With the preparation done, let's get to it.
// 1. Get the camera device
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// 2. Create the camera device input
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
// 3. Create the metadata output
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
// Set the scan area (each value is 0~1, in a coordinate system whose origin is the
// top-right corner of the screen in portrait).
// Note: WeChat scans the whole screen; no restriction is applied here (this setting is optional).
// To limit scanning to a frame, uncomment the next line and adjust as needed:
// metadataOutput.rectOfInterest = CGRectMake(0.05, 0.2, 0.7, 0.6);
// 4. Create the session
_session = [[AVCaptureSession alloc] init];
// and set the capture quality
_session.sessionPreset = AVCaptureSessionPreset1920x1080;
// 5. Add the metadata output to the session
[_session addOutput:metadataOutput];
// Create a video data output and add it to the session --> used to measure ambient light
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[_videoDataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:_videoDataOutput];
// 6. Add the camera device input to the session
[_session addInput:deviceInput];
// 7. Set the output metadata types (here both barcodes and QR codes). The types can only
// be specified after the output has been added to the session; otherwise an exception is thrown
metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeCode128Code];
// 8. Create the preview layer that displays the session's video
_videoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
// Keep the aspect ratio; fill the layer's bounds
_videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
CGFloat x = 0;
CGFloat y = 0;
CGFloat w = [UIScreen mainScreen].bounds.size.width;
CGFloat h = [UIScreen mainScreen].bounds.size.height;
_videoPreviewLayer.frame = CGRectMake(x, y, w, h);
[currentController.view.layer insertSublayer:_videoPreviewLayer atIndex:0];
// 9. Start the session
[_session startRunning];
Of course, don't forget: when the page is destroyed, or whenever you no longer need the session, be sure to stop it first with [_session stopRunning].
Finally, all we need to do is pick up the recognized data in the delegate callback:
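For example, in a view-controller-driven setup, the stop call can live in viewWillDisappear: (a minimal sketch; it assumes the _session ivar is reachable from the hosting controller):

```objc
// Stop scanning whenever the page goes away.
// Assumes _session is an ivar accessible from the hosting view controller.
- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    if (_session.isRunning) {
        [_session stopRunning];
    }
}
```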
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    NSLog(@"metadataObjects - - %@", metadataObjects);
    if (metadataObjects != nil && metadataObjects.count > 0) {
        AVMetadataMachineReadableCodeObject *obj = metadataObjects[0];
        NSLog(@"%@", [obj stringValue]);
    } else {
        NSLog(@"No QR code recognized yet");
    }
}
With that, the scanned data is printed out.
2. Toggling the Flashlight
When we created the session above, we already added the camera's input and the video data output, so here we simply implement the AVCaptureVideoDataOutputSampleBufferDelegate callback to read the ambient light value and decide whether the flashlight is needed.
(A button lightBtn is created here; it is added to the view when needed and removed otherwise.)
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // This method is called continuously, but memory usage stays stable
    CFDictionaryRef metadataDict = CMCopyDictionaryOfAttachments(NULL, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    NSDictionary *metadata = [[NSMutableDictionary alloc] initWithDictionary:(__bridge NSDictionary *)metadataDict];
    CFRelease(metadataDict);
    NSDictionary *exifMetadata = [[metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy];
    float brightnessValue = [[exifMetadata objectForKey:(NSString *)kCGImagePropertyExifBrightnessValue] floatValue];
    NSLog(@"%f", brightnessValue);
    if (brightnessValue < -1) {
        [self.view addSubview:self.lightBtn];
    } else {
        if (self.isSelectedFlashlightBtn == NO) {
            [self removeFlashlightBtn];
        }
    }
}
The lightBtn action method toggles the torch on and off:
- (void)lightBtnAction:(UIButton *)button {
    if (button.selected == NO) {
        /** Turn the torch on */
        AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        NSError *error = nil;
        if ([captureDevice hasTorch]) {
            BOOL locked = [captureDevice lockForConfiguration:&error];
            if (locked) {
                captureDevice.torchMode = AVCaptureTorchModeOn;
                [captureDevice unlockForConfiguration];
            }
        }
        self.isSelectedFlashlightBtn = YES;
        button.selected = YES;
    } else {
        /** Turn the torch off, after a short delay */
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
            if ([device hasTorch]) {
                [device lockForConfiguration:nil];
                [device setTorchMode:AVCaptureTorchModeOff];
                [device unlockForConfiguration];
            }
            self.isSelectedFlashlightBtn = NO;
            self.flashlightBtn.selected = NO;
            [self.flashlightBtn removeFromSuperview];
        });
    }
}
Recognizing Local Images
First, from the current controller, open the photo library to pick an image: adopt the <UINavigationControllerDelegate, UIImagePickerControllerDelegate> protocols as usual, then present the picker:
UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
imagePicker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
imagePicker.delegate = self;
[self presentViewController:imagePicker animated:YES completion:nil];
Define a method that returns an image no larger than the screen size:
/// Returns an image no larger than the screen
- (UIImage *)LY_imageSizeWithScreenImage:(UIImage *)image {
    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;
    CGFloat screenWidth = [UIScreen mainScreen].bounds.size.width;
    CGFloat screenHeight = [UIScreen mainScreen].bounds.size.height;
    if (imageWidth <= screenWidth && imageHeight <= screenHeight) {
        return image;
    }
    CGFloat max = MAX(imageWidth, imageHeight);
    CGFloat scale = max / (screenHeight * 2.0);
    CGSize size = CGSizeMake(imageWidth / scale, imageHeight / scale);
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Then process the image in the picker's delegate callback:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary<NSString *,id> *)info {
    // Handle the selected photo: if it is too large, shrink it; otherwise leave it as is
    UIImage *image = [self LY_imageSizeWithScreenImage:info[UIImagePickerControllerOriginalImage]];
    // Use CIDetector (also usable for face detection) to parse the image, so we can
    // conveniently extract a QR code from a photo-library image.
    // Declare a CIDetector with detection type CIDetectorTypeQRCode
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:nil options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
    // Get the detection results
    NSArray *features = [detector featuresInImage:[CIImage imageWithCGImage:image.CGImage]];
    if (features.count == 0) {
        if (self.isOpenLog) {
            if (self.delegate && [self.delegate respondsToSelector:@selector(QRCodeAlbumManagerDidReadQRCodeFailure:)]) {
                [self.delegate QRCodeAlbumManagerDidReadQRCodeFailure:self];
            }
        }
        [self.currentVC dismissViewControllerAnimated:YES completion:nil];
        return;
    } else {
        for (int index = 0; index < [features count]; index++) {
            CIQRCodeFeature *feature = [features objectAtIndex:index];
            NSString *resultStr = feature.messageString;
            NSLog(@"QR code read from the photo library - - %@", resultStr);
            self.detectorString = resultStr;
        }
        [self.currentVC dismissViewControllerAnimated:YES completion:nil];
    }
}
Summary: All in all, this is a fairly simple requirement. I recommend wrapping it in a manager class so the controller code doesn't bloat. Also, don't list every metadata object type in the output: in my testing, doing so can actually prevent codes from being recognized, so only declare the types you actually need.
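As a sketch of that manager idea, the public interface could look like the following. The method names here are hypothetical, not part of the original project; LYCodeScanManager is the class name used earlier in this post.

```objc
#import <UIKit/UIKit.h>

// Hypothetical public interface for the suggested manager class.
@interface LYCodeScanManager : NSObject

/// Set up the session and preview layer on the given controller, then start scanning.
/// The completion block delivers each decoded string.
- (void)startScanInController:(UIViewController *)controller
                   completion:(void (^)(NSString *result))completion;

/// Stop the session and remove the preview layer.
- (void)stopScan;

@end
```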