This article is one of the author's notes on learning iOS image processing. Some of the content may be inaccurate; feel free to discuss via comments or private message. All code in this article is Swift 3.0.
Introduction
In the previous article, iOS Image Processing (1): Getting the Pixel at a Specific Point, I mentioned a camera color-picking feature. That feature can be built directly on top of the previous article's approach; this article walks through it in detail.
Implementation
Result
Since the camera requires a real device, capturing screenshots of the result is inconvenient, so none are included.
Approach
Option 1
Get the image data from the camera, then read the pixel data at the picked position directly.
Problem: the pixel data obtained this way is the raw frame data, so whenever the image is transformed (scaled, for example), the picked position has to undergo the matching transform before indexing. That makes the implementation comparatively cumbersome, so this option is not used.
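To illustrate why Option 1 is cumbersome: if the raw frame is displayed aspect-fill in a view, a touch point must be mapped back into buffer coordinates before indexing. A minimal sketch of that inverse mapping, using pure arithmetic (the function name and tuple parameters are illustrative, not from the original code):

```swift
import Foundation

/// Map a touch point in view coordinates back to a pixel coordinate in the
/// raw buffer, assuming the buffer is displayed aspect-fill in the view.
/// Any additional transform (rotation, cropping) would need its own inverse step.
func bufferCoordinate(forTouch touch: (x: Double, y: Double),
                      viewSize: (w: Double, h: Double),
                      bufferSize: (w: Double, h: Double)) -> (x: Int, y: Int) {
    // Aspect-fill: the buffer is scaled by the larger factor so it covers the view.
    let displayScale = max(viewSize.w / bufferSize.w, viewSize.h / bufferSize.h)
    // The scaled buffer is centered; the parts outside the view are cropped.
    let xOffset = (viewSize.w - bufferSize.w * displayScale) / 2
    let yOffset = (viewSize.h - bufferSize.h * displayScale) / 2
    return (x: Int((touch.x - xOffset) / displayScale),
            y: Int((touch.y - yOffset) / displayScale))
}
```

For a 100x50 view showing a 200x200 buffer, the view's top-left corner lands at buffer row 50, because the top quarter of the scaled frame is cropped away.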
Option 2
Get the image data from the camera, render it into a CALayer, and then read the pixel at the picked position from that CALayer. This is exactly the same as Option 2 in the previous article.
Initialize the image preview layer:
let previewLayer = CALayer()

internal func setupUI() {
    previewLayer.bounds = view.bounds
    previewLayer.position = view.center
    previewLayer.contentsGravity = kCAGravityResizeAspectFill
    // The camera delivers frames rotated 90 degrees, so rotate the layer to compensate
    previewLayer.setAffineTransform(CGAffineTransform(rotationAngle: CGFloat(M_PI / 2.0)))
    view.layer.insertSublayer(previewLayer, at: 0)
}
Initialize the camera:
let session = AVCaptureSession()
// Queue on which camera frames are delivered
let queue = DispatchQueue(label: "com.camera.video.queue")
// Color-picking position
var center: CGPoint = .zero

internal func setupParameter() {
    session.beginConfiguration()
    session.sessionPreset = AVCaptureSessionPreset1280x720
    guard let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo),
        let deviceInput = try? AVCaptureDeviceInput(device: device) else {
            return
    }
    if session.canAddInput(deviceInput) {
        session.addInput(deviceInput)
    }
    let videoOutput = AVCaptureVideoDataOutput()
    // Request BGRA frames so pixels can be read without a format conversion
    videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: NSNumber(value: kCMPixelFormat_32BGRA)]
    videoOutput.alwaysDiscardsLateVideoFrames = true
    videoOutput.setSampleBufferDelegate(self, queue: queue)
    if session.canAddOutput(videoOutput) {
        session.addOutput(videoOutput)
    }
    session.commitConfiguration()
}
Set the color-picking position:
public override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else {
        return
    }
    // Get the touch position relative to the view
    let point = touch.location(in: self.view)
    center = point
    // Center the 40x40 anchor button on the touch point
    anchorButton.frame = CGRect(x: point.x - 20, y: point.y - 20, width: 40, height: 40)
}
Get the color at the picked position:
public func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
    // Make sure the buffer is unlocked on every exit path
    defer {
        CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
    }
    guard let baseAddr = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0) else {
        return
    }
    let width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0)
    let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo: CGBitmapInfo = [
        .byteOrder32Little,
        CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue)]
    guard let content = CGContext(data: baseAddr, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
        return
    }
    // To read a pixel straight from the raw data instead, use the commented-out
    // code below. The byte order is BGRA, and `index` must be adjusted to match
    // the target position (the value here points at the start of the middle row).
    // let data = baseAddr.assumingMemoryBound(to: UInt8.self)
    // let index = width * height * 2
    // let b = CGFloat(data.advanced(by: index + 0).pointee) / 255
    // let g = CGFloat(data.advanced(by: index + 1).pointee) / 255
    // let r = CGFloat(data.advanced(by: index + 2).pointee) / 255
    // let a = CGFloat(data.advanced(by: index + 3).pointee) / 255
    // let color = UIColor(red: r, green: g, blue: b, alpha: a)
    guard let cgImage = content.makeImage() else {
        return
    }
    DispatchQueue.main.async {
        self.previewLayer.contents = cgImage
        // pickColor is defined in the CALayer extension below
        self.colorView.backgroundColor = self.previewLayer.pickColor(at: self.center)
    }
}
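The commented-out direct read above hard-codes an index near the middle of the buffer. To generalize it to an arbitrary point, the byte offset must be computed with bytesPerRow, because rows can be padded beyond width * 4 bytes. A small self-contained sketch against a synthetic BGRA buffer (the function name is illustrative):

```swift
import Foundation

/// Read the BGRA components at (x, y) from a raw pixel buffer.
/// `bytesPerRow` can be larger than `width * 4` because rows may be padded.
func bgra(at x: Int, _ y: Int, data: [UInt8], bytesPerRow: Int) -> (b: UInt8, g: UInt8, r: UInt8, a: UInt8) {
    let index = y * bytesPerRow + x * 4
    return (data[index], data[index + 1], data[index + 2], data[index + 3])
}

// A 2x2 image stored with 4 bytes of row padding (bytesPerRow = 12).
var buffer = [UInt8](repeating: 0, count: 24)
// Pixel (1, 1): blue = 10, green = 20, red = 30, alpha = 255.
buffer[12 + 4] = 10
buffer[12 + 5] = 20
buffer[12 + 6] = 30
buffer[12 + 7] = 255
let pixel = bgra(at: 1, 1, data: buffer, bytesPerRow: 12)
```

In the real delegate callback, `data`, `bytesPerRow`, and the buffer dimensions would come from the CVPixelBuffer calls shown above.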
public extension CALayer {

    /// Get the color at a given position
    ///
    /// - parameter at: position
    ///
    /// - returns: color
    public func pickColor(at position: CGPoint) -> UIColor? {
        // Buffer that receives the target pixel's RGBA value
        var pixel = [UInt8](repeatElement(0, count: 4))
        // RGB color space; this determines the encoding of the output color (e.g. RGB rather than YUV)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        // Lay out the bitmap as RGBA
        let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
        guard let context = CGContext(data: &pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo) else {
            return nil
        }
        // Shift the context origin so the target position maps to (0, 0)
        context.translateBy(x: -position.x, y: -position.y)
        // Render the layer into the 1x1 context
        render(in: context)
        return UIColor(red: CGFloat(pixel[0]) / 255.0,
                       green: CGFloat(pixel[1]) / 255.0,
                       blue: CGFloat(pixel[2]) / 255.0,
                       alpha: CGFloat(pixel[3]) / 255.0)
    }
}
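Once a color has been picked, a picker UI often also shows it as a hex string. A hypothetical helper for that (not part of the original code), taking 0-255 components:

```swift
import Foundation

/// Format 0-255 RGB components as a "#RRGGBB" hex string.
/// A hypothetical display helper for the picked color.
func hexString(r: Int, g: Int, b: Int) -> String {
    return String(format: "#%02X%02X%02X", r, g, b)
}
```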
本文到此結(jié)束谱净!