Firebase Platform Development (2) — Text Recognition in iOS Images with ML Kit (2)

Version History

Version  Date
V1.0     2019.02.01, Friday

Preface

Firebase is a backend-as-a-service startup built around a realtime database that helps developers quickly build web and mobile applications. Since Google acquired Firebase in October 2014, it has become even easier to use Firebase together with Google's cloud services. Firebase can take your app from zero to one: mobile and web developers can build an app on the frameworks behind Firebase without running their own servers or infrastructure. The next few posts look at development on the Firebase platform; if you are interested, see the article below.
1. Firebase Platform Development (1) — Text Recognition in iOS Images with ML Kit (1)

Source Code

1. Swift

首先看下代碼組織結(jié)構(gòu)

看下sb中的內(nèi)容

Below is the source code.

1. AppDelegate.swift
import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
  var window: UIWindow?
  
  // Configure Firebase as early as possible. Calling configure() in
  // application(_:didFinishLaunchingWithOptions:) is the more common pattern.
  override init() {
    FirebaseApp.configure()
    super.init()
  }
}
2. ViewController.swift
import UIKit
import MobileCoreServices

class ViewController: UIViewController {
  @IBOutlet weak var imageView: UIImageView!
  @IBOutlet weak var textView: UITextView!
  @IBOutlet weak var cameraButton: UIButton!
  
  let processor = ScaledElementProcessor()
  var frameSublayer = CALayer()
  var scannedText: String = "Detected text can be edited here." {
    didSet {
      textView.text = scannedText
    }
  }
  
  override func viewDidLoad() {
    super.viewDidLoad()
    // Notifications to slide the keyboard up
    NotificationCenter.default.addObserver(self, selector: #selector(ViewController.keyboardWillShow), name: UIResponder.keyboardWillShowNotification, object: nil)
    NotificationCenter.default.addObserver(self, selector: #selector(ViewController.keyboardWillHide), name: UIResponder.keyboardWillHideNotification, object: nil)
    
    imageView.layer.addSublayer(frameSublayer)
    drawFeatures(in: imageView)
  }
    
  // MARK: Touch handling to dismiss keyboard
  override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touches = event?.touches(for: view), !touches.isEmpty {
      textView.resignFirstResponder()
    }
  }
  
  // MARK: Actions
  @IBAction func cameraDidTouch(_ sender: UIButton) {
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
      presentImagePickerController(withSourceType: .camera)
    } else {
      let alert = UIAlertController(title: "Camera Not Available", message: "A camera is not available. Please try picking an image from the image library instead.", preferredStyle: .alert)
      alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
      present(alert, animated: true, completion: nil)
    }
  }
  
  @IBAction func libraryDidTouch(_ sender: UIButton) {
    presentImagePickerController(withSourceType: .photoLibrary)
  }
  
  @IBAction func shareDidTouch(_ sender: UIBarButtonItem) {
    // Guard against a missing image instead of force-unwrapping it.
    guard let image = imageView.image else { return }
    let vc = UIActivityViewController(activityItems: [scannedText, image], applicationActivities: nil)
    present(vc, animated: true, completion: nil)
  }
  
  // MARK: Keyboard slide up
  @objc func keyboardWillShow(notification: NSNotification) {
    // Use the keyboard's end frame; the begin frame can report the wrong height.
    if let keyboardSize = (notification.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? NSValue)?.cgRectValue {
      if view.frame.origin.y == 0 {
        view.frame.origin.y -= keyboardSize.height
      }
    }
  }
  
  @objc func keyboardWillHide(notification: NSNotification) {
    if let keyboardSize = (notification.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? NSValue)?.cgRectValue {
      if view.frame.origin.y != 0 {
        view.frame.origin.y += keyboardSize.height
      }
    }
  }
  
  private func removeFrames() {
    guard let sublayers = frameSublayer.sublayers else { return }
    for sublayer in sublayers {
      sublayer.removeFromSuperlayer()
    }
  }
  
  // 1
  private func drawFeatures(in imageView: UIImageView, completion: (() -> Void)? = nil) {
    removeFrames()
    processor.process(in: imageView) { text, elements in
      elements.forEach { element in
        self.frameSublayer.addSublayer(element.shapeLayer)
      }
      self.scannedText = text
      completion?()
    }
  }
}

extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate, UIPopoverPresentationControllerDelegate {
  // MARK: UIImagePickerController
  
  private func presentImagePickerController(withSourceType sourceType: UIImagePickerController.SourceType) {
    let controller = UIImagePickerController()
    controller.delegate = self
    controller.sourceType = sourceType
    // Only images are handled in the delegate, so only offer the image media type.
    controller.mediaTypes = [String(kUTTypeImage)]
    present(controller, animated: true, completion: nil)
  }
  
  // MARK: UIImagePickerController Delegate
  
  func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    if let pickedImage =
      info[UIImagePickerController.InfoKey.originalImage] as? UIImage {
      
      imageView.contentMode = .scaleAspectFit
      let fixedImage = pickedImage.fixOrientation()
      imageView.image = fixedImage
      drawFeatures(in: imageView)
    }
    dismiss(animated: true, completion: nil)
  }
}
3. +UIImage.swift
import UIKit

extension UIImage {
  // Thx to: https://stackoverflow.com/questions/8915630/ios-uiimageview-how-to-handle-uiimage-image-orientation
  func fixOrientation() -> UIImage? {
    guard let cgImage = cgImage else {
      return nil
    }
    
    if imageOrientation == .up {
      return self
    }
    
    let width  = self.size.width
    let height = self.size.height
    
    var transform = CGAffineTransform.identity
    
    switch imageOrientation {
    case .down, .downMirrored:
      transform = transform.translatedBy(x: width, y: height)
      transform = transform.rotated(by: CGFloat.pi)
    case .left, .leftMirrored:
      transform = transform.translatedBy(x: width, y: 0)
      transform = transform.rotated(by: 0.5 * CGFloat.pi)
    case .right, .rightMirrored:
      transform = transform.translatedBy(x: 0, y: height)
      transform = transform.rotated(by: -0.5 * CGFloat.pi)
    case .up, .upMirrored:
      break
    @unknown default:
      break
    }
    
    // Now we draw the underlying CGImage into a new context, applying the transform
    // calculated above.
    guard let colorSpace = cgImage.colorSpace else {
      return nil
    }
    
    guard let context = CGContext(
      data: nil,
      width: Int(width),
      height: Int(height),
      bitsPerComponent: cgImage.bitsPerComponent,
      bytesPerRow: 0,
      space: colorSpace,
      bitmapInfo: UInt32(cgImage.bitmapInfo.rawValue)
      ) else {
        return nil
    }
    
    context.concatenate(transform)
    
    switch imageOrientation {
    case .left, .leftMirrored, .right, .rightMirrored:
      // The 90°-rotated orientations swap width and height.
      context.draw(cgImage, in: CGRect(x: 0, y: 0, width: height, height: width))
    default:
      context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    }
    
    // Create a new UIImage from the drawing context.
    guard let newCGImage = context.makeImage() else {
      return nil
    }
    
    return UIImage(cgImage: newCGImage)
  }
}
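To see what the translate-then-rotate composition in `fixOrientation()` actually does, here is a minimal, self-contained sketch in plain Swift (no UIKit; the `Affine2D` type is introduced only for this illustration, mirroring `CGAffineTransform`'s `translatedBy(x:y:)` / `rotated(by:)` semantics). It reproduces the `.down` branch and shows that a pixel at (x, y) maps to (w − x, h − y):

```swift
import Foundation

// Minimal 2x3 affine matrix, matching CGAffineTransform's component layout.
// Introduced here only to illustrate the math in fixOrientation().
struct Affine2D {
    var a = 1.0, b = 0.0, c = 0.0, d = 1.0, tx = 0.0, ty = 0.0

    // Prepend a translation, like CGAffineTransform.translatedBy(x:y:).
    func translatedBy(x: Double, y: Double) -> Affine2D {
        var t = self
        t.tx += a * x + c * y
        t.ty += b * x + d * y
        return t
    }

    // Prepend a rotation, like CGAffineTransform.rotated(by:).
    func rotated(by angle: Double) -> Affine2D {
        let cosA = cos(angle), sinA = sin(angle)
        return Affine2D(a: a * cosA + c * sinA,
                        b: b * cosA + d * sinA,
                        c: c * cosA - a * sinA,
                        d: d * cosA - b * sinA,
                        tx: tx, ty: ty)
    }

    // Apply to a point: (x', y') = (a*x + c*y + tx, b*x + d*y + ty).
    func apply(x: Double, y: Double) -> (x: Double, y: Double) {
        (a * x + c * y + tx, b * x + d * y + ty)
    }
}

// The .down / .downMirrored branch: translate by (width, height), rotate by pi.
let width = 100.0, height = 50.0
var t = Affine2D()
t = t.translatedBy(x: width, y: height)
t = t.rotated(by: .pi)

// A pixel at (10, 20) should land at (width - 10, height - 20) = (90, 30).
let p = t.apply(x: 10, y: 20)
print(p.x.rounded(), p.y.rounded())  // 90.0 30.0
```

The same reasoning explains the `.left` / `.right` branches: a quarter-turn rotation swaps the axes, which is why the draw call afterwards swaps width and height.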
4. ScaledElementProcessor.swift
import Firebase

struct ScaledElement {
  let frame: CGRect
  let shapeLayer: CALayer
}

class ScaledElementProcessor {
  let vision = Vision.vision()
  let textRecognizer: VisionTextRecognizer

  init() {
    textRecognizer = vision.onDeviceTextRecognizer()
  }

  func process(in imageView: UIImageView, callback: @escaping (_ text: String, _ scaledElements: [ScaledElement]) -> Void) {
    guard let image = imageView.image else { return }
    let visionImage = VisionImage(image: image)
    
    textRecognizer.process(visionImage) { result, error in
      guard error == nil, let result = result, !result.text.isEmpty else {
        callback("", [])
        return
      }
      
      var scaledElements: [ScaledElement] = []
      for block in result.blocks {
        for line in block.lines {
          for element in line.elements {
            let frame = self.createScaledFrame(featureFrame: element.frame, imageSize: image.size, viewFrame: imageView.frame)
            
            let shapeLayer = self.createShapeLayer(frame: frame)
            let scaledElement = ScaledElement(frame: frame, shapeLayer: shapeLayer)
            scaledElements.append(scaledElement)
          }
        }
      }
      
      callback(result.text, scaledElements)
    }
  }

  private func createShapeLayer(frame: CGRect) -> CAShapeLayer {
    let bpath = UIBezierPath(rect: frame)
    let shapeLayer = CAShapeLayer()
    shapeLayer.path = bpath.cgPath
    shapeLayer.strokeColor = Constants.lineColor
    shapeLayer.fillColor = Constants.fillColor
    shapeLayer.lineWidth = Constants.lineWidth
    return shapeLayer
  }
  
  private func createScaledFrame(featureFrame: CGRect, imageSize: CGSize, viewFrame: CGRect) -> CGRect {
    let viewSize = viewFrame.size
    
    let resolutionView = viewSize.width / viewSize.height
    let resolutionImage = imageSize.width / imageSize.height
    
    var scale: CGFloat
    if resolutionView > resolutionImage {
      scale = viewSize.height / imageSize.height
    } else {
      scale = viewSize.width / imageSize.width
    }
    
    let featureWidthScaled = featureFrame.size.width * scale
    let featureHeightScaled = featureFrame.size.height * scale
    
    let imageWidthScaled = imageSize.width * scale
    let imageHeightScaled = imageSize.height * scale
    let imagePointXScaled = (viewSize.width - imageWidthScaled) / 2
    let imagePointYScaled = (viewSize.height - imageHeightScaled) / 2
    
    let featurePointXScaled = imagePointXScaled + featureFrame.origin.x * scale
    let featurePointYScaled = imagePointYScaled + featureFrame.origin.y * scale
    
    return CGRect(x: featurePointXScaled, y: featurePointYScaled, width: featureWidthScaled, height: featureHeightScaled)
  }

  // MARK: - private
  
  private enum Constants {
    static let lineWidth: CGFloat = 3.0
    static let lineColor = UIColor.yellow.cgColor
    static let fillColor = UIColor.clear.cgColor
  }
}
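A quick way to sanity-check the math in `createScaledFrame(featureFrame:imageSize:viewFrame:)`: with a 200×100 image shown aspect-fit in a 100×100 view, the scale is 0.5 and the letterboxed image is vertically centered starting at y = 25, so a feature at (40, 20, 40, 20) in image coordinates should map to (20, 35, 20, 10) in view coordinates. The sketch below is a standalone free-function copy of that logic (Foundation only, so it runs without UIKit):

```swift
import Foundation

// Standalone copy of the createScaledFrame math, for illustration: maps a
// feature frame in image coordinates into the coordinates of a view that
// displays the image with .scaleAspectFit.
func scaledFrame(featureFrame: CGRect, imageSize: CGSize, viewSize: CGSize) -> CGRect {
    let viewAspect = viewSize.width / viewSize.height
    let imageAspect = imageSize.width / imageSize.height

    // Aspect-fit: the image is scaled by whichever dimension is limiting.
    let scale: CGFloat = viewAspect > imageAspect
        ? viewSize.height / imageSize.height
        : viewSize.width / imageSize.width

    // The scaled image is centered, leaving equal margins on the free axis.
    let originX = (viewSize.width - imageSize.width * scale) / 2
    let originY = (viewSize.height - imageSize.height * scale) / 2

    return CGRect(x: originX + featureFrame.origin.x * scale,
                  y: originY + featureFrame.origin.y * scale,
                  width: featureFrame.size.width * scale,
                  height: featureFrame.size.height * scale)
}

// 200x100 image in a 100x100 view: scale = 0.5, image occupies y in [25, 75].
let frame = scaledFrame(featureFrame: CGRect(x: 40, y: 20, width: 40, height: 20),
                        imageSize: CGSize(width: 200, height: 100),
                        viewSize: CGSize(width: 100, height: 100))
print(frame.origin.x, frame.origin.y, frame.size.width, frame.size.height)  // 20.0 35.0 20.0 10.0
```

Note that the class version takes `imageView.frame`, so the mapping is only correct while the image view's `contentMode` is `.scaleAspectFit`, which `imagePickerController(_:didFinishPickingMediaWithInfo:)` sets before drawing.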

Afterword

This post walked through recognizing text in iOS images with ML Kit. If you found it interesting, a like or a follow is appreciated~~~

?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請聯(lián)系作者