Introduction
Vendors provide us with high-quality face recognition SDKs, but they come with a lot of boilerplate (error handling, detection, processing) that integrators usually do not want to deal with. RxArcFace wraps the templated operations of the ArcSoft face recognition SDK and pairs them with RxJava2 to give developers a smooth development experience.
Project repository: https://github.com/ZYF99/RxArcFace
About the ArcSoft face recognition SDK
ArcSoft face recognition SDK: ArcFace is an offline SDK offering face detection, gender detection, age detection, face recognition, image quality detection, RGB liveness detection, IR liveness detection, and more. It requires a one-time online activation on first use; after activation it works fully offline, and you can flexibly build your application layer on top of it to fit your business needs.
The basic edition does not yet support image quality detection or offline activation.
0. Background
Face recognition is no longer a rarity; it appears in many business scenarios. For mobile developers, picking the right SDK to integrate makes development far more efficient. This article starts from the basic methods of the ArcSoft face recognition SDK and builds up from there: since the official SDK is rather verbose, we wrap it step by step into a highly reusable, multi-scenario utility built on the official methods, so that face recognition can be integrated without the tedious plumbing.
For SDK setup, see:
https://ai.arcsoft.com.cn/manual/docs#/139
https://ai.arcsoft.com.cn/manual/docs#/140 (only section 3.1 is needed)
Those steps will not be repeated here.
1. Method overview (excerpted from ArcSoft's Android integration guide)
1.activeOnline
Description
Activates the SDK online.
Method
int activeOnline(Context context, String appId, String sdkKey)
The SDK must be activated before first use; once activated, there is no need to call this again.
The device must be online when this interface is called; after successful activation the SDK can be used offline.
Parameters
Parameter | In/Out | Description |
---|---|---|
context | in | Context |
appId | in | The APP_ID obtained from the ArcSoft website |
sdkKey | in | The SDK_KEY obtained from the ArcSoft website |
Return value
Returns `ErrorInfo.MOK` or `ErrorInfo.MERR_ASF_ALREADY_ACTIVATED` on success; for failures, see the error code list.
2.init
Description
Initializes the engine.
This interface is critical: a clear understanding of its parameters helps you avoid problems and benefits the design of your project.
Method
int init(
Context context,
DetectMode detectMode,
DetectFaceOrientPriority detectFaceOrientPriority,
int detectFaceScaleVal,
int detectFaceMaxNum,
int combinedMask
)
Parameters
Parameter | In/Out | Description |
---|---|---|
context | in | Context |
detectMode | in | VIDEO mode: processes continuous frames. IMAGE mode: processes single images. |
detectFaceOrientPriority | in | Face detection orientation; single-orientation detection is recommended |
detectFaceScaleVal | in | Minimum detectable face ratio (the image's long side divided by the face frame's long side). VIDEO mode: range [2,32], recommended 16. IMAGE mode: range [2,32], recommended 32. |
detectFaceMaxNum | in | Maximum number of faces to detect, range [1,50] |
combinedMask | in | The set of features to enable; multiple flags may be combined |
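As an illustration, a minimal sketch of initializing an IMAGE-mode engine with detection and recognition enabled, using the recommended values from the table (names like `engine` are local to this sketch):

```kotlin
// Sketch: IMAGE-mode engine for still pictures; feature flags are combined with `or`.
val engine = FaceEngine()
val code = engine.init(
    context,
    DetectMode.ASF_DETECT_MODE_IMAGE,
    DetectFaceOrientPriority.ASF_OP_0_ONLY, // single-orientation detection
    32,                                     // recommended scale value for IMAGE mode
    1,                                      // detect at most one face
    FaceEngine.ASF_FACE_DETECT or FaceEngine.ASF_FACE_RECOGNITION
)
if (code != ErrorInfo.MOK) Log.e("ArcFace", "init failed, code=$code")
```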
3.detectFaces (passing separated image data)
Method
int detectFaces(
byte[] data,
int width,
int height,
int format,
List<FaceInfo> faceInfoList
)
Parameters
Parameter | In/Out | Description |
---|---|---|
data | in | Image data |
width | in | Image width; must be a multiple of 4 |
height | in | Image height; must be a multiple of 2 for NV21; no restriction for BGR24/GRAY/DEPTH_U16 |
format | in | Color format of the image |
faceInfoList | out | Detected face information |
Return value
Returns `ErrorInfo.MOK` on success; for failures, see the error code list.
The `detectFaceMaxNum` value passed to `init` is decisive for whether faces are detected and how many.
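A minimal sketch, reusing the hypothetical `engine` from above and assuming `nv21` holds one NV21 preview frame of size `previewWidth` x `previewHeight`:

```kotlin
// Sketch: detect faces in a single NV21 frame.
val faces = mutableListOf<FaceInfo>()
val code = engine.detectFaces(nv21, previewWidth, previewHeight, FaceEngine.CP_PAF_NV21, faces)
if (code == ErrorInfo.MOK) {
    Log.d("ArcFace", "detected ${faces.size} face(s)")
}
```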
4.process (passing separated image data)
Method
int process(
byte[] data,
int width,
int height,
int format,
List<FaceInfo> faceInfoList,
int combinedMask
)
Parameters
Parameter | In/Out | Description |
---|---|---|
data | in | Image data |
width | in | Image width; must be a multiple of 4 |
height | in | Image height; must be a multiple of 2 for NV21; no restriction for BGR24 |
format | in | NV21/BGR24 are supported |
faceInfoList | in | List of face information |
combinedMask | in | Attributes to detect (ASF_AGE, ASF_GENDER, ASF_FACE3DANGLE, ASF_LIVENESS); multiple may be selected, and each must have been enabled in the combinedMask of the engine's init call |
Key parameter
- combinedMask: `process` supports four attributes, `ASF_AGE`, `ASF_GENDER`, `ASF_FACE3DANGLE`, and `ASF_LIVENESS`, but any attribute you want to detect must also have been enabled in the engine's init call.
The relationship between the `combinedMask` passed to init and the one passed to `process` is easiest to explain with an example (see the sketch after this list):
- The attributes `process` accepts are `ASF_AGE`, `ASF_GENDER`, `ASF_FACE3DANGLE`, and `ASF_LIVENESS`.
- Suppose init was passed `ASF_FACE_DETECT`, `ASF_FACE_RECOGNITION`, `ASF_AGE`, and `ASF_LIVENESS`.
- Then the only attribute combinations `process` may receive are `ASF_AGE`, `ASF_LIVENESS`, and `ASF_AGE | ASF_LIVENESS`.
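In code, that constraint reads like this (a sketch using the same hypothetical names as above; the mask passed to `process` must be a subset of the bits passed to `init`):

```kotlin
// init enables detection, recognition, age and liveness...
engine.init(
    context, DetectMode.ASF_DETECT_MODE_IMAGE, DetectFaceOrientPriority.ASF_OP_0_ONLY, 32, 1,
    FaceEngine.ASF_FACE_DETECT or FaceEngine.ASF_FACE_RECOGNITION or
            FaceEngine.ASF_AGE or FaceEngine.ASF_LIVENESS
)
// ...so process may request age, liveness, or both, but not gender:
engine.process(nv21, previewWidth, previewHeight, FaceEngine.CP_PAF_NV21, faces,
    FaceEngine.ASF_AGE or FaceEngine.ASF_LIVENESS)
```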
Return value
Returns `ErrorInfo.MOK` on success; for failures, see the error code list.
5.extractFaceFeature (passing separated image data)
Method
int extractFaceFeature(
byte[] data,
int width,
int height,
int format,
FaceInfo faceInfo,
FaceFeature feature
)
Parameters
Parameter | In/Out | Description |
---|---|---|
data | in | Image data |
width | in | Image width; must be a multiple of 4 |
height | in | Image height; must be a multiple of 2 for NV21; no restriction for BGR24/GRAY/DEPTH_U16 |
format | in | Color format of the image |
faceInfo | in | Face information (face frame and orientation) |
feature | out | The extracted face feature |
Return value
Returns `ErrorInfo.MOK` on success; for failures, see the error code list.
6.compareFaceFeature (with selectable comparison model)
Method
int compareFaceFeature (
FaceFeature feature1,
FaceFeature feature2,
CompareModel compareModel,
FaceSimilar faceSimilar
)
Parameters
Parameter | In/Out | Description |
---|---|---|
feature1 | in | Face feature |
feature2 | in | Face feature |
compareModel | in | Comparison model |
faceSimilar | out | Comparison similarity |
Return value
Returns `ErrorInfo.MOK` on success; for failures, see the error code list.
Using RxArcFace
- Clone the project: https://github.com/ZYF99/RxArcFace.git
- Import the RxArcFace module into the project that needs it: select the RxArcFaceModule folder inside the cloned project.
- Add the dependency to your app's build.gradle:
implementation project(path: ':RxArcFacelibrary')
Add the permissions
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.autofocus"/>
Have the data class you want to match implement the `IFaceDetect` interface
data class Person(
val id: Long? = null,
val name: String? = null,
val avatar: String? = null, //add an avatar property
var faceCode: String? = null //add a mutable faceCode property
) : IFaceDetect {
override fun getFaceCodeJson(): String? {
return faceCode
}
override fun getAvatarUrl(): String? {
return avatar
}
override fun bindFaceCode(faceCodeJson: String?) {
faceCode = faceCodeJson
}
}
You might ask: why do I have to add the faceCode and avatar properties myself?
Actually, you usually don't have to add anything. By the time you integrate face recognition you most likely already have a data class, often one returned by the backend, and you rarely get to decide what the backend sends. `faceCode` and `avatar` only mean that your data class must expose these two things (a face feature and an avatar); they can be fields you already had or ones you add later. If the backend already returns some property that serves as the face feature, just return it from `getFaceCodeJson()`; `avatar` works the same way, as the sketch below shows.
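For example, a hypothetical `Student` model whose backend already returns `featureJson` and `photoUrl` fields can satisfy the interface without adding anything new (the field names are illustrative, not part of RxArcFace):

```kotlin
// Hypothetical backend model: existing fields are mapped straight through the interface.
data class Student(
    val photoUrl: String? = null,    // already returned by the backend
    var featureJson: String? = null  // already returned by the backend
) : IFaceDetect {
    override fun getFaceCodeJson(): String? = featureJson
    override fun getAvatarUrl(): String? = photoUrl
    override fun bindFaceCode(faceCodeJson: String?) {
        featureJson = faceCodeJson
    }
}
```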
Capturing images from the camera
private var camera: Camera? = null
//initialize the camera and SurfaceView
private fun initCameraOrigin(surfaceView: SurfaceView) {
surfaceView.holder.addCallback(object : SurfaceHolder.Callback {
override fun surfaceCreated(holder: SurfaceHolder) {
//runs when the surface is created
if (camera == null) {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
camera = openCamera(this@MainActivity) { data, camera, resWidth, resHeight ->
if (data != null && data.size > 1) {
//TODO face matching
}
}
}
}
//adjust the camera orientation
camera?.let { setCameraDisplayOrientation(this@MainActivity, it) }
//start the preview
holder.let { camera?.startPreview(it) }
}
override fun surfaceChanged(
holder: SurfaceHolder,
format: Int,
width: Int,
height: Int
) {
}
override fun surfaceDestroyed(holder: SurfaceHolder) {
camera.releaseCamera()
camera = null
}
})
}
override fun onPause() {
camera?.setPreviewCallback(null)
camera.releaseCamera() //release camera resources
camera = null
super.onPause()
}
override fun onDestroy() {
camera?.setPreviewCallback(null)
camera.releaseCamera() //release camera resources
camera = null
super.onDestroy()
}
Matching with face recognition
if (data != null && data.size > 1) {
matchHumanFaceListByArcSoft(
data = data,
width = resWidth,
height = resHeight,
humanList = listOfPerson,
doOnMatchedHuman = { matchedPerson ->
Toast.makeText(
this@MainActivity,
"Matched ${matchedPerson.name}",
Toast.LENGTH_SHORT
).show()
isFaceDetecting = false
},
doOnMatchMissing = {
Toast.makeText(
this@MainActivity,
"No match found; enrolling this face",
Toast.LENGTH_SHORT
).show()
//bind face data to a new person
bindFaceCodeByByteArray(
Person(name = "newcomer"),
data,
resWidth,
resHeight
).doOnSuccess {
//add the newly enrolled person to the current list
listOfPerson.add(it)
Toast.makeText(
this@MainActivity,
"Enrolled successfully",
Toast.LENGTH_SHORT
).show()
isFaceDetecting = false
}.subscribe()
},
doFinally = { }
)
}
Full Activity code
package com.lxh.rxarcface
import android.hardware.Camera
import android.os.Build
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.util.Log
import android.view.SurfaceHolder
import android.view.SurfaceView
import android.widget.Toast
import com.lxh.rxarcfacelibrary.bindFaceCodeByByteArray
import com.lxh.rxarcfacelibrary.initArcSoftEngine
import com.lxh.rxarcfacelibrary.isFaceDetecting
import com.lxh.rxarcfacelibrary.matchHumanFaceListByArcSoft
class MainActivity : AppCompatActivity() {
private var camera: Camera? = null
private var listOfPerson: MutableList<Person> = mutableListOf()
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
//initialize the face recognition engine
initArcSoftEngine(
this,
"your APP_ID from the ArcSoft website",
"your SDK_KEY from the ArcSoft website"
)
//initialize the camera
initCameraOrigin(findViewById(R.id.surface_view))
}
//initialize the camera and SurfaceView
private fun initCameraOrigin(surfaceView: SurfaceView) {
surfaceView.holder.addCallback(object : SurfaceHolder.Callback {
override fun surfaceCreated(holder: SurfaceHolder) {
//runs when the surface is created
if (camera == null) {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
camera =
openCamera(this@MainActivity) { data, camera, resWidth, resHeight ->
if (data != null && data.size > 1) {
matchHumanFaceListByArcSoft(
data = data,
width = resWidth,
height = resHeight,
humanList = listOfPerson,
doOnMatchedHuman = { matchedPerson ->
Toast.makeText(
this@MainActivity,
"Matched ${matchedPerson.name}",
Toast.LENGTH_SHORT
).show()
isFaceDetecting = false
},
doOnMatchMissing = {
Toast.makeText(
this@MainActivity,
"No match found; enrolling this face",
Toast.LENGTH_SHORT
).show()
//bind face data to a new person
bindFaceCodeByByteArray(
Person(name = "newcomer"),
data,
resWidth,
resHeight
).doOnSuccess {
//add the newly enrolled person to the current list
listOfPerson.add(it)
Toast.makeText(
this@MainActivity,
"Enrolled successfully",
Toast.LENGTH_SHORT
).show()
isFaceDetecting = false
}.subscribe()
},
doFinally = { }
)
}
}
}
}
//adjust the camera orientation
camera?.let { setCameraDisplayOrientation(this@MainActivity, it) }
//start the preview
holder.let { camera?.startPreview(it) }
}
override fun surfaceChanged(
holder: SurfaceHolder,
format: Int,
width: Int,
height: Int
) {
}
override fun surfaceDestroyed(holder: SurfaceHolder) {
camera.releaseCamera()
camera = null
}
})
}
override fun onPause() {
camera?.setPreviewCallback(null)
camera.releaseCamera() //release camera resources
camera = null
super.onPause()
}
override fun onDestroy() {
camera?.setPreviewCallback(null)
camera.releaseCamera() //release camera resources
camera = null
super.onDestroy()
}
}
Note: the demo does not check the camera permission. Either grant it manually in the system settings or add your own permission check.
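A minimal runtime-check sketch using the standard AndroidX permission APIs (the helper name and request code are illustrative):

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Sketch: request the CAMERA permission before opening the camera, e.g. in onCreate.
fun ensureCameraPermission(activity: Activity) {
    if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED
    ) {
        ActivityCompat.requestPermissions(activity, arrayOf(Manifest.permission.CAMERA), 0x01)
    }
}
```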
Inside the wrapper
For raw SDK usage, refer to the official demo, which you can download when registering for the SDK service; it is not covered here. Besides, the wrapper built below is much simpler to use than the raw SDK.
1. Add the dependencies
https://ai.arcsoft.com.cn/manual/docs#/140: make sure the ArcSoft dependency configuration from section 3.1 is already in place.
//RxJava2
implementation "io.reactivex.rxjava2:rxjava:2.2.13"
implementation 'io.reactivex.rxjava2:rxandroid:2.1.1'
implementation "io.reactivex.rxjava2:rxkotlin:2.3.0"
//Json serializer (the utilities use Moshi for serialization; you can swap in another library)
implementation("com.squareup.moshi:moshi-kotlin:1.9.2")
kapt("com.squareup.moshi:moshi-kotlin-codegen:1.9.2")
//Glide (the utilities use Glide for image loading; you can swap in another library)
implementation "com.github.bumptech.glide:glide:4.10.0"
2. Implement the utility
Define the global state
//(ArcSoft) threshold for judging two faces as the same person; a score above this value counts as a match
const val ARC_SOFT_VALUE_MATCHED = 0.8f
private var context: Context? = null
//(ArcSoft) face-analysis engine, used wherever the app needs to parse a face photo into an ArcSoft face feature
//Two engines are used because face photos fetched from the network or our own server are reliably upright, while frames from the Android Camera arrive with an unpredictable rotation, yet init demands a fixed orientation
private val faceDetectEngine = FaceEngine()
//(ArcSoft) face-recognition engine, used for the recognition itself
private val faceEngine = FaceEngine()
//timestamp of the last face detection
var lastFaceDetectingTime = 0L
//whether a detection is in progress (important: if several images are handed to the SDK at once, the native C++ layer will run out of memory)
var isFaceDetecting = false
Initialization
/**
* (ArcSoft) Initialize the face recognition engines
* */
fun initArcSoftEngine(
contextTemp: Context,
arcAppId: String, //the APP_ID requested from the ArcSoft website
arcSdkKey: String //the SDK_KEY requested from the ArcSoft website
) {
context = contextTemp
val activeCode = FaceEngine.activeOnline(
context,
arcAppId,
arcSdkKey
)
Log.d("ArcFace activation result code:", activeCode.toString())
//face-recognition engine
val faceEngineCode = faceEngine.init(
context,
DetectMode.ASF_DETECT_MODE_IMAGE, //detect mode: ASF_DETECT_MODE_VIDEO or ASF_DETECT_MODE_IMAGE
DetectFaceOrientPriority.ASF_OP_270_ONLY, //detection orientation; if the angle is unknown, switch to VIDEO mode and use ASF_OP_ALL_OUT (all-orientation detection)
16,
6,
FaceEngine.ASF_FACE_RECOGNITION or FaceEngine.ASF_AGE or FaceEngine.ASF_FACE_DETECT or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
//face-analysis engine for photos
faceDetectEngine.init(
context,
DetectMode.ASF_DETECT_MODE_VIDEO,
DetectFaceOrientPriority.ASF_OP_ALL_OUT,
16,
6,
FaceEngine.ASF_FACE_RECOGNITION or FaceEngine.ASF_AGE or FaceEngine.ASF_FACE_DETECT or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
Log.d("FaceEngine init", "initEngine: init $faceEngineCode")
when (faceEngineCode) {
ErrorInfo.MOK,
ErrorInfo.MERR_ASF_ALREADY_ACTIVATED -> {
}
else -> showToast("ArcFace engine init failed, code: $faceEngineCode")
}
}
Next we need a contract. From the API overview above we know that recognition boils down to comparing two `FaceFeature` objects with `compareFaceFeature()`, so a data class we want to match, say a `data class Person`, needs a `FaceFeature`-style property. But we may have several such classes, for example `Student` and `Teacher`, that are completely unrelated, so an interface is used to spell out what every face-matchable class must implement.
Define the interface for matchable entities
/**
* The interface every face-recognition data class must implement
* */
interface IFaceDetect {
//get the face feature JSON
fun getFaceCodeJson(): String?
//get the avatar URL
fun getAvatarUrl(): String?
//bind the face feature
fun bindFaceCode(faceCodeJson: String?)
}
Getting a FaceFeature from an image byte array
/**
* (ArcSoft) Bind a face feature to a person from a byte array of their face image
* */
@Synchronized
fun <T : IFaceDetect> bindFaceCodeByByteArray(
person: T,
imageByteArray: ByteArray,
imageWidth: Int,
imageHeight: Int
): Single<T> {
return getArcFaceCodeByImageData(
imageByteArray,
imageWidth,
imageHeight
).flatMap {
Single.just(person.apply {
bindFaceCode(it)
})
}.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
}
/**
* Convert image data into an ArcFace feature code
* */
private fun getArcFaceCodeByImageData(
imageData: ByteArray,
imageWidth: Int,
imageHeight: Int
): Single<String> {
return Single.create { emitter ->
val detectStartTime = System.currentTimeMillis()
//detected faces
val faceInfoList: List<FaceInfo> = mutableListOf()
//face detection
val detectCode = faceDetectEngine.detectFaces(
imageData,
imageWidth,
imageHeight,
FaceEngine.CP_PAF_NV21,
faceInfoList
)
if (detectCode == 0) {
//face processing
val faceProcessCode = faceDetectEngine.process(
imageData,
imageWidth,
imageHeight,
FaceEngine.CP_PAF_NV21,
faceInfoList,
FaceEngine.ASF_AGE or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
//processing succeeded
if (faceProcessCode == ErrorInfo.MOK && faceInfoList.isNotEmpty()) {
//the recognized face feature
val currentFaceFeature = FaceFeature()
//feature extraction
val res = faceDetectEngine.extractFaceFeature(
imageData,
imageWidth,
imageHeight,
FaceEngine.CP_PAF_NV21,
faceInfoList[0],
currentFaceFeature
)
//feature extraction succeeded
if (res == ErrorInfo.MOK) {
Log.d(
"FaceConversionTime",
"${System.currentTimeMillis() - detectStartTime}"
)
Schedulers.io().scheduleDirect {
emitter.onSuccess(globalMoshi.toJson(currentFaceFeature))
}
}
} else {
Log.d("ARCFACE", "face process finished , code is $faceProcessCode")
Schedulers.io().scheduleDirect {
emitter.onSuccess("")
}
}
} else {
Log.d(
"ARCFACE",
"face detection finished, code is " + detectCode + ", face num is " + faceInfoList.size
)
Schedulers.io().scheduleDirect {
emitter.onSuccess("")
}
}
}
}
Getting a FaceFeature from an image URL
/**
* Convert a photo URL into an ArcFace feature code
* */
private fun getArcFaceCodeByPicUrl(
picUrl: String?
): Single<String> {
return Single.create { emitter ->
Glide.with(context!!)
.asBitmap()
.load(picUrl)
.listener(object : RequestListener<Bitmap> {
override fun onLoadFailed(
e: GlideException?,
model: Any?,
target: Target<Bitmap>?,
isFirstResource: Boolean
): Boolean {
emitter.onSuccess("")
return false
}
override fun onResourceReady(
resource: Bitmap?,
model: Any?,
target: Target<Bitmap>?,
dataSource: DataSource?,
isFirstResource: Boolean
): Boolean {
return false
}
})
.into(object : SimpleTarget<Bitmap>() {
@Synchronized
override fun onResourceReady(
bitMap: Bitmap,
transition: Transition<in Bitmap>?
) {
val detectStartTime = System.currentTimeMillis()
//detected faces
val faceInfoList: List<FaceInfo> = mutableListOf()
val faceByteArray = getPixelsBGR(bitMap)
//face detection
val detectCode = faceDetectEngine.detectFaces(
faceByteArray,
bitMap.width,
bitMap.height,
FaceEngine.CP_PAF_BGR24,
faceInfoList
)
if (detectCode == 0) {
//face processing
val faceProcessCode = faceDetectEngine.process(
faceByteArray,
bitMap.width,
bitMap.height,
FaceEngine.CP_PAF_BGR24,
faceInfoList,
FaceEngine.ASF_AGE or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
//processing succeeded
if (faceProcessCode == ErrorInfo.MOK && faceInfoList.isNotEmpty()) {
//the recognized face feature
val currentFaceFeature = FaceFeature()
//feature extraction
val res = faceDetectEngine.extractFaceFeature(
faceByteArray,
bitMap.width,
bitMap.height,
FaceEngine.CP_PAF_BGR24,
faceInfoList[0],
currentFaceFeature
)
//feature extraction succeeded
if (res == ErrorInfo.MOK) {
Log.d(
"FaceConversionTime",
"${System.currentTimeMillis() - detectStartTime}"
)
Schedulers.io().scheduleDirect {
emitter.onSuccess(globalMoshi.toJson(currentFaceFeature))
}
}
} else {
Log.d("ARCFACE", "face process finished , code is $faceProcessCode")
Schedulers.io().scheduleDirect {
emitter.onSuccess("")
}
}
} else {
Log.d(
"ARCFACE",
"face detection finished, code is " + detectCode + ", face num is " + faceInfoList.size
)
Schedulers.io().scheduleDirect {
emitter.onSuccess("")
}
}
}
})
}
}
Binding face feature data to entities
/**
* (ArcSoft) Build a list of feature-tagged people from their face photos
* */
@Synchronized
fun <T : IFaceDetect> detectPersonAvatarAndBindFaceFeatureCodeByArcSoft(
personListTemp: List<T>?
): Single<List<T>> {
return Observable.fromIterable(personListTemp)
.flatMapSingle { person ->
getArcFaceCodeByPicUrl(person.getAvatarUrl())
.map { arcFaceCodeJson ->
person.bindFaceCode(arcFaceCodeJson)
person
}
}
.toList()
.subscribeOn(Schedulers.io())
}
Matching one person from a list
With the contract in place we can start recognizing. First, a method that matches one person out of a list:
/**
* (ArcSoft) Match the people in a list against a face image
* */
@Synchronized
fun <T : IFaceDetect> matchHumanFaceListByArcSoft(
data: ByteArray,
width: Int,
height: Int,
previewWidth: Int? = null,
previewHeight: Int? = null,
humanList: List<T>,
doOnMatchedHuman: (T) -> Unit,
doOnMatchMissing: (() -> Unit)? = null,
doFinally: (() -> Unit)? = null
) {
if (isFaceDetecting) return
synchronized(faceEngine) {
//Log.d(TAG_ARC_FACE, "current thread: ${Thread.currentThread().name}")
//now detecting
isFaceDetecting = true
//time of this detection pass
lastFaceDetectingTime = System.currentTimeMillis()
//detected faces
val faceInfoList: List<FaceInfo> = mutableListOf()
//face detection
val detectCode = faceEngine.detectFaces(
data,
width,
height,
FaceEngine.CP_PAF_NV21,
faceInfoList
)
if (detectCode != 0 || faceInfoList.isEmpty()) {
Log.d(TAG_ARC_FACE, "face detection finished, code is " + detectCode + ", face num is " + faceInfoList.size)
doFinally?.invoke()
isFaceDetecting = false
return
}
//face processing
val faceProcessCode = faceEngine.process(
data,
width,
height,
FaceEngine.CP_PAF_NV21,
faceInfoList,
FaceEngine.ASF_AGE or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
//processing failed
if (faceProcessCode != ErrorInfo.MOK) {
Log.d(TAG_ARC_FACE, "face process finished , code is $faceProcessCode")
doFinally?.invoke()
isFaceDetecting = false
return
}
//non-null previewWidth/previewHeight mean the face must be centered in the preview
val needAvatarInViewCenter =
if (faceInfoList.isNotEmpty()) {
previewWidth != null
&& previewHeight != null
&& isAvatarInViewCenter(faceInfoList[0].rect, previewWidth, previewHeight)
} else false
//null previewWidth and previewHeight mean centering is not required
val doNotNeedAvatarInViewCenter = previewWidth == null && previewHeight == null
when {
(faceInfoList.isNotEmpty() && needAvatarInViewCenter)
|| (faceInfoList.isNotEmpty() && doNotNeedAvatarInViewCenter) -> {
}
else -> {//no usable face, abort matching
doFinally?.invoke()
isFaceDetecting = false
return
}
}
//the recognized face feature
val currentFaceFeature = FaceFeature()
//feature extraction
val res = faceEngine.extractFaceFeature(
data,
width,
height,
FaceEngine.CP_PAF_NV21,
faceInfoList[0],
currentFaceFeature
)
//feature extraction failed
if (res != ErrorInfo.MOK) {
doFinally?.invoke()
isFaceDetecting = false
return
}
//iterate over the list and compare
val matchedMeetingPerson = humanList.find {
val faceSimilar = FaceSimilar()
val startDetectTime = System.currentTimeMillis()
if (it.getFaceCodeJson() == null || it.getFaceCodeJson()!!.isEmpty()) {
return@find false
}
val compareResult =
faceEngine.compareFaceFeature(
globalMoshi.fromJson(it.getFaceCodeJson()),
currentFaceFeature,
faceSimilar
)
Log.d(TAG_ARC_FACE, "per-person match took: ${System.currentTimeMillis() - startDetectTime}")
if (compareResult == ErrorInfo.MOK) {
Log.d("Similarity", faceSimilar.score.toString())
faceSimilar.score > ARC_SOFT_VALUE_MATCHED
} else {
Log.d(TAG_ARC_FACE, "compare failed: $compareResult")
false
}
}
if (matchedMeetingPerson == null) {
//no one matched
doOnMatchMissing?.invoke()
} else {
//someone matched
doOnMatchedHuman(matchedMeetingPerson)
}
}
}
Matching a single person
/**
* (ArcSoft) Check whether a face image matches one specific person
* */
@Synchronized
fun <T : IFaceDetect> matchHumanFaceSoloByArcSoft(
data: ByteArray,
width: Int,
height: Int,
previewWidth: Int? = null,
previewHeight: Int? = null,
human: T,
doOnMatched: (T) -> Unit,
doOnMatchMissing: (() -> Unit)? = null,
doFinally: (() -> Unit)? = null
) {
matchHumanFaceListByArcSoft(
data = data,
width = width,
height = height,
previewWidth = previewWidth,
previewHeight = previewHeight,
humanList = listOf(human),
doOnMatchedHuman = doOnMatched,
doOnMatchMissing = doOnMatchMissing,
doFinally = doFinally
)
}
Checking whether the face is centered in the preview View
/**
* Check whether the face rect lies within the central region of the view
* */
fun isAvatarInViewCenter(rect: Rect, previewWidth: Int, previewHeight: Int): Boolean {
try {
val minSX = previewHeight / 10f //vertical margin: a tenth of the preview height
val minZY = kotlin.math.abs(previewWidth - previewHeight) / 2 + minSX //horizontal margin
val isLeft = kotlin.math.abs(rect.left) > minZY
val isTop = kotlin.math.abs(rect.top) > minSX
val isRight = kotlin.math.abs(rect.left) + rect.width() < (previewWidth - minZY)
val isBottom = kotlin.math.abs(rect.top) + rect.height() < (previewHeight - minSX)
if (isLeft && isTop && isRight && isBottom) return true
} catch (e: Exception) {
Log.e("ARCFACE", e.localizedMessage)
}
return false
}
Destroying the engines
/**
* Release the face-recognition engine
* */
fun unInitArcFaceEngine() {
faceEngine.unInit()
}
/**
* Release the photo-analysis engine
* */
fun unInitArcFaceDetectEngine() {
faceDetectEngine.unInit()
}
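These are typically called when the screen hosting recognition is torn down, alongside the camera release, for example:

```kotlin
// Sketch: release both engines together with the camera.
override fun onDestroy() {
    unInitArcFaceEngine()
    unInitArcFaceDetectEngine()
    super.onDestroy()
}
```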
A utility for extracting BGR pixels
/**
* Extract the BGR pixels from a bitmap (the code assumes 4 bytes per pixel, i.e. ARGB_8888)
* @param image
* @return
*/
fun getPixelsBGR(image: Bitmap): ByteArray? {
// calculate how many bytes our image consists of
val bytes = image.byteCount
val buffer = ByteBuffer.allocate(bytes) // Create a new buffer
image.copyPixelsToBuffer(buffer) // Move the byte data to the buffer
val temp = buffer.array() // Get the underlying array containing the data.
val pixels = ByteArray(temp.size / 4 * 3) // Allocate for BGR
// Copy pixels into place
for (i in 0 until temp.size / 4) {
pixels[i * 3] = temp[i * 4 + 2] //B
pixels[i * 3 + 1] = temp[i * 4 + 1] //G
pixels[i * 3 + 2] = temp[i * 4] //R
}
return pixels
}
As for the serialization used above, here is the serializer code as well, so you can copy it directly.
Serialization extensions (Moshi extension functions, ModelUtil)
import com.squareup.moshi.JsonAdapter
import com.squareup.moshi.Moshi
import com.squareup.moshi.Types
import java.lang.reflect.Type
inline fun <reified T> String?.fromJson(moshi: Moshi = globalMoshi): T? =
this?.let { ModelUtil.fromJson(this, T::class.java, moshi = moshi) }
inline fun <reified T> T?.toJson(moshi: Moshi = globalMoshi): String =
ModelUtil.toJson(this, T::class.java, moshi = moshi)
inline fun <reified T> Moshi.fromJson(json: String?): T? =
json?.let { ModelUtil.fromJson(json, T::class.java, moshi = this) }
inline fun <reified T> Moshi.toJson(t: T?): String =
ModelUtil.toJson(t, T::class.java, moshi = this)
inline fun <reified T> List<T>.listToJson(): String =
ModelUtil.listToJson(this, T::class.java)
inline fun <reified T> String.jsonToList(): List<T>? =
ModelUtil.jsonToList(this, T::class.java)
object ModelUtil {
inline fun <reified S, reified T> copyModel(source: S): T? {
return fromJson(
toJson(
any = source,
classOfT = S::class.java
), T::class.java
)
}
fun <T> toJson(any: T?, classOfT: Class<T>, moshi: Moshi = globalMoshi): String {
return moshi.adapter(classOfT).toJson(any)
}
fun <T> fromJson(json: String, classOfT: Class<T>, moshi: Moshi = globalMoshi): T? {
return moshi.adapter(classOfT).lenient().fromJson(json)
}
fun <T> fromJson(json: String, typeOfT: Type, moshi: Moshi = globalMoshi): T? {
return moshi.adapter<T>(typeOfT).fromJson(json)
}
fun <T> listToJson(list: List<T>?, classOfT: Class<T>, moshi: Moshi = globalMoshi): String {
val type = Types.newParameterizedType(List::class.java, classOfT)
val adapter: JsonAdapter<List<T>> = moshi.adapter(type)
return adapter.toJson(list)
}
fun <T> jsonToList(json: String, classOfT: Class<T>, moshi: Moshi = globalMoshi): List<T>? {
val type = Types.newParameterizedType(List::class.java, classOfT)
val adapter = moshi.adapter<List<T>>(type)
return adapter.fromJson(json)
}
}
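Note that `globalMoshi`, `TAG_ARC_FACE`, and `showToast` are referenced throughout the utilities but are not shown in this article. A minimal sketch of what they might look like (assumptions, not code from the repository; the Kotlin adapter factory matches the moshi-kotlin dependency added earlier):

```kotlin
import android.widget.Toast
import com.squareup.moshi.Moshi
import com.squareup.moshi.kotlin.reflect.KotlinJsonAdapterFactory

const val TAG_ARC_FACE = "ARCFACE"

// Shared Moshi instance used by the extension functions above.
val globalMoshi: Moshi = Moshi.Builder()
    .add(KotlinJsonAdapterFactory())
    .build()

// Toast helper; `context` is the module-level variable assigned in initArcSoftEngine.
fun showToast(message: String) {
    context?.let { Toast.makeText(it, message, Toast.LENGTH_SHORT).show() }
}
```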
Camera extension utilities
import android.app.Activity
import android.content.Context
import android.content.res.Configuration
import android.graphics.ImageFormat
import android.hardware.Camera
import android.hardware.camera2.CameraManager
import android.os.Build
import android.util.Log
import android.view.Surface
import android.view.SurfaceHolder
import androidx.annotation.RequiresApi
import kotlin.math.abs
private var resultWidth = 0
private var resultHeight = 0
var cameraId:Int = 0
/**
* Open the camera
* */
@RequiresApi(Build.VERSION_CODES.LOLLIPOP)
fun openCamera(
context: Context,
width: Int = 800,
height: Int = 600,
doOnPreviewCallback: (ByteArray?, Camera?, Int, Int) -> Unit
): Camera {
Camera.getNumberOfCameras()
(context.getSystemService(Context.CAMERA_SERVICE) as CameraManager).cameraIdList
cameraId = findFrontFacingCameraID()
val c = Camera.open(cameraId)
initParameters(context, c, width, height)
c.setPreviewCallback { data, camera ->
doOnPreviewCallback(
data,
camera,
resultWidth,
resultHeight
)
}
return c
}
private fun findFrontFacingCameraID(): Int {
var cameraId = -1
// Search for the front-facing camera
val numberOfCameras = Camera.getNumberOfCameras()
for (i in 0 until numberOfCameras) {
val info = Camera.CameraInfo()
Camera.getCameraInfo(i, info)
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
Log.d("CAMERA UTIL", "Camera found ,ID is $i")
cameraId = i
break
}
}
return cameraId
}
/**
* Configure the camera parameters
* */
fun initParameters(
context: Context,
camera: Camera,
width: Int,
height: Int
) {
//get the Parameters object
val parameters = camera.parameters
val size = getOptimalSize(context, parameters.supportedPreviewSizes, width, height)
parameters?.setPictureSize(size?.width ?: 0, size?.height ?: 0)
parameters?.setPreviewSize(size?.width ?: 0, size?.height ?: 0)
resultWidth = size?.width ?: 0
resultHeight = size?.height ?: 0
//set the preview format
parameters?.previewFormat = ImageFormat.NV21
//focus mode (fixed)
parameters?.focusMode = Camera.Parameters.FOCUS_MODE_FIXED
//apply the parameters to the camera
camera.parameters = parameters
}
/**
* Release the camera resources
* */
fun Camera?.releaseCamera() {
if (this != null) {
//stop the preview
stopPreview()
setPreviewCallback(null)
//release the camera
release()
}
}
/**
* Get the display rotation of the screen
* */
fun getDisplayRotation(activity: Activity): Int {
val rotation = activity.windowManager.defaultDisplay
.rotation
when (rotation) {
Surface.ROTATION_0 -> return 0
Surface.ROTATION_90 -> return 90
Surface.ROTATION_180 -> return 180
Surface.ROTATION_270 -> return 270
}
return 90
}
/**
* Set the camera preview orientation
* */
fun setCameraDisplayOrientation(
activity: Activity,
camera: Camera
) {
// See android.hardware.Camera.setCameraDisplayOrientation for
// documentation.
val info = Camera.CameraInfo()
Camera.getCameraInfo(cameraId, info)
val degrees = getDisplayRotation(activity)
var result: Int
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
result = (info.orientation + degrees) % 360
result = (360 - result) % 360 // compensate the mirror
} else { // back-facing
result = (info.orientation - degrees + 360) % 360
}
camera.setDisplayOrientation(result)
}
/**
* Start the camera preview
* */
fun Camera.startPreview(surfaceHolder: SurfaceHolder) {
//attach the live preview to the given SurfaceHolder
setPreviewDisplay(surfaceHolder)
startPreview()
}
/**
* Pick the supported size whose aspect ratio best matches width and height
* @param context
* @param sizes the preview sizes supported by the camera
* @param w the width of the camera preview view
* @param h the height of the camera preview view
* @return
*/
private fun getOptimalSize(
context: Context,
sizes: List<Camera.Size>,
w: Int,
h: Int
): Camera.Size? {
val ASPECT_TOLERANCE = 0.1 //tolerance used to pick the best match
var targetRatio = -1.0
val orientation = context.resources.configuration.orientation
//keep targetRatio always greater than 1, since size.width/size.height is always greater than 1
if (orientation == Configuration.ORIENTATION_PORTRAIT) {
targetRatio = h.toDouble() / w
} else if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
targetRatio = w.toDouble() / h
}
var optimalSize: Camera.Size? = null
var minDiff = Double.MAX_VALUE
val targetHeight = w.coerceAtMost(h)
for (size in sizes) {
val ratio = size.width.toDouble() / size.height
//skip sizes whose aspect ratio deviates beyond the tolerance
if (abs(ratio - targetRatio) > ASPECT_TOLERANCE) {
continue
}
if (abs(size.height - targetHeight) < minDiff) {
optimalSize = size
minDiff = abs(size.height - targetHeight).toDouble()
}
}
//if no size passed the ratio filter, fall back to the smallest height difference so a value is always returned
if (optimalSize == null) {
minDiff = Double.MAX_VALUE
for (size in sizes) {
if (abs(size.height - targetHeight) < minDiff) {
optimalSize = size
minDiff = abs(size.height - targetHeight).toDouble()
}
}
}
return optimalSize
}