title: Face detection and tracking on Android with OpenCV
categories:
- Android
tags:
- opencv
- face detection
- face tracking
date: 2020-05-29 10:11:41
Preface
The previous article covered native face detection and tracking; if you followed along, you should already have a feel for it. Today we implement the same thing with OpenCV.
Many people will ask: the native APIs already do this, so why integrate OpenCV as well? That should become clear by the end of this article.
Main text
Importing the OpenCV dependency
There are several ways to integrate OpenCV:
1. Compile the modules you need into .so libraries yourself, then integrate them via the NDK
2. Use the prebuilt NDK package from the official site and write the functionality in C/C++
3. Integrate the official library SDK directly
Today we cover the third approach. Later I plan to study OpenCV 2D-to-3D conversion, with the goal of accurately measuring the face-to-screen distance with the front camera on every device model.
On the OpenCV download page, grab the android-sdk.zip package.
Unzip it after downloading.
Import the java directory shown in the screenshot into your project as a library module.
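If Android Studio does not register the imported module for you, it also has to be listed in settings.gradle. A minimal sketch, assuming the module directory was named CVLibrary430 (the name used later in the app dependency):

```groovy
// settings.gradle — assuming the OpenCV java module was imported as ':CVLibrary430'
include ':app'
include ':CVLibrary430'
```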
Modify the module's build.gradle:
apply plugin: 'com.android.library'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.2"

    defaultConfig {
        minSdkVersion 21
        targetSdkVersion 29
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
        }
    }
}
The SDK versions just need to match your project's.
Then reference the module from the app:
implementation project(path: ':CVLibrary430')
Initializing OpenCV
I do this in onResume; you need initDebug:
@Override
public void onResume() {
    super.onResume();
    // initialize OpenCV resources
    if (!OpenCVLoader.initDebug()) {
        Log.d("OpenCV", "Internal OpenCV library not found. Using OpenCV Manager for initialization");
        boolean success = OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION, this, openCVLoaderCallback);
        if (!success)
            Log.e("OpenCV", "Asynchronous initialization failed!");
        else
            Log.d("OpenCV", "Asynchronous initialization succeeded!");
    } else {
        Log.d("OpenCV", "OpenCV library found inside package. Using it!");
        openCVLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
    }
}
Then the loader callback:
LoaderCallbackInterface openCVLoaderCallback = new LoaderCallbackInterface() {
    @Override
    public void onManagerConnected(int status) {
        if (status == LoaderCallbackInterface.SUCCESS) {
            Log.i(TAG, "OpenCV loaded successfully");
            initOpencv();
        }
    }

    @Override
    public void onPackageInstall(int operation, InstallCallbackInterface callback) {
        Log.d("OpenCV", "onPackageInstall " + operation);
    }
};
You may find that initialization still fails; in that case you also need to modify the android {} block of the app module's build.gradle:
externalNativeBuild {
    cmake {
        // cmake arguments
        // cppFlags ""
        arguments "-DANDROID_STL=c++_shared"
    }
}
ndk {
    abiFilters 'armeabi-v7a', 'x86'
}
CMake is used here, but if you don't write any C++ you may not need this part:
externalNativeBuild {
    cmake {
        path "src/main/cpp/CMakeLists.txt"
        version "3.10.2"
    }
}
sourceSets {
    main {
        // jni.srcDirs = []
        jniLibs.srcDirs = ['libs']
    }
}
dummy.cpp is empty and unused for now. At this point OpenCV initializes successfully, and we can happily start using it.
Initializing the classifier: initOpencv
protected void initOpencv() {
    try {
        // OpenCV face model file: haarcascade_frontalface_alt
        InputStream is = getResources().openRawResource(R.raw.haarcascade_frontalface_alt);
        File cascadeDir = getDir("cascade", Context.MODE_PRIVATE);
        File mCascadeFile = new File(cascadeDir, "haarcascade_frontalface_alt.xml");
        FileOutputStream os = new FileOutputStream(mCascadeFile);
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = is.read(buffer)) != -1) {
            os.write(buffer, 0, bytesRead);
        }
        is.close();
        os.close();
        // load the frontal face classifier
        mFrontalFaceClassifier = new CascadeClassifier(mCascadeFile.getAbsolutePath());
    } catch (Exception e) {
        Log.e(TAG, e.toString());
    }
    openCvCameraView.enableView();
}
Note the use of R.raw.haarcascade_frontalface_alt. You can find this file in the OpenCV package we downloaded earlier; its exact location is shown in a screenshot in the first article. This loads the cascade classifier, i.e. what I think of as the face model data, which the detector matches against our frames.
Wiring up the code
The layout needs to reference the camera view:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:tools="http://schemas.android.com/tools"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/baseView"
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <org.opencv.android.JavaCamera2View
        android:id="@+id/openCvCameraView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:show_fps="true" />

    <LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:orientation="vertical">

        <TextView
            android:id="@+id/mFrontalFaceNumber"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="#00ff00"
            android:textSize="20sp" />

        <TextView
            android:id="@+id/mProfileFaceNumber"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="#00ff00"
            android:textSize="20sp" />

        <TextView
            android:id="@+id/mCurrentNumber"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="#00ff00"
            android:textSize="20sp" />

        <TextView
            android:id="@+id/mWaitTime"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="#00ff00"
            android:textSize="20sp" />
    </LinearLayout>
</RelativeLayout>
然后初始化布局后,初始攝像頭敞咧,代碼如下
protected void initCamera() {
    openCvCameraView.setCameraPermissionGranted(); // call after checking permissions yourself; new logic in OpenCV 4.3.0
    openCvCameraView.setCameraIndex(CameraBridgeViewBase.CAMERA_ID_FRONT); // camera index
    openCvCameraView.setCvCameraViewListener(this); // frame listener
    openCvCameraView.setVisibility(SurfaceView.VISIBLE);
    openCvCameraView.setCameraDistance(1.5f); // note: View.setCameraDistance affects 3D transform rendering, not the lens focal length
    openCvCameraView.setMaxFrameSize(640, 480); // frame size
}
The listener implements CameraBridgeViewBase.CvCameraViewListener2; its callbacks deliver the camera frames for us to process.
First, the start callback:
@Override
public void onCameraViewStarted(int width, int height) {
    Log.d("camera", "---onCameraViewStarted" + width);
    mRgba = new Mat();
    mGray = new Mat();
    Matlin = new Mat(width, height, CvType.CV_8UC4);
    gMatlin = new Mat(width, height, CvType.CV_8UC4);
    matWidth = width;
    absoluteFaceSize = (int) (height * 0.2);
}
Remember to release the Mat objects (OpenCV matrices) we created when the view stops:
@Override
public void onCameraViewStopped() {
    Log.d("camera", "---onCameraViewStopped");
    mRgba.release();
    mGray.release();
    Matlin.release();
    gMatlin.release();
}
Then onCameraFrame: the Mat you return is what gets displayed on screen. Getting the grayscale channel is trivial here,
but note that if the Mat is not upright, no faces will be detected, so we first rotate it:
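The display-rotation handling in the switch statement of the handler can be factored into a small pure-Java helper. A sketch assuming a portrait-oriented front camera, using the numeric values of the Surface.ROTATION_* constants (0..3) so it runs off-device:

```java
public class RotationMap {
    // Degrees to rotate the camera Mat so faces are upright, given the value of
    // Display.getRotation(): ROTATION_0=0, ROTATION_90=1, ROTATION_180=2, ROTATION_270=3.
    // In this setup ROTATION_90 is the orientation that needs no correction.
    static int rotationToDegrees(int surfaceRotation) {
        switch (surfaceRotation) {
            case 0:  return 90;   // Surface.ROTATION_0
            case 1:  return 0;    // Surface.ROTATION_90
            case 2:  return 270;  // Surface.ROTATION_180
            case 3:  return 180;  // Surface.ROTATION_270
            default: return 0;
        }
    }

    public static void main(String[] args) {
        System.out.println(rotationToDegrees(0)); // 90
    }
}
```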
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba(); // RGBA
    mGray = inputFrame.gray(); // single-channel grayscale
    int rotation = openCvCameraView.getDisplay().getRotation();
    double area = 0;
    double width = 0;
    MatOfRect frontalFaces = new MatOfRect();
    switch (rotation) {
        case Surface.ROTATION_0:
            mRgba = Matutils.rotate(mRgba, 90);
            mGray = Matutils.rotate(mGray, 90);
            break;
        case Surface.ROTATION_90:
            break;
        case Surface.ROTATION_180:
            mRgba = Matutils.rotate(mRgba, 270);
            mGray = Matutils.rotate(mGray, 270);
            break;
        case Surface.ROTATION_270:
            mRgba = Matutils.rotate(mRgba, 180);
            mGray = Matutils.rotate(mGray, 180);
            break;
    }
    if (mFrontalFaceClassifier != null) {
        // The two Size parameters bound the detectable face size: the smaller the
        // minimum size, the farther away faces can be detected. The arguments
        // (1.1, 2, 2, minSize, maxSize) tune accuracy; the second is minNeighbors,
        // i.e. how many overlapping detections a candidate needs to be confirmed.
        // See the detectMultiScale documentation for details.
        mFrontalFaceClassifier.detectMultiScale(mGray, frontalFaces, 1.1, 2, 2, new Size(absoluteFaceSize, absoluteFaceSize), mDefault);
        mFrontalFacesArray = frontalFaces.toArray();
        if (mFrontalFacesArray.length > 0) {
            area = mFrontalFacesArray[0].area();
            width = mFrontalFacesArray[0].width;
            Log.i(TAG, "1 : " + mFrontalFacesArray.length);
            Log.i(TAG, "1 : " + mFrontalFacesArray[0].size());
            Log.i(TAG, "1 : " + mFrontalFacesArray[0].area());
            Log.i(TAG, "1 : " + mFrontalFacesArray[0].tl());
            Log.i(TAG, "1 : " + mFrontalFacesArray[0].br());
        }
        mCurrentFaceSize = mFrontalFacesArray.length;
    }
    if (mCurrentFaceSize > 0) {
        for (int i = 0; i < mFrontalFacesArray.length; i++) { // mark each face with a rectangle
            Imgproc.rectangle(mRgba, mFrontalFacesArray[i].tl(), mFrontalFacesArray[i].br(), new Scalar(0, 255, 0, 255), 3);
        }
    }
    // display the detection results
    double distance = (1 + 153 * openCvCameraView.getWidth() / width / 36) * 30 * 1.5;
    double areas = area / openCvCameraView.getScale();
    Log.i(TAG, "openCvCameraView : " + openCvCameraView.getWidth());
    Log.i(TAG, "openCvCameraView : " + openCvCameraView.getHeight());
    Log.i(TAG, "openCvCameraView : " + openCvCameraView.getScaleX());
    Log.i(TAG, "openCvCameraView : " + openCvCameraView.getScaleY());
    mHandler.postDelayed(new Runnable() {
        @SuppressLint("SetTextI18n")
        @Override
        public void run() {
            mFrontalFaceNumber.setText(areas + "mm2");
            mProfileFaceNumber.setText("CameraDistance:" + mRgba.width() + "x" + mRgba.height());
            mCurrentNumber.setText("distance:" + distance + "mm");
            mWaitTime.setText("");
        }
    }, 0);
    return mRgba;
}
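The distance formula above is empirical (the constants 153, 36, 30, and 1.5 were tuned by hand). The underlying idea is the pinhole-camera model: by similar triangles, distance ≈ focal length (in pixels) × real face width / detected face width (in pixels). A minimal sketch with hypothetical numbers; the 500 px focal length and 150 mm face width are assumptions for illustration, not values from this article:

```java
public class FaceDistance {
    // Pinhole-camera estimate: distanceMm = focalLengthPx * realWidthMm / faceWidthPx.
    static double estimateDistanceMm(double focalLengthPx, double realWidthMm, double faceWidthPx) {
        return focalLengthPx * realWidthMm / faceWidthPx;
    }

    public static void main(String[] args) {
        // Assumed: focal length 500 px, real face width 150 mm, detected width 100 px
        System.out.println(estimateDistanceMm(500, 150, 100)); // 750.0 (mm)
    }
}
```

In practice focalLengthPx must be calibrated per device, which is exactly the obstacle discussed in the conclusion.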
This uses a small rotation utility:
public static Mat rotate(Mat src, double angle) {
    // Note: warpAffine into a same-size dst clips the corners of a non-square
    // frame for 90/270 degree rotations; Core.rotate avoids this for right angles.
    Mat dst = src.clone();
    Point center = new Point(src.width() / 2.0, src.height() / 2.0);
    Mat affineTrans = Imgproc.getRotationMatrix2D(center, angle, 1.0);
    Imgproc.warpAffine(src, dst, affineTrans, dst.size(), Imgproc.INTER_NEAREST);
    return dst;
}
Now you can run it and see the result.
Conclusion
My goal with this was to measure the face-to-screen distance. We can get the detected face width in pixels, but the calculation also needs the focal length, the full-frame equivalent focal length, and so on. Since no API provided the actual focal length, and there are far too many Android device models, I stopped here.
A follow-up will cover distance measurement with ARCore, as well as the idea of converting OpenCV 2D models to 3D for the calculation.
Questions are welcome in the comments or by private message.