1. An overview of Camera2:
① CameraManager: the entry point to the whole framework, used to initialize the other classes;
② CameraCharacteristics: obtained from the CameraManager; exposes the camera's parameters;
③ CameraDevice: obtained from the CameraManager; the counterpart of the old Camera class, used to drive the preview and capture, e.g. setting the Surface on which the preview is displayed;
④ CaptureRequest.Builder: obtained from the CameraDevice; used to configure the preview request;
⑤ CameraCaptureSession: obtained from the CameraDevice; drives the preview built with CaptureRequest.Builder.
Usage flow:
CameraManager -> CameraDevice -> CaptureRequest.Builder -> CameraCaptureSession
(Original article: https://blog.csdn.net/ccw0054/article/details/80339208)
Below is an analysis of the code.
Official samples: https://github.com/android/camera-samples
The following is the demo code:
private void openCamera(int width, int height) {
    if (ContextCompat.checkSelfPermission(getActivity(), Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        requestCameraPermission();
        return;
    }
    setUpCameraOutputs(width, height);
    Activity activity = getActivity();
    CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
    try {
        if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
            throw new RuntimeException("Time out waiting to lock camera opening.");
        }
        // Open the camera chosen in setUpCameraOutputs rather than hard-coding "0".
        manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } catch (InterruptedException e) {
        throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
    }
}
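The snippet above uses an mBackgroundHandler that is never defined in the demo code shown here. A minimal sketch, assuming the same pattern as the official Camera2Basic sample, where a HandlerThread keeps camera callbacks off the UI thread:

private HandlerThread mBackgroundThread;
private Handler mBackgroundHandler;

// Start a HandlerThread so camera callbacks don't block the UI thread.
private void startBackgroundThread() {
    mBackgroundThread = new HandlerThread("CameraBackground");
    mBackgroundThread.start();
    mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
}

// Quit the thread safely, e.g. when the fragment is paused.
private void stopBackgroundThread() {
    mBackgroundThread.quitSafely();
    try {
        mBackgroundThread.join();
        mBackgroundThread = null;
        mBackgroundHandler = null;
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}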
private void setUpCameraOutputs(int width, int height) {
    Activity activity = getActivity();
    CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
    try {
        for (String cameraId : manager.getCameraIdList()) {
            CameraCharacteristics characteristics
                    = manager.getCameraCharacteristics(cameraId);

            // We don't use a front facing camera in this sample.
            Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
            if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                continue;
            }

            StreamConfigurationMap map = characteristics.get(
                    CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            if (map == null) {
                continue;
            }

            // For still image captures, we use the largest available size.
            Size largest = Collections.max(
                    Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
                    new CompareSizesByArea());

            // Find out if we need to swap dimensions to get the preview size relative to
            // sensor coordinates.
            int displayRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
            //noinspection ConstantConditions
            mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
            boolean swappedDimensions = false;
            switch (displayRotation) {
                case Surface.ROTATION_0:
                case Surface.ROTATION_180:
                    if (mSensorOrientation == 90 || mSensorOrientation == 270) {
                        swappedDimensions = true;
                    }
                    break;
                case Surface.ROTATION_90:
                case Surface.ROTATION_270:
                    if (mSensorOrientation == 0 || mSensorOrientation == 180) {
                        swappedDimensions = true;
                    }
                    break;
                default:
                    Log.e(TAG, "Display rotation is invalid: " + displayRotation);
            }

            Point displaySize = new Point();
            activity.getWindowManager().getDefaultDisplay().getSize(displaySize);
            int rotatedPreviewWidth = width;
            int rotatedPreviewHeight = height;
            int maxPreviewWidth = displaySize.x;
            int maxPreviewHeight = displaySize.y;
            if (swappedDimensions) {
                rotatedPreviewWidth = height;
                rotatedPreviewHeight = width;
                maxPreviewWidth = displaySize.y;
                maxPreviewHeight = displaySize.x;
            }
            if (maxPreviewWidth > MAX_PREVIEW_WIDTH) {
                maxPreviewWidth = MAX_PREVIEW_WIDTH;
            }
            if (maxPreviewHeight > MAX_PREVIEW_HEIGHT) {
                maxPreviewHeight = MAX_PREVIEW_HEIGHT;
            }

            // Danger, W.R.! Attempting to use too large a preview size could exceed the camera
            // bus' bandwidth limitation, resulting in gorgeous previews but the storage of
            // garbage capture data.
            mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class),
                    rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth,
                    maxPreviewHeight, largest);

            // We fit the aspect ratio of TextureView to the size of preview we picked.
            int orientation = getResources().getConfiguration().orientation;
            if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
                autoFitTextureView.setAspectRatio(
                        mPreviewSize.getWidth(), mPreviewSize.getHeight());
            } else {
                autoFitTextureView.setAspectRatio(
                        mPreviewSize.getHeight(), mPreviewSize.getWidth());
            }

            // Check if the flash is supported.
            Boolean available = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
            mFlashSupported = available == null ? false : available;

            // This demo reads frames as YUV_420_888 rather than JPEG.
            mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
                    ImageFormat.YUV_420_888, /*maxImages*/5);

            mCameraId = cameraId;
            return;
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } catch (NullPointerException e) {
        // Currently an NPE is thrown when the Camera2 API is used but not supported on the
        // device this code runs on.
        ErrorDialog.newInstance(getString(R.string.camera_error))
                .show(getChildFragmentManager(), FRAGMENT_DIALOG);
    }
}
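setUpCameraOutputs relies on two helpers, CompareSizesByArea and chooseOptimalSize, that are not listed in the demo code. A sketch along the lines of the official Camera2Basic sample:

static class CompareSizesByArea implements Comparator<Size> {
    @Override
    public int compare(Size lhs, Size rhs) {
        // Cast to long to avoid overflow when multiplying large dimensions.
        return Long.signum((long) lhs.getWidth() * lhs.getHeight()
                - (long) rhs.getWidth() * rhs.getHeight());
    }
}

private static Size chooseOptimalSize(Size[] choices, int textureViewWidth,
        int textureViewHeight, int maxWidth, int maxHeight, Size aspectRatio) {
    List<Size> bigEnough = new ArrayList<>();    // at least as large as the view
    List<Size> notBigEnough = new ArrayList<>(); // smaller than the view
    int w = aspectRatio.getWidth();
    int h = aspectRatio.getHeight();
    for (Size option : choices) {
        if (option.getWidth() <= maxWidth && option.getHeight() <= maxHeight
                && option.getHeight() == option.getWidth() * h / w) {
            if (option.getWidth() >= textureViewWidth
                    && option.getHeight() >= textureViewHeight) {
                bigEnough.add(option);
            } else {
                notBigEnough.add(option);
            }
        }
    }
    // Prefer the smallest size that is big enough; otherwise the largest that isn't.
    if (bigEnough.size() > 0) {
        return Collections.min(bigEnough, new CompareSizesByArea());
    } else if (notBigEnough.size() > 0) {
        return Collections.max(notBigEnough, new CompareSizesByArea());
    } else {
        return choices[0];
    }
}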
private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {

    @Override
    public void onOpened(@NonNull CameraDevice cameraDevice) {
        // This method is called when the camera is opened. We start camera preview here.
        mCameraOpenCloseLock.release();
        mCameraDevice = cameraDevice;
        //createCameraPreviewSession();
    }

    @Override
    public void onDisconnected(@NonNull CameraDevice cameraDevice) {
        mCameraOpenCloseLock.release();
        cameraDevice.close();
        mCameraDevice = null;
    }

    @Override
    public void onError(@NonNull CameraDevice cameraDevice, int error) {
        mCameraOpenCloseLock.release();
        cameraDevice.close();
        mCameraDevice = null;
        Activity activity = getActivity();
        if (null != activity) {
            activity.finish();
        }
    }
};
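The mCameraOpenCloseLock released in each callback above is a Semaphore that prevents the app from exiting before the camera is closed. Its declaration and the matching close path are not in the demo code; a sketch, again following the official sample:

private final Semaphore mCameraOpenCloseLock = new Semaphore(1);

private void closeCamera() {
    try {
        mCameraOpenCloseLock.acquire();
        if (null != mCaptureSession) {
            mCaptureSession.close();
            mCaptureSession = null;
        }
        if (null != mCameraDevice) {
            mCameraDevice.close();
            mCameraDevice = null;
        }
        if (null != mImageReader) {
            mImageReader.close();
            mImageReader = null;
        }
    } catch (InterruptedException e) {
        throw new RuntimeException("Interrupted while trying to lock camera closing.", e);
    } finally {
        mCameraOpenCloseLock.release();
    }
}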
private void createCameraPreviewSession() {
    try {
        mSurfaceTexture.setDefaultBufferSize(MAX_PREVIEW_WIDTH, MAX_PREVIEW_HEIGHT);

        // This is the output Surface we need to start the preview.
        Surface surface = new Surface(mSurfaceTexture);

        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder
                = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(surface);
        //mPreviewRequestBuilder.addTarget(mImageReader.getSurface());

        // Here, we create a CameraCaptureSession for camera preview.
        mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {

                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                        // The camera is already closed.
                        if (null == mCameraDevice) {
                            return;
                        }

                        // When the session is ready, we start displaying the preview.
                        mCaptureSession = cameraCaptureSession;

                        // Auto focus should be continuous for camera preview.
                        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                        // Flash is automatically enabled when necessary.
                        setAutoFlash(mPreviewRequestBuilder);

                        // Finally, we start displaying the camera preview.
                        mPreviewRequest = mPreviewRequestBuilder.build();
                        try {
                            mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                    null, mBackgroundHandler);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(
                            @NonNull CameraCaptureSession cameraCaptureSession) {
                        showToast("Failed");
                    }
                }, mBackgroundHandler
        );
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
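setAutoFlash, called inside onConfigured, is another helper omitted from the listing; in the official sample it simply enables auto-exposure with automatic flash on devices that support it:

private void setAutoFlash(CaptureRequest.Builder requestBuilder) {
    if (mFlashSupported) {
        // Let auto-exposure fire the flash automatically when needed.
        requestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
    }
}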
As the code shows, openCamera first determines an output size (via setUpCameraOutputs) so that the data coming from the camera is not stretched and deformed. It then obtains the camera service with CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE); and calls manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler). Here mStateCallback is the device state callback, a feature specific to Camera2 that reports the camera's state back to us. Once the camera is open, we create the preview session with createCameraPreviewSession; in that function the SurfaceTexture we created (bound to a GL texture ID) is wired up to mPreviewRequestBuilder, so that the preview data can be displayed correctly. Note that mImageReader.getSurface() is also passed to createCaptureSession, so YUV frames can be read alongside the on-screen preview.
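Since mImageReader's Surface is a session output (and the builder can also target it, per the commented-out addTarget line), its frames must be drained or the pipeline stalls once all maxImages buffers are in flight. A sketch of such a listener (the callback body is illustrative, not from the demo):

private final ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null) {
            return;
        }
        try {
            // Process the YUV_420_888 planes here (e.g. hand them to a filter).
        } finally {
            // Always close the Image, or the reader runs out of buffers.
            image.close();
        }
    }
};
// Registered on the background thread, e.g. in setUpCameraOutputs:
// mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);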
2. An overview of GLSurfaceView:
GLSurfaceView complements SurfaceView by adding EGL management (my take: it provides a very convenient interface for using OpenGL ES). The first thing GLSurfaceView brings to mind is the Renderer, which typically looks like this:
private GLSurfaceView.Renderer renderer = new GLSurfaceView.Renderer() {
    @Override
    public void onSurfaceCreated(GL10 gl10, EGLConfig eglConfig) {
        // Called once the EGL context is ready: create textures, shaders and buffers here.
    }

    @Override
    public void onSurfaceChanged(GL10 gl10, int width, int height) {
        // Called when the surface size changes: update the viewport here.
    }

    @Override
    public void onDrawFrame(GL10 gl10) {
        // Called for every frame: issue the draw calls here.
    }
};
The Renderer splits into three parts: onSurfaceCreated, onSurfaceChanged and onDrawFrame. In onSurfaceCreated we usually set up texture coordinates and vertex coordinates. For the texture, we use a texture ID to bind our SurfaceTexture, which is how the camera image gets loaded into our view; the vertex data determines where on screen, and with what colors, that texture is mapped. Through the texture we obtain an image of the scene, and through the vertices we can draw that image however the requirements demand. To run a preview, we first open the camera (the openCamera function from section 1), then create a texture object (ID), and after that load the texture and vertex coordinates; a minimal sketch of this texture setup follows below. While loading the vertices we can also modify the shader code to operate on the pixels; these shaders live in the raw folder and include the three filters implemented by the demo.
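To make the binding between the texture ID and the SurfaceTexture concrete, here is a minimal sketch (field and method names are my own; the demo's actual code is in the GitHub repo below). The camera renders into an external OES texture, and onDrawFrame pulls the latest frame before drawing:

private int mOesTextureId;
private SurfaceTexture mSurfaceTexture;

// In onSurfaceCreated: create the external OES texture the camera will render into.
private int createOesTexture() {
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    return tex[0];
}

// Bind the texture ID to a SurfaceTexture; its Surface is what Camera2 targets.
private void bindSurfaceTexture() {
    mOesTextureId = createOesTexture();
    mSurfaceTexture = new SurfaceTexture(mOesTextureId);
}

// In onDrawFrame, before drawing: pull the latest camera frame into the OES texture.
// mSurfaceTexture.updateTexImage();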
The three filters (screenshots are in the original post): binarization (二值), edge detection (邊緣), and nine-grid split screen (9分屏).
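As a taste of what such a filter shader can look like, here is a hypothetical binarization fragment shader embedded as a Java string constant (the demo keeps its real shaders under res/raw; vTexCoord, sTexture and the 0.5 threshold are all assumptions). Note the samplerExternalOES required for camera textures:

// Hypothetical binarization shader: threshold the luminance to pure black or white.
private static final String BINARY_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    vec4 c = texture2D(sTexture, vTexCoord);\n" +
        "    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" + // luminance
        "    float b = step(0.5, y);\n" +                          // hard threshold
        "    gl_FragColor = vec4(vec3(b), 1.0);\n" +
        "}\n";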
GitHub code: https://github.com/Frank1481906280/GlCV4Android