Documentation
Documentation is generated from header comments using appledoc. To build the documentation, switch to the "Documentation" scheme in Xcode. You should ensure that "APPLEDOC_PATH" (a User-Defined build setting) points to an appledoc binary, which is available on GitHub or through Homebrew. The build will also install a .docset file, which you can view with your favorite documentation tool.
Performing common tasks
Filtering live video
To filter live video from an iOS device's camera, you can use code like the following:
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
// Add the view somewhere so it's visible
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. This video is captured with the interface being in portrait mode, where the landscape-left-mounted camera needs to have its video frames rotated before display. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.
The fill mode of the GPUImageView can be altered by setting its fillMode property, so that if the aspect ratio of the source video is different from that of the view, the video will either be stretched, centered with black bars, or zoomed to fill.
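For example, to letterbox the video inside the view while preserving its aspect ratio, you could set the property like this (using the fill-mode constants declared in GPUImageView's header, and the filteredVideoView from the snippet above):

    // Preserve the source aspect ratio, adding black bars as needed
    filteredVideoView.fillMode = kGPUImageFillModePreserveAspectRatio;

    // Other options:
    //   kGPUImageFillModeStretch                    - distort the video to fill the view
    //   kGPUImageFillModePreserveAspectRatioAndFill - zoom in and crop to fill the view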
For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order in which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.
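As a sketch, blending a still image over the live camera feed with a GPUImageAlphaBlendFilter might look like the following. The image name "overlay.png" is a hypothetical asset, and videoCamera and filteredVideoView are assumed to be set up as in the earlier snippet; the first addTarget: call determines the base layer of the blend:

    GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
    blendFilter.mix = 0.5; // 0.0 shows only the first input, 1.0 only the second

    // "overlay.png" is a hypothetical image in the app bundle
    GPUImagePicture *overlayPicture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"overlay.png"]];

    // The camera becomes the first input, the picture the second
    [videoCamera addTarget:blendFilter];
    [overlayPicture addTarget:blendFilter];

    [blendFilter addTarget:filteredVideoView];
    [overlayPicture processImage];
    [videoCamera startCameraCapture];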
Also, if you wish to enable microphone audio capture for recording to a movie, you'll need to set the audioEncodingTarget of the camera to be your movie writer, as in the following:
另外谚咬,如果你想使麥克風(fēng)音頻捕捉到movie,你需要設(shè)置相機(jī)的audioEncodingTarget是你的movie 寫入者尚粘,如下列:
videoCamera.audioEncodingTarget = movieWriter;
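In context, the movie writer is typically created and wired into the filter chain before capture begins. A minimal sketch, where the output path and the 480x640 recording size are illustrative choices and customFilter is the filter from the earlier snippet:

    NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
    NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
    GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];

    // Route filtered frames and microphone audio into the writer
    [customFilter addTarget:movieWriter];
    videoCamera.audioEncodingTarget = movieWriter;

    [movieWriter startRecording];
    // ... later, when recording should stop:
    // [movieWriter finishRecording];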
Capturing and filtering a still photo
To capture and filter still photos, you can use a process similar to the one for filtering video. Instead of a GPUImageVideoCamera, you use a GPUImageStillCamera:
stillCamera = [[GPUImageStillCamera alloc] init];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
filter = [[GPUImageGammaFilter alloc] init];
[stillCamera addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[stillCamera startCameraCapture];
This will give you a live, filtered feed of the still camera's preview video. Note that this preview video is only provided on iOS 4.3 and higher, so you may need to set that as your deployment target if you wish to have this functionality.
Once you want to capture a photo, you use a callback block like the following:
[stillCamera capturePhotoProcessedUpToFilter:filter withCompletionHandler:^(UIImage *processedImage, NSError *error){
    NSData *dataForJPEGFile = UIImageJPEGRepresentation(processedImage, 0.8);

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];

    NSError *error2 = nil;
    if (![dataForJPEGFile writeToFile:[documentsDirectory stringByAppendingPathComponent:@"FilteredPhoto.jpg"] options:NSAtomicWrite error:&error2])
    {
        return;
    }
}];
The above code captures a full-size photo processed by the same filter chain used in the preview view and saves that photo to disk as a JPEG in the application's documents directory.
Note that the framework currently can't handle images larger than 2048 pixels wide or high on older devices (those before the iPhone 4S, iPad 2, or Retina iPad) due to texture size limitations. This means that the iPhone 4, whose camera outputs still photos larger than this, won't be able to capture photos like this. A tiling mechanism is being implemented to work around this. All other devices should be able to capture and filter photos using this method.