Still and Video Media Capture - 靜態(tài)視頻媒體捕獲

文章目錄

  1. 1. Still and Video Media Capture - 靜態(tài)視頻媒體捕獲
    1. 1.1. Use a Capture Session to Coordinate Data Flow - 使用捕捉會話來協(xié)調(diào)數(shù)據(jù)流
      1. 1.1.1. Configuring a Session - 配置會話
      2. 1.1.2. Monitoring Capture Session State - 監(jiān)視捕獲會話狀態(tài)
    2. 1.2. An AVCaptureDevice Object Represents an Input Device - 一個 AVCaptureDevice 對象代表一個輸入設(shè)備
      1. 1.2.1. Device Characteristics - 設(shè)備特點
      2. 1.2.2. Device Capture Settings
        1. 1.2.2.1. Focus Modes - 聚焦模式
        2. 1.2.2.2. Exposure Modes - 曝光模式
        3. 1.2.2.3. Flash Modes - 閃光模式
        4. 1.2.2.4. Torch Mode - 手電筒模式
        5. 1.2.2.5. Video Stabilization - 視頻穩(wěn)定性
        6. 1.2.2.6. White Balance - 白平衡
        7. 1.2.2.7. Setting Device Orientation - 設(shè)置設(shè)備方向
      3. 1.2.3. Configuring a Device - 配置設(shè)備
      4. 1.2.4. Switching Between Devices - 切換裝置
    3. 1.3. Use Capture Inputs to Add a Capture Device to a Session - 使用捕獲輸入將捕獲設(shè)備添加到會話中
    4. 1.4. Use Capture Outputs to Get Output from a Session - 使用捕獲輸出從會話得到輸出
      1. 1.4.1. Saving to a Movie File - 保存電影文件
        1. 1.4.1.1. Starting a Recording - 開始記錄
        2. 1.4.1.2. Ensuring That the File Was Written Successfully - 確保文件被成功寫入
        3. 1.4.1.3. Adding Metadata to a File - 將元數(shù)據(jù)添加到文件中
        4. 1.4.1.4. Processing Frames of Video - 處理視頻的幀
        5. 1.4.1.5. Performance Considerations for Processing Video - 處理視頻的性能考慮
      2. 1.4.2. Capturing Still Images - 捕獲靜止圖像
        1. 1.4.2.1. Pixel and Encoding Formats - 像素和編碼格式
        2. 1.4.2.2. Capturing an Image - 捕獲圖像
    5. 1.5. Showing the User What’s Being Recorded - 顯示用戶正在被記錄什么
      1. 1.5.1. Video Preview - 視頻預(yù)覽
        1. 1.5.1.1. Video Gravity Modes - 視屏重力模式
        2. 1.5.1.2. Using “Tap to Focus” with a Preview - 使用“點擊焦點”預(yù)覽
      2. 1.5.2. Showing Audio Levels - 顯示音頻等級
    6. 1.6. Putting It All Together: Capturing Video Frames as UIImage Objects - 總而言之:捕獲視頻幀用作 UIImage 對象
      1. 1.6.1. Create and Configure a Capture Session - 創(chuàng)建和配置捕獲會話
      2. 1.6.2. Create and Configure the Device and Device Input - 創(chuàng)建和配置設(shè)備及設(shè)備輸入
      3. 1.6.3. Create and Configure the Video Data Output - 創(chuàng)建和配置視頻數(shù)據(jù)輸出
      4. 1.6.4. Implement the Sample Buffer Delegate Method - 實現(xiàn)示例緩沖代理方法
      5. 1.6.5. Starting and Stopping Recording - 啟動和停止錄制
    7. 1.7. High Frame Rate Video Capture - 高幀速率視頻捕獲
      1. 1.7.1. Playback - 播放
      2. 1.7.2. Editing - 編輯
      3. 1.7.3. Export - 導(dǎo)出
      4. 1.7.4. Recording - 錄制

Still and Video Media Capture - 靜態(tài)視頻媒體捕獲。

To manage the capture from a device such as a camera or microphone, you assemble objects to represent inputs and outputs, and use an instance of AVCaptureSession to coordinate the data flow between them. Minimally you need:

  • An instance of AVCaptureDevice to represent the input device, such as a camera or microphone
  • An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device
  • An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image
  • An instance of AVCaptureSession to coordinate the data flow from the input to the output

要管理來自照相機(jī)或麥克風(fēng)等設(shè)備的捕獲，需要組合一些對象來表示輸入和輸出，并使用 AVCaptureSession 的實例來協(xié)調(diào)它們之間的數(shù)據(jù)流。你至少需要：

  • AVCaptureDevice 的實例，表示輸入設(shè)備，比如照相機(jī)或麥克風(fēng)
  • AVCaptureInput 具體子類的實例，用來配置輸入設(shè)備的端口
  • AVCaptureOutput 具體子類的實例，用來管理輸出到電影文件或靜態(tài)圖像
  • AVCaptureSession 的實例，用來協(xié)調(diào)從輸入到輸出的數(shù)據(jù)流

To show the user a preview of what the camera is recording, you can use an instance of AVCaptureVideoPreviewLayer (a subclass of CALayer).

You can configure multiple inputs and outputs, coordinated by a single session, as shown in Figure 4-1

為了向用戶展示照相機(jī)正在記錄的內(nèi)容的預(yù)覽，可以使用 AVCaptureVideoPreviewLayer 的實例（CALayer 的一個子類）。

可以配置多個輸入和輸出，由一個單獨的會話協(xié)調(diào)，如圖4-1所示：

Figure 4-1 A single session can configure multiple inputs and outputs

For many applications, this is as much detail as you need. For some operations, however, (if you want to monitor the power levels in an audio channel, for example) you need to consider how the various ports of an input device are represented and how those ports are connected to the output.

對于許多程序來說，知道這些細(xì)節(jié)就足夠了。然而對于某些操作（例如想監(jiān)視音頻信道中的功率水平），需要考慮輸入設(shè)備的各種端口如何表示，以及這些端口是如何連接到輸出的。

A connection between a capture input and a capture output in a capture session is represented by an AVCaptureConnection object. Capture inputs (instances of AVCaptureInput) have one or more input ports (instances of AVCaptureInputPort). Capture outputs (instances of AVCaptureOutput) can accept data from one or more sources (for example, an AVCaptureMovieFileOutput object accepts both video and audio data).

捕獲會話中捕獲輸入與捕獲輸出之間的連接由 AVCaptureConnection 對象表示。捕獲輸入（AVCaptureInput 的實例）有一個或多個輸入端口（AVCaptureInputPort 的實例）。捕獲輸出（AVCaptureOutput 的實例）可以從一個或多個來源接受數(shù)據(jù)（例如，AVCaptureMovieFileOutput 對象同時接受視頻和音頻數(shù)據(jù)）。

When you add an input or an output to a session, the session forms connections between all the compatible capture inputs’ ports and capture outputs, as shown in Figure 4-2. A connection between a capture input and a capture output is represented by an AVCaptureConnection object.

當(dāng)給會話添加一個輸入或者一個輸出時，會話會在所有兼容的捕獲輸入端口和捕獲輸出之間建立連接，如圖4-2所示。捕獲輸入與捕獲輸出之間的連接由 AVCaptureConnection 對象表示。

Figure 4-2 AVCaptureConnection represents a connection between an input and output

You can use a capture connection to enable or disable the flow of data from a given input or to a given output. You can also use a connection to monitor the average and peak power levels in an audio channel.

可以使用捕獲連接來啟用或者禁用來自給定輸入或到給定輸出的數(shù)據(jù)流。也可以使用連接來監(jiān)視音頻信道中的平均和峰值功率水平。

Note: Media capture does not support simultaneous capture of both the front-facing and back-facing cameras on iOS devices.

注意：媒體捕獲不支持iOS設(shè)備上的前置攝像頭和后置攝像頭的同時捕捉。

Use a Capture Session to Coordinate Data Flow - 使用捕捉會話來協(xié)調(diào)數(shù)據(jù)流

An AVCaptureSession object is the central coordinating object you use to manage data capture. You use an instance to coordinate the flow of data from AV input devices to outputs. You add the capture devices and outputs you want to the session, then start data flow by sending the session a startRunning message, and stop the data flow by sending a stopRunning message.

AVCaptureSession 對象是你用來管理數(shù)據(jù)捕獲的中央?yún)f(xié)調(diào)對象。使用一個實例來協(xié)調(diào)從 AV 輸入設(shè)備到輸出的數(shù)據(jù)流。將你想要的捕獲設(shè)備和輸出添加到會話中，然后發(fā)送 startRunning 消息啟動數(shù)據(jù)流，發(fā)送 stopRunning 消息來停止數(shù)據(jù)流。

AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
//添加輸入和輸出绿鸣。
[session startRunning];
  • Configuring a Session - 配置會話

You use a preset on the session to specify the image quality and resolution you want. A preset is a constant that identifies one of a number of possible configurations; in some cases the actual configuration is device-specific:

使用會話上的 preset 來指定圖像的質(zhì)量和分辨率。預(yù)設(shè)是一個常量，標(biāo)識了若干可能配置中的一個；在某些情況下，實際的配置是設(shè)備特有的：

| Symbol | Resolution | Comments |
| --- | --- | --- |
| AVCaptureSessionPresetHigh | High | Highest recording quality. This varies per device. |
| AVCaptureSessionPresetMedium | Medium | Suitable for Wi-Fi sharing. The actual values may change. |
| AVCaptureSessionPresetLow | Low | Suitable for 3G sharing. The actual values may change. |
| AVCaptureSessionPreset640x480 | 640x480 | VGA. |
| AVCaptureSessionPreset1280x720 | 1280x720 | 720p HD. |
| AVCaptureSessionPresetPhoto | Photo | Full photo resolution. This is not supported for video output. |

If you want to set a media frame size-specific configuration, you should check whether it is supported before setting it, as follows:

如果要設(shè)置媒體幀大小相關(guān)的特定配置，應(yīng)該在設(shè)置之前檢查該配置是否被支持，如下所示：

if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
}
else {
    // Handle the failure.
}

If you need to adjust session parameters at a more granular level than is possible with a preset, or you’d like to make changes to a running session, you surround your changes with the beginConfiguration and commitConfiguration methods. The beginConfiguration and commitConfiguration methods ensure that devices changes occur as a group, minimizing visibility or inconsistency of state. After calling beginConfiguration, you can add or remove outputs, alter the sessionPreset property, or configure individual capture input or output properties. No changes are actually made until you invoke commitConfiguration, at which time they are applied together.

如果需要在比預(yù)設(shè)更精細(xì)的水平上調(diào)整會話參數(shù)，或者想對一個正在運行的會話做些改變，可以用 beginConfiguration 和 commitConfiguration 方法包圍你的變更。beginConfiguration 和 commitConfiguration 方法確保設(shè)備變更作為一組發(fā)生，將狀態(tài)的可見性或不一致性降到最低。調(diào)用 beginConfiguration 之后，可以添加或者移除輸出，改變 sessionPreset 屬性，或者單獨配置捕獲輸入或輸出屬性。在你調(diào)用 commitConfiguration 之前實際上不會發(fā)生任何變化，調(diào)用的時候它們才被一起應(yīng)用。

[session beginConfiguration];
// Remove an existing capture device.
// Add a new capture device.
// Reset the preset.
//刪除現(xiàn)有的捕捉設(shè)備形入。
//添加一個新的捕獲設(shè)備全跨。
//重置預(yù)設(shè)。
[session commitConfiguration];
  • Monitoring Capture Session State - 監(jiān)視捕獲會話狀態(tài)

A capture session posts notifications that you can observe to be notified, for example, when it starts or stops running, or when it is interrupted. You can register to receive an AVCaptureSessionRuntimeErrorNotification if a runtime error occurs. You can also interrogate the session’s running property to find out if it is running, and its interrupted property to find out if it is interrupted. Additionally, both the running and interrupted properties are key-value observing compliant and the notifications are posted on the main thread.

捕獲會話會發(fā)出你可以觀察的 notifications，例如當(dāng)它開始或者停止運行，或者當(dāng)它被中斷時。如果發(fā)生運行時錯誤，可以注冊接收 AVCaptureSessionRuntimeErrorNotification。也可以查詢會話的 running 屬性來了解它是否正在運行，以及 interrupted 屬性來了解它是否被中斷。此外，running 和 interrupted 屬性都遵從 key-value observing，并且通知都是在主線程上發(fā)布的。
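例如，下面的代碼段演示了如何注冊運行時錯誤通知，并對 running 屬性做 key-value observing。這只是一個示意：其中的 sessionRuntimeError: 是一個假設(shè)的回調(diào)方法，需要你在觀察者類中自行實現(xiàn)。

```objc
AVCaptureSession *session = <#A capture session#>;

// 注冊運行時錯誤通知（通知在主線程上發(fā)布）。
// sessionRuntimeError: 是假設(shè)的回調(diào)方法，由你自己實現(xiàn)。
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(sessionRuntimeError:)
                                             name:AVCaptureSessionRuntimeErrorNotification
                                           object:session];

// running 屬性遵從 key-value observing，可以直接觀察。
[session addObserver:self
          forKeyPath:@"running"
             options:NSKeyValueObservingOptionNew
             context:NULL];
```

不再需要時，記得用 removeObserver: 解除通知和 KVO 的注冊。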

An AVCaptureDevice Object Represents an Input Device - 一個 AVCaptureDevice 對象代表一個輸入設(shè)備

An AVCaptureDevice object abstracts a physical capture device that provides input data (such as audio or video) to an AVCaptureSession object. There is one object for each input device, for example, two video inputs—one for the front-facing camera, one for the back-facing camera—and one audio input for the microphone.

一個 AVCaptureDevice 對象抽象了物理捕獲設(shè)備，該設(shè)備向 AVCaptureSession 對象提供輸入數(shù)據(jù)（比如音頻或者視頻）。每個輸入設(shè)備都有一個對象，例如，兩個視頻輸入：一個用于前置攝像頭，一個用于后置攝像頭；還有一個用于麥克風(fēng)的音頻輸入。

You can find out which capture devices are currently available using the AVCaptureDevice class methods devices and devicesWithMediaType:. And, if necessary, you can find out what features an iPhone, iPad, or iPod offers (see Device Capture Settings). The list of available devices may change, though. Current input devices may become unavailable (if they’re used by another application), and new input devices may become available, (if they’re relinquished by another application). You should register to receive AVCaptureDeviceWasConnectedNotification and AVCaptureDeviceWasDisconnectedNotification notifications to be alerted when the list of available devices changes.

使用 AVCaptureDevice 類方法 devices 和 devicesWithMediaType: 可以找出當(dāng)前哪些捕獲設(shè)備是可用的。而且如果有必要，可以找出 iPhone，iPad 或者 iPod 提供了什么功能（詳情看：Device Capture Settings）。不過，可用設(shè)備的列表可能會改變。當(dāng)前輸入設(shè)備可能變得不可用（如果它們被另一個應(yīng)用程序占用），新的輸入設(shè)備也可能變得可用（如果它們被另一個應(yīng)用程序釋放）。應(yīng)該注冊接收 AVCaptureDeviceWasConnectedNotification 和 AVCaptureDeviceWasDisconnectedNotification 通知，以便在可用設(shè)備列表改變時得到提醒。
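注冊這兩個通知的一個簡單示意如下。其中 deviceConnected: 和 deviceDisconnected: 是假設(shè)的回調(diào)方法名，由你在觀察者類中實現(xiàn)。

```objc
// deviceConnected: 和 deviceDisconnected: 是假設(shè)的回調(diào)方法，需自行實現(xiàn)。
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(deviceConnected:)
                                             name:AVCaptureDeviceWasConnectedNotification
                                           object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(deviceDisconnected:)
                                             name:AVCaptureDeviceWasDisconnectedNotification
                                           object:nil];
```

在回調(diào)中，通知的 object 就是新連接（或斷開）的 AVCaptureDevice。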

You add an input device to a capture session using a capture input (see Use Capture Inputs to Add a Capture Device to a Session).

使用捕獲輸入將輸入設(shè)備添加到捕獲會話中（詳情請看：Use Capture Inputs to Add a Capture Device to a Session）。

  • Device Characteristics - 設(shè)備特點

You can ask a device about its different characteristics. You can also test whether it provides a particular media type or supports a given capture session preset using hasMediaType: and supportsAVCaptureSessionPreset: respectively. To provide information to the user, you can find out the position of the capture device (whether it is on the front or the back of the unit being tested), and its localized name. This may be useful if you want to present a list of capture devices to allow the user to choose one.

你可以詢問設(shè)備的各種不同特性。也可以分別使用 hasMediaType: 和 supportsAVCaptureSessionPreset: 測試它是否提供某種特定的媒體類型，或是否支持給定的捕捉會話預(yù)設(shè)。為了給用戶提供信息，可以找出捕捉設(shè)備的位置（它是在被測單元的前面還是后面），以及它的本地化名稱。如果你想展示一個捕獲設(shè)備的列表讓用戶選擇，這是很有用的。

Figure 4-3 shows the positions of the back-facing (AVCaptureDevicePositionBack) and front-facing (AVCaptureDevicePositionFront) cameras.

圖4-3顯示了后置攝像頭（AVCaptureDevicePositionBack）和前置攝像頭（AVCaptureDevicePositionFront）的位置。

Note: Media capture does not support simultaneous capture of both the front-facing and back-facing cameras on iOS devices.

注意:媒體捕獲在iOS設(shè)備上不支持前置攝像頭和后置攝像頭同時捕捉。

Figure 4-3 iOS device front and back facing camera positions

The following code example iterates over all the available devices and logs their name—and for video devices, their position—on the unit.

下面的代碼示例遍歷了所有可用的設(shè)備并且記錄了它們的名字，對于視頻設(shè)備，還記錄了它們在裝置上的位置。

NSArray *devices = [AVCaptureDevice devices];
 
for (AVCaptureDevice *device in devices) {
 
    NSLog(@"Device name: %@", [device localizedName]);
 
    if ([device hasMediaType:AVMediaTypeVideo]) {
 
        if ([device position] == AVCaptureDevicePositionBack) {
            NSLog(@"Device position : back");
        }
        else {
            NSLog(@"Device position : front");
        }
    }
}

In addition, you can find out the device’s model ID and its unique ID.

此外，你可以找到該設(shè)備的 model ID 和它的 unique ID。
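例如（僅為示意），可以直接讀取這兩個屬性：

```objc
AVCaptureDevice *device = <#A capture device#>;

// modelID 描述設(shè)備型號，uniqueID 在同類設(shè)備之間唯一。
NSLog(@"Model ID: %@", [device modelID]);
NSLog(@"Unique ID: %@", [device uniqueID]);
```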

  • Device Capture Settings 設(shè)備捕獲設(shè)置

Different devices have different capabilities; for example, some may support different focus or flash modes; some may support focus on a point of interest.

不同的設(shè)備具有不同的功能；例如，一些可能支持不同的聚焦或者閃光模式；一些可能支持聚焦在一個興趣點上。

The following code fragment shows how you can find video input devices that have a torch mode and support a given capture session preset:

下面的代碼片段展示了如何找到具有手電筒（torch）模式、并且支持給定捕捉會話預(yù)設(shè)的視頻輸入設(shè)備：

NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
NSMutableArray *torchDevices = [[NSMutableArray alloc] init];
 
for (AVCaptureDevice *device in devices) {
    if ([device hasTorch] &&
         [device supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480]) {
        [torchDevices addObject:device];
    }
}

If you find multiple devices that meet your criteria, you might let the user choose which one they want to use. To display a description of a device to the user, you can use its localizedName property.

如果找到多個滿足條件的設(shè)備，你可能會讓用戶選擇一個他們想使用的。要給用戶顯示設(shè)備的描述，可以使用它的 localizedName 屬性。

You use the various different features in similar ways. There are constants to specify a particular mode, and you can ask a device whether it supports a particular mode. In several cases, you can observe a property to be notified when a feature is changing. In all cases, you should lock the device before changing the mode of a particular feature, as described in Configuring a Device.

可以用類似的方法使用各種不同的功能。有常量來指定特定的模式，也可以詢問設(shè)備是否支持特定的模式。在某些情況下，可以通過觀察屬性，在功能發(fā)生變化時收到通知。在所有情況下，改變特定功能的模式之前都應(yīng)該鎖定設(shè)備，如 Configuring a Device 中所描述的。

Note: Focus point of interest and exposure point of interest are mutually exclusive, as are focus mode and exposure mode.

注意：聚焦興趣點和曝光興趣點是互斥的，聚焦模式和曝光模式也是如此。

  • Focus Modes - 聚焦模式

There are three focus modes:

  • AVCaptureFocusModeLocked: The focal position is fixed.
    This is useful when you want to allow the user to compose a scene then lock the focus.
  • AVCaptureFocusModeAutoFocus: The camera does a single scan focus then reverts to locked.
    This is suitable for a situation where you want to select a particular item on which to focus and then maintain focus on that item even if it is not the center of the scene.
  • AVCaptureFocusModeContinuousAutoFocus: The camera continuously autofocuses as needed.

有3個聚焦模式：

  • AVCaptureFocusModeLocked：焦點的位置是固定的。
    當(dāng)你想讓用戶構(gòu)圖一個場景然后鎖定焦點時，這是很有用的。
  • AVCaptureFocusModeAutoFocus：照相機(jī)做一次掃描聚焦，然后將焦點鎖定。
    這適合于這樣的場景：你想要選擇一個特定的項目并保持對它聚焦，即使它不是場景的中心。
  • AVCaptureFocusModeContinuousAutoFocus：相機(jī)根據(jù)需要連續(xù)自動對焦。

You use the isFocusModeSupported: method to determine whether a device supports a given focus mode, then set the mode using the focusMode property.

使用 isFocusModeSupported: 方法來確定設(shè)備是否支持給定的聚焦模式，然后使用 focusMode 屬性設(shè)置模式。

In addition, a device may support a focus point of interest. You test for support using focusPointOfInterestSupported. If it’s supported, you set the focal point using focusPointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.

此外，設(shè)備可能支持聚焦興趣點。使用 focusPointOfInterestSupported 測試是否支持。如果支持，使用 focusPointOfInterest 設(shè)置焦點。傳入一個 CGPoint，在 home 鍵在右邊的橫向模式下，{0,0} 代表圖片區(qū)域的左上角，{1,1} 代表右下角，即使設(shè)備處于縱向模式也同樣適用。

You can use the adjustingFocus property to determine whether a device is currently focusing. You can observe the property using key-value observing to be notified when a device starts and stops focusing.

你可以使用 adjustingFocus 屬性來確定設(shè)備是否正在聚焦。當(dāng)設(shè)備開始和停止聚焦時，可以使用 key-value observing 觀察該屬性來接收通知。
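把上面的步驟組合起來的一個簡單示意如下：先檢查是否支持聚焦興趣點和聚焦模式，然后按照 Configuring a Device 中的要求鎖定設(shè)備再設(shè)置（坐標(biāo)值僅為示例）：

```objc
AVCaptureDevice *device = <#A capture device#>;
if ([device isFocusPointOfInterestSupported] &&
    [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
    NSError *error = nil;
    // 設(shè)置捕獲屬性之前必須先鎖定設(shè)備配置。
    if ([device lockForConfiguration:&error]) {
        device.focusPointOfInterest = CGPointMake(0.25f, 0.75f); // 示例坐標(biāo)
        device.focusMode = AVCaptureFocusModeAutoFocus;
        [device unlockForConfiguration];
    }
}
```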

If you change the focus mode settings, you can return them to the default configuration as follows:

如果改變了聚焦模式設(shè)置，可以將其返回到默認(rèn)配置，如下所示：

if ([currentDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
    CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
    [currentDevice setFocusPointOfInterest:autofocusPoint];
    [currentDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
}
  • Exposure Modes - 曝光模式

There are two exposure modes:

  • AVCaptureExposureModeContinuousAutoExposure: The device automatically adjusts the exposure level as needed.
  • AVCaptureExposureModeLocked: The exposure level is fixed at its current level.

You use the isExposureModeSupported: method to determine whether a device supports a given exposure mode, then set the mode using the exposureMode property.

有兩種曝光模式：

  • AVCaptureExposureModeContinuousAutoExposure：設(shè)備根據(jù)需要自動調(diào)整曝光等級。
  • AVCaptureExposureModeLocked：曝光等級固定在當(dāng)前等級。

使用 isExposureModeSupported: 方法來確定設(shè)備是否支持給定的曝光模式，然后使用 exposureMode 屬性設(shè)置模式。

In addition, a device may support an exposure point of interest. You test for support using exposurePointOfInterestSupported. If it’s supported, you set the exposure point using exposurePointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.

此外，設(shè)備可能支持曝光興趣點。使用 exposurePointOfInterestSupported 測試是否支持。如果支持，使用 exposurePointOfInterest 設(shè)置曝光點。傳入一個 CGPoint，在 home 鍵在右邊的橫向模式下，{0,0} 代表圖片區(qū)域的左上角，{1,1} 代表右下角，即使設(shè)備處于縱向模式也同樣適用。

You can use the adjustingExposure property to determine whether a device is currently changing its exposure setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its exposure setting.

可以使用 adjustingExposure 屬性來確定設(shè)備當(dāng)前是否正在改變它的曝光設(shè)置。當(dāng)設(shè)備開始和停止改變曝光設(shè)置時，可以使用 key-value observing 觀察該屬性來接收通知。
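觀察 adjustingExposure 的一個示意如下。注冊語句和回調(diào)方法分屬不同位置：注冊語句寫在配置代碼中，observeValueForKeyPath:... 則實現(xiàn)在觀察者類里。

```objc
// 配置代碼中：注冊對 adjustingExposure 的觀察。
AVCaptureDevice *device = <#A capture device#>;
[device addObserver:self
         forKeyPath:@"adjustingExposure"
            options:NSKeyValueObservingOptionNew
            context:NULL];

// 觀察者類中實現(xiàn)回調(diào)：
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if ([keyPath isEqualToString:@"adjustingExposure"]) {
        BOOL isAdjusting = [[change objectForKey:NSKeyValueChangeNewKey] boolValue];
        NSLog(@"Adjusting exposure: %@", isAdjusting ? @"YES" : @"NO");
    }
}
```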

If you change the exposure settings, you can return them to the default configuration as follows:

如果改變了曝光設(shè)置，可以將其返回到默認(rèn)配置，如下所示：

if ([currentDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
    CGPoint exposurePoint = CGPointMake(0.5f, 0.5f);
    [currentDevice setExposurePointOfInterest:exposurePoint];
    [currentDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
}
  • Flash Modes - 閃光模式

There are three flash modes:

  • AVCaptureFlashModeOff: The flash will never fire.
  • AVCaptureFlashModeOn: The flash will always fire.
  • AVCaptureFlashModeAuto: The flash will fire dependent on the ambient light conditions.

You use hasFlash to determine whether a device has a flash. If that method returns YES, you then use the isFlashModeSupported: method, passing the desired mode to determine whether a device supports a given flash mode, then set the mode using the flashMode property.

有3種閃光模式：

  • AVCaptureFlashModeOff：閃光燈永遠(yuǎn)不會閃光。
  • AVCaptureFlashModeOn：閃光燈總是會閃光。
  • AVCaptureFlashModeAuto：閃光燈根據(jù)環(huán)境光照條件決定是否閃光。

使用 hasFlash 來確定設(shè)備是否有閃光燈。如果這個方法返回 YES，然后使用 isFlashModeSupported: 方法，傳入期望的模式來確定設(shè)備是否支持給定的閃光模式，再使用 flashMode 屬性設(shè)置模式。
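把這些步驟組合起來的一個簡單示意（同樣需要按照 Configuring a Device 中的描述先鎖定設(shè)備）：

```objc
AVCaptureDevice *device = <#A capture device#>;
if ([device hasFlash] && [device isFlashModeSupported:AVCaptureFlashModeAuto]) {
    NSError *error = nil;
    // 修改捕獲屬性前先鎖定設(shè)備配置。
    if ([device lockForConfiguration:&error]) {
        device.flashMode = AVCaptureFlashModeAuto;
        [device unlockForConfiguration];
    }
}
```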

  • Torch Mode - 手電筒模式

In torch mode, the flash is continuously enabled at a low power to illuminate a video capture. There are three torch modes:

  • AVCaptureTorchModeOff: The torch is always off.
  • AVCaptureTorchModeOn: The torch is always on.
  • AVCaptureTorchModeAuto: The torch is automatically switched on and off as needed.

You use hasTorch to determine whether a device has a torch. You use the isTorchModeSupported: method to determine whether a device supports a given torch mode, then set the mode using the torchMode property.

For devices with a torch, the torch only turns on if the device is associated with a running capture session.

在手電筒模式下，閃光燈以低功率持續(xù)開啟，以照亮視頻捕獲。有3個手電筒模式：

  • AVCaptureTorchModeOff：手電筒總是關(guān)閉的。
  • AVCaptureTorchModeOn：手電筒總是打開的。
  • AVCaptureTorchModeAuto：手電筒根據(jù)需要自動打開和關(guān)閉。

使用 hasTorch 來確定設(shè)備是否有手電筒。使用 isTorchModeSupported: 方法來確定設(shè)備是否支持給定的手電筒模式，然后使用 torchMode 屬性來設(shè)置模式。

對于有手電筒的設(shè)備，只有當(dāng)該設(shè)備與一個正在運行的捕捉會話關(guān)聯(lián)時，手電筒才會打開。
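打開手電筒的一個簡單示意如下（僅為示意；如上所述，設(shè)備必須關(guān)聯(lián)到一個正在運行的捕捉會話，手電筒才會真正點亮）：

```objc
AVCaptureDevice *device = <#A capture device#>;
if ([device hasTorch] && [device isTorchModeSupported:AVCaptureTorchModeOn]) {
    NSError *error = nil;
    // 修改捕獲屬性前先鎖定設(shè)備配置。
    if ([device lockForConfiguration:&error]) {
        device.torchMode = AVCaptureTorchModeOn;
        [device unlockForConfiguration];
    }
}
```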

  • Video Stabilization - 視頻穩(wěn)定性

Cinematic video stabilization is available for connections that operate on video, depending on the specific device hardware. Even so, not all source formats and video resolutions are supported.

Enabling cinematic video stabilization may also introduce additional latency into the video capture pipeline. To detect when video stabilization is in use, use the videoStabilizationEnabled property. The enablesVideoStabilizationWhenAvailable property allows an application to automatically enable video stabilization if it is supported by the camera. By default automatic stabilization is disabled due to the above limitations.

電影級視頻穩(wěn)定（cinematic video stabilization）可用于處理視頻的連接，這取決于具體的設(shè)備硬件。即便如此，也不是所有的源格式和視頻分辨率都被支持。

啟用電影級視頻穩(wěn)定還可能給視頻捕獲管道引入額外的延遲。要檢測視頻穩(wěn)定何時正在使用，使用 videoStabilizationEnabled 屬性。enablesVideoStabilizationWhenAvailable 屬性允許應(yīng)用程序在攝像頭支持的情況下自動啟用視頻穩(wěn)定。由于上述限制，默認(rèn)情況下自動穩(wěn)定是禁用的。
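在連接上選擇啟用視頻穩(wěn)定的一個示意如下（僅為示意，是否真正生效仍取決于設(shè)備硬件和當(dāng)前格式）：

```objc
AVCaptureConnection *connection = <#A video capture connection#>;
// 先確認(rèn)該連接支持視頻穩(wěn)定，再請求在可用時自動啟用。
if ([connection isVideoStabilizationSupported]) {
    connection.enablesVideoStabilizationWhenAvailable = YES;
}
```

之后可以在錄制期間讀取 connection.videoStabilizationEnabled 來確認(rèn)穩(wěn)定是否實際生效。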

  • White Balance - 白平衡

There are two white balance modes:

  • AVCaptureWhiteBalanceModeLocked: The white balance mode is fixed.
  • AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: The camera continuously adjusts the white balance as needed.

You use the isWhiteBalanceModeSupported: method to determine whether a device supports a given white balance mode, then set the mode using the whiteBalanceMode property.

You can use the adjustingWhiteBalance property to determine whether a device is currently changing its white balance setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its white balance setting.

有兩個白平衡模式：

  • AVCaptureWhiteBalanceModeLocked：白平衡模式是固定的。
  • AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance：照相機(jī)根據(jù)需要連續(xù)調(diào)整白平衡。

使用 isWhiteBalanceModeSupported: 方法來確定設(shè)備是否支持給定的白平衡模式，然后使用 whiteBalanceMode 屬性設(shè)置模式。

可以使用 adjustingWhiteBalance 屬性來確定設(shè)備是否正在改變白平衡設(shè)置。當(dāng)設(shè)備開始或者停止改變它的白平衡設(shè)置時，可以使用 key-value observing 觀察該屬性來接收通知。
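設(shè)置白平衡模式的一個簡單示意（同樣需要先鎖定設(shè)備配置）：

```objc
AVCaptureDevice *device = <#A capture device#>;
if ([device isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance]) {
    NSError *error = nil;
    // 修改捕獲屬性前先鎖定設(shè)備配置。
    if ([device lockForConfiguration:&error]) {
        device.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
        [device unlockForConfiguration];
    }
}
```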

  • Setting Device Orientation - 設(shè)置設(shè)備方向

You set the desired orientation on a AVCaptureConnection to specify how you want the images oriented in the AVCaptureOutput (AVCaptureMovieFileOutput, AVCaptureStillImageOutput and AVCaptureVideoDataOutput) for the connection.

Use the AVCaptureConnection supportsVideoOrientation property to determine whether the device supports changing the orientation of the video, and the videoOrientation property to specify how you want the images oriented in the output port. Listing 4-1 shows how to set the orientation for a AVCaptureConnection to AVCaptureVideoOrientationLandscapeLeft:

AVCaptureConnection 設(shè)置期望的方向踱蠢,來指定你想要的圖像在 AVCaptureOutputAVCaptureMovieFileOutput火欧, AVCaptureStillImageOutput, AVCaptureVideoDataOutput)中的方向棋电,為了連接。

使用 AVCaptureConnectionsupportsVideoOrientation 屬性來確定設(shè)備是否支持改變視頻的方向苇侵,videoOrientation 屬性指定你想要的圖像在輸出端口的方向赶盔。列表4-1顯示了如何設(shè)置方向,為 AVCaptureConnection 設(shè)置 AVCaptureVideoOrientationLandscapeLeft 榆浓。

Listing 4-1 Setting the orientation of a capture connection

AVCaptureConnection *captureConnection = <#A capture connection#>;
if ([captureConnection isVideoOrientationSupported])
{
    AVCaptureVideoOrientation orientation = AVCaptureVideoOrientationLandscapeLeft;
    [captureConnection setVideoOrientation:orientation];
}
  • Configuring a Device - 配置設(shè)備

To set capture properties on a device, you must first acquire a lock on the device using lockForConfiguration:. This avoids making changes that may be incompatible with settings in other applications. The following code fragment illustrates how to approach changing the focus mode on a device by first determining whether the mode is supported, then attempting to lock the device for reconfiguration. The focus mode is changed only if the lock is obtained, and the lock is released immediately afterward.

要在設(shè)備上設(shè)置捕獲屬性，必須先使用 lockForConfiguration: 獲得設(shè)備鎖。這樣可以避免做出與其他應(yīng)用程序中的設(shè)置不兼容的更改。下面的代碼段演示了如何改變設(shè)備上的聚焦模式：首先確定該模式是否被支持，然后嘗試鎖定設(shè)備以重新配置。只有獲取到鎖，聚焦模式才會被改變，并且隨后立即釋放鎖。

if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.focusMode = AVCaptureFocusModeLocked;
        [device unlockForConfiguration];
    }
    else {
        // Respond to the failure as appropriate.
    }
}

You should hold the device lock only if you need the settable device properties to remain unchanged. Holding the device lock unnecessarily may degrade capture quality in other applications sharing the device.

只有在需要可設(shè)置的設(shè)備屬性保持不變時，才應(yīng)該持有設(shè)備鎖。不必要地持有設(shè)備鎖，可能會降低共享該設(shè)備的其他應(yīng)用程序的捕獲質(zhì)量。

  • Switching Between Devices - 切換裝置

Sometimes you may want to allow users to switch between input devices—for example, switching from using the front-facing to the back-facing camera. To avoid pauses or stuttering, you can reconfigure a session while it is running, however you should use beginConfiguration and commitConfiguration to bracket your configuration changes:

有時，你可能想允許用戶在輸入設(shè)備之間進(jìn)行切換，比如從前置攝像頭切換到后置攝像頭。為了避免暫停或者卡頓，可以在會話運行時重新配置它，但是你應(yīng)該使用 beginConfiguration 和 commitConfiguration 包圍你的配置更改：

AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
 
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
 
[session commitConfiguration];

When the outermost commitConfiguration is invoked, all the changes are made together. This ensures a smooth transition.

當(dāng)最外層的 commitConfiguration 被調(diào)用時，所有的改變都會被一起執(zhí)行。這保證了平穩(wěn)過渡。
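上面代碼中的 frontFacingCameraDeviceInput 和 backFacingCameraDeviceInput 可以像下面這樣獲得。這只是一個示意，省略了對 error 的處理：

```objc
AVCaptureDeviceInput *frontFacingCameraDeviceInput = nil;
AVCaptureDeviceInput *backFacingCameraDeviceInput = nil;

// 遍歷視頻設(shè)備，按位置分別創(chuàng)建對應(yīng)的捕獲設(shè)備輸入。
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    NSError *error = nil;
    if ([device position] == AVCaptureDevicePositionFront) {
        frontFacingCameraDeviceInput =
            [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    }
    else if ([device position] == AVCaptureDevicePositionBack) {
        backFacingCameraDeviceInput =
            [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    }
}
```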

Use Capture Inputs to Add a Capture Device to a Session - 使用捕獲輸入將捕獲設(shè)備添加到會話中

To add a capture device to a capture session, you use an instance of AVCaptureDeviceInput (a concrete subclass of the abstract AVCaptureInput class). The capture device input manages the device’s ports.

要添加一個捕獲設(shè)備到捕獲會話中，使用 AVCaptureDeviceInput（抽象類 AVCaptureInput 的具體子類）的實例。捕獲設(shè)備輸入管理設(shè)備的端口。

NSError *error;
AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}

You add inputs to a session using addInput:. If appropriate, you can check whether a capture input is compatible with an existing session using canAddInput:.

使用 addInput: 給會話添加一個輸入。如果合適的話，可以使用 canAddInput: 檢查一個捕獲輸入是否與現(xiàn)有會話兼容。

AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
if ([captureSession canAddInput:captureDeviceInput]) {
    [captureSession addInput:captureDeviceInput];
}
else {
    // Handle the failure.
}

See Configuring a Session for more details on how you might reconfigure a running session.

An AVCaptureInput vends one or more streams of media data. For example, input devices can provide both audio and video data. Each media stream provided by an input is represented by an AVCaptureInputPort object. A capture session uses an AVCaptureConnection object to define the mapping between a set of AVCaptureInputPort objects and a single AVCaptureOutput.

有關(guān)如何重新配置一個正在運行的會話，更多細(xì)節(jié)請查看 Configuring a Session。

AVCaptureInput 提供一個或者多個媒體數(shù)據(jù)流。例如，輸入設(shè)備可以同時提供音頻和視頻數(shù)據(jù)。輸入提供的每個媒體流都由一個 AVCaptureInputPort 對象表示。捕獲會話使用 AVCaptureConnection 對象來定義一組 AVCaptureInputPort 對象和單個 AVCaptureOutput 之間的映射。
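輸入和輸出都添加到會話之后，可以用 AVCaptureOutput 的 connectionWithMediaType: 方法取回某種媒體類型對應(yīng)的連接，例如（僅為示意）：

```objc
AVCaptureMovieFileOutput *movieOutput = <#A movie file output added to the session#>;

// 取回該輸出的視頻連接；如果沒有匹配的連接則返回 nil。
AVCaptureConnection *videoConnection =
    [movieOutput connectionWithMediaType:AVMediaTypeVideo];
if (videoConnection) {
    // 可以在這里啟用/禁用數(shù)據(jù)流，或配置方向等連接屬性。
}
```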

Use Capture Outputs to Get Output from a Session - 使用捕獲輸出從會話得到輸出

To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput. You use:

  • AVCaptureMovieFileOutput to output to a movie file
  • AVCaptureVideoDataOutput if you want to process frames from the video being captured, for example, - to create your own custom view layer
  • AVCaptureAudioDataOutput if you want to process the audio data being captured
  • AVCaptureStillImageOutput if you want to capture still images with accompanying metadata

You add outputs to a capture session using addOutput:. You check whether a capture output is compatible with an existing session using canAddOutput:. You can add and remove outputs as required while the session is running.

要從捕獲會話得到輸出，可以添加一個或多個輸出。輸出是 AVCaptureOutput 具體子類的實例。分別使用：

  • AVCaptureMovieFileOutput，輸出到一個電影文件
  • AVCaptureVideoDataOutput，如果想處理正在捕獲的視頻的幀，例如創(chuàng)建自定義的視圖層
  • AVCaptureAudioDataOutput，如果想處理正在捕獲的音頻數(shù)據(jù)
  • AVCaptureStillImageOutput，如果想捕獲帶有元數(shù)據(jù)的靜態(tài)圖像

使用 addOutput: 把輸出添加到捕獲會話中。使用 canAddOutput: 檢查一個捕獲輸出是否與現(xiàn)有的會話兼容。可以在會話運行時按需添加和刪除輸出。

AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}
  • Saving to a Movie File - 保存電影文件

You save movie data to a file using an AVCaptureMovieFileOutput object. (AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput, which defines much of the basic behavior.) You can configure various aspects of the movie file output, such as the maximum duration of a recording, or its maximum file size. You can also prohibit recording if there is less than a given amount of disk space left.

使用 AVCaptureMovieFileOutput 對象保存電影數(shù)據(jù)到文件中。（AVCaptureMovieFileOutput 是 AVCaptureFileOutput 的具體子類，后者定義了大量的基本行為。）可以配置電影文件輸出的各個方面，如記錄的最大時長，或它的最大文件大小。如果剩余磁盤空間小于給定數(shù)量，還可以禁止記錄。

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];

The resolution and bit rate for the output depend on the capture session’s sessionPreset. The video encoding is typically H.264 and audio encoding is typically AAC. The actual values vary by device.

輸出的分辨率和比特率取決于捕獲會話的 sessionPreset。視頻編碼通常是 H.264，音頻編碼通常是 AAC。實際值因設(shè)備而異。
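例如，前面提到的最大時長、最大文件大小和最小剩余磁盤空間限制可以這樣配置（數(shù)值僅為示例）：

```objc
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;

// 最長錄制 60 秒（timescale 取 600）。
aMovieFileOutput.maxRecordedDuration = CMTimeMakeWithSeconds(60.0, 600);
// 文件最大 50 MB。
aMovieFileOutput.maxRecordedFileSize = 50 * 1024 * 1024;
// 剩余磁盤空間少于 100 MB 時停止錄制。
aMovieFileOutput.minFreeDiskSpaceLimit = 100 * 1024 * 1024;
```

到達(dá)任一限制時，錄制會停止，并在代理回調(diào)的 error 中報告原因（如 AVErrorMaximumDurationReached）。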

  • Starting a Recording - 開始記錄

You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.

使用 startRecordingToOutputFileURL:recordingDelegate: 開始記錄一個 QuickTime 電影。需要提供一個基于文件的 URL 和一個 delegate。URL 決不能指向一個已經(jīng)存在的文件，因為電影文件輸出不會覆蓋已存在的資源。你還必須有權(quán)限寫入指定的位置。delegate 必須符合 AVCaptureFileOutputRecordingDelegate 協(xié)議，并且必須實現(xiàn) captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: 方法。

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];

In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the Camera Roll album. It should also check for any errors that might have occurred.

在 captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: 的實現(xiàn)中,代理可以將得到的電影寫入相機(jī)膠卷。它還應(yīng)該檢查可能發(fā)生的任何錯誤。

  • Ensuring That the File Was Written Successfully - 確保文件被成功寫入

To determine whether the file was saved successfully, in the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: you check not only the error but also the value of the AVErrorRecordingSuccessfullyFinishedKey in the error’s user info dictionary:

為了確定文件是否被成功寫入,在 captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: 的實現(xiàn)中,不僅要檢查錯誤,還要檢查錯誤的用戶信息字典中 AVErrorRecordingSuccessfullyFinishedKey 的值:

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
        didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
        fromConnections:(NSArray *)connections
        error:(NSError *)error {
 
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    // Continue as appropriate...
}

You should check the value of the AVErrorRecordingSuccessfullyFinishedKey key in the user info dictionary of the error, because the file might have been saved successfully, even though you got an error. The error might indicate that one of your recording constraints was reached—for example, AVErrorMaximumDurationReached or AVErrorMaximumFileSizeReached. Other reasons the recording might stop are:

The disk is full—AVErrorDiskFull
The recording device was disconnected—AVErrorDeviceWasDisconnected
The session was interrupted (for example, a phone call was received)—AVErrorSessionWasInterrupted

應(yīng)該在錯誤的用戶信息字典中檢查 AVErrorRecordingSuccessfullyFinishedKey 的值,因為即使得到了一個錯誤,文件也可能已經(jīng)被成功保存了。這種錯誤可能表明達(dá)到了某個記錄約束,例如 AVErrorMaximumDurationReached 或者 AVErrorMaximumFileSizeReached。記錄可能停止的其他原因是:

  • 磁盤已滿:AVErrorDiskFull
  • 錄制設(shè)備被斷開:AVErrorDeviceWasDisconnected
  • 會話被中斷(例如,接到了一個電話):AVErrorSessionWasInterrupted

  • Adding Metadata to a File - 將元數(shù)據(jù)添加到文件中

You can set metadata for the movie file at any time, even while recording. This is useful for situations where the information is not available when the recording starts, as may be the case with location information. Metadata for a file output is represented by an array of AVMetadataItem objects; you use an instance of its mutable subclass, AVMutableMetadataItem, to create metadata of your own.

可以在任何時間為電影文件設(shè)置元數(shù)據(jù),即使在錄制過程中。這在錄制開始時信息尚不可用的情況下很有用,位置信息就可能是這種情況。文件輸出的元數(shù)據(jù)由一個 AVMetadataItem 對象數(shù)組表示;使用其可變子類 AVMutableMetadataItem 的實例,來創(chuàng)建你自己的元數(shù)據(jù)。

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSArray *existingMetadataArray = aMovieFileOutput.metadata;
NSMutableArray *newMetadataArray = nil;
if (existingMetadataArray) {
    newMetadataArray = [existingMetadataArray mutableCopy];
}
else {
    newMetadataArray = [[NSMutableArray alloc] init];
}
 
AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
item.keySpace = AVMetadataKeySpaceCommon;
item.key = AVMetadataCommonKeyLocation;
 
CLLocation *location = <#The location to set#>;
item.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf/",
    location.coordinate.latitude, location.coordinate.longitude];
 
[newMetadataArray addObject:item];
 
aMovieFileOutput.metadata = newMetadataArray;
  • Processing Frames of Video - 處理視頻的幀

An AVCaptureVideoDataOutput object uses delegation to vend video frames. You set the delegate using setSampleBufferDelegate:queue:. In addition to setting the delegate, you specify a serial queue on which the delegate methods are invoked. You must use a serial queue to ensure that frames are delivered to the delegate in the proper order. You can use the queue to modify the priority given to delivering and processing the video frames. See SquareCam for a sample implementation.

一個 AVCaptureVideoDataOutput 對象使用委托來分發(fā)視頻幀。使用 setSampleBufferDelegate:queue: 設(shè)置代理。除了設(shè)置代理,還要指定一個調(diào)用代理方法的串行隊列。必須使用串行隊列,以確保幀以正確的順序傳遞給代理。可以使用隊列來修改傳遞和處理視頻幀的優(yōu)先級。示例實現(xiàn)參見 SquareCam。

The frames are presented in the delegate method, captureOutput:didOutputSampleBuffer:fromConnection:, as instances of the CMSampleBufferRef opaque type (see Representations of Media). By default, the buffers are emitted in the camera’s most efficient format. You can use the videoSettings property to specify a custom output format. The video settings property is a dictionary; currently, the only supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel formats are returned by the availableVideoCVPixelFormatTypes property, and the availableVideoCodecTypes property returns the supported values. Both Core Graphics and OpenGL work well with the BGRA format:

幀在代理方法 captureOutput:didOutputSampleBuffer:fromConnection: 中以 CMSampleBufferRef 不透明類型的實例呈現(xiàn)(詳情見 Representations of Media)。默認(rèn)情況下,緩沖區(qū)以相機(jī)最高效的格式發(fā)出。可以使用 videoSettings 屬性指定自定義輸出格式。視頻設(shè)置屬性是一個字典;目前,唯一支持的 key 是 kCVPixelBufferPixelFormatTypeKey。推薦的像素格式由 availableVideoCVPixelFormatTypes 屬性返回,availableVideoCodecTypes 屬性返回支持的值。Core Graphics 和 OpenGL 都能很好地處理 BGRA 格式:

AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
NSDictionary *newSettings =
                @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
videoDataOutput.videoSettings = newSettings;
 
 // discard if the data output queue is blocked (as we process the still image)
//如果數(shù)據(jù)輸出隊列被阻塞(當(dāng)我們處理靜態(tài)映像時),則丟棄它
[videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
 
// create a serial dispatch queue used for the sample buffer delegate as well as when a still image is captured
//創(chuàng)建一個用于 樣本緩沖區(qū)委托以及捕獲靜態(tài)圖像時的串行調(diào)度隊列
// a serial dispatch queue must be used to guarantee that video frames will be delivered in order
//必須使用串行調(diào)度隊列來保證視頻幀將按順序傳送
// see the header doc for setSampleBufferDelegate:queue: for more information
//有關(guān)更多信息歌馍,請參見setSampleBufferDelegate:queue的頭文檔
videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
 
AVCaptureSession *captureSession = <#The Capture Session#>;
 
if ( [captureSession canAddOutput:videoDataOutput] )
     [captureSession addOutput:videoDataOutput];
  • Performance Considerations for Processing Video - 處理視頻的性能考慮

You should set the session output to the lowest practical resolution for your application. Setting the output to a higher resolution than necessary wastes processing cycles and needlessly consumes power.

應(yīng)該將會話輸出設(shè)置為對你的應(yīng)用程序切實可行的最低分辨率。把輸出設(shè)置為超出需要的更高分辨率,會浪費處理周期,并不必要地消耗電量。

You must ensure that your implementation of captureOutput:didOutputSampleBuffer:fromConnection: is able to process a sample buffer within the amount of time allotted to a frame. If it takes too long and you hold onto the video frames, AV Foundation stops delivering frames, not only to your delegate but also to other outputs such as a preview layer.

必須確保 captureOutput:didOutputSampleBuffer:fromConnection: 的實現(xiàn)能夠在分配給一幀的時間內(nèi)處理完樣本緩沖。如果耗時太長并且一直持有視頻幀,AV Foundation 會停止傳遞幀,不僅對你的代理如此,對其他輸出(例如 preview layer)也是如此。

You can use the capture video data output’s minFrameDuration property to be sure you have enough time to process a frame — at the cost of having a lower frame rate than would otherwise be the case. You might also make sure that the alwaysDiscardsLateVideoFrames property is set to YES (the default). This ensures that any late video frames are dropped rather than handed to you for processing. Alternatively, if you are recording and it doesn’t matter if the output frames are a little late and you would prefer to get all of them, you can set the property value to NO. This does not mean that frames will not be dropped (that is, frames may still be dropped), but that they may not be dropped as early, or as efficiently.

可以使用捕獲視頻數(shù)據(jù)輸出的 minFrameDuration 屬性,確保有足夠的時間處理一幀,代價是幀速率比原本要低。也可以確保 alwaysDiscardsLateVideoFrames 屬性被設(shè)為 YES(默認(rèn)值)。這確保任何遲到的視頻幀都被丟棄,而不是交給你處理。或者,如果你正在錄制,不介意輸出幀稍微晚一點,并且希望得到所有的幀,可以將該屬性設(shè)為 NO。這并不意味著幀不會被丟棄(幀仍有可能被丟棄),但它們不會那么早、或者說那么高效地被丟棄。
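The two settings described above can be sketched as follows; this is a minimal example assuming a video data output obtained earlier(上面描述的兩個設(shè)置可以這樣示意;這是一個簡單示例,假設(shè)已經(jīng)獲得了視頻數(shù)據(jù)輸出):

```objective-c
AVCaptureVideoDataOutput *videoDataOutput = <#Get the video data output#>;

// Drop late frames rather than queueing them for the delegate (the default).
// 丟棄遲到的幀,而不是把它們排隊交給代理(默認(rèn)行為)。
videoDataOutput.alwaysDiscardsLateVideoFrames = YES;

// Cap delivery at 15 fps so the delegate has roughly 66 ms per frame.
// 將傳遞限制為 15 fps,這樣代理每幀大約有 66 毫秒的處理時間。
videoDataOutput.minFrameDuration = CMTimeMake(1, 15);
```

Note that on later iOS versions frame-rate capping moved to the device’s activeVideoMinFrameDuration; this sketch follows the API used in this document.(注意,在較新的 iOS 版本中,幀速率限制移到了設(shè)備的 activeVideoMinFrameDuration 上;這里的示意遵循本文使用的 API。)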

  • Capturing Still Images - 捕獲靜止圖像

You use an AVCaptureStillImageOutput output if you want to capture still images with accompanying metadata. The resolution of the image depends on the preset for the session, as well as the device.

如果想捕獲帶有元數(shù)據(jù)的靜止圖像,可以使用 AVCaptureStillImageOutput 輸出。圖像的分辨率取決于會話的預(yù)設(shè),以及設(shè)備本身。

  • Pixel and Encoding Formats - 像素和編碼格式

Different devices support different image formats. You can find out what pixel and codec types are supported by a device using availableImageDataCVPixelFormatTypes and availableImageDataCodecTypes respectively. Each method returns an array of the supported values for the specific device. You set the outputSettings dictionary to specify the image format you want, for example:

不同的設(shè)備支持不同的圖像格式。使用 availableImageDataCVPixelFormatTypes 可以查詢設(shè)備支持的像素類型,使用 availableImageDataCodecTypes 可以查詢支持的編解碼器類型。每個方法都返回特定設(shè)備所支持的值的數(shù)組。設(shè)置 outputSettings 字典來指定你想要的圖像格式,例如:

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG};
[stillImageOutput setOutputSettings:outputSettings];

If you want to capture a JPEG image, you should typically not specify your own compression format. Instead, you should let the still image output do the compression for you, since its compression is hardware-accelerated. If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation: to get an NSData object without recompressing the data, even if you modify the image’s metadata.

如果你想捕獲 JPEG 圖像,通常不應(yīng)該指定自己的壓縮格式。相反,應(yīng)該讓靜態(tài)圖像輸出為你做壓縮,因為它的壓縮是硬件加速的。如果需要圖像的數(shù)據(jù)表示,可以使用 jpegStillImageNSDataRepresentation: 在不重新壓縮數(shù)據(jù)的情況下得到 NSData 對象,即使你修改了圖像的元數(shù)據(jù)。
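A minimal sketch of getting JPEG data in the capture completion handler; imageSampleBuffer is assumed to be the buffer delivered to the handler(在捕獲完成處理程序中獲取 JPEG 數(shù)據(jù)的一個簡單示意;假設(shè) imageSampleBuffer 是傳遞給處理程序的緩沖):

```objective-c
// Get JPEG data without recompressing; the metadata attachments are preserved.
// 在不重新壓縮的情況下獲取 JPEG 數(shù)據(jù);元數(shù)據(jù)附件會被保留。
NSData *jpegData =
    [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:jpegData];
```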

  • Capturing an Image - 捕獲圖像

When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:

當(dāng)你想捕獲圖像時,給輸出發(fā)送一個 captureStillImageAsynchronouslyFromConnection:completionHandler: 消息。第一個參數(shù)是你想用于捕獲的連接。你需要尋找輸入端口正在采集視頻的那個連接:

AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) { break; }
}

The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer opaque type containing the image data, and an error. The sample buffer itself may contain metadata, such as an EXIF dictionary, as an attachment. You can modify the attachments if you want, but note the optimization for JPEG images discussed in Pixel and Encoding Formats.

captureStillImageAsynchronouslyFromConnection:completionHandler: 的第二個參數(shù)是一個帶有兩個參數(shù)的 block:一個包含圖像數(shù)據(jù)的 CMSampleBuffer 不透明類型,以及一個 error。樣本緩沖自身可能包含元數(shù)據(jù)作為附件,例如一個 EXIF 字典。如果你想的話,可以修改這些附件,但是要注意 Pixel and Encoding Formats 中討論的針對 JPEG 圖像的優(yōu)化。

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
    ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments =
            CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            // Do something with the attachments.對附件做些什么。
        }
        // Continue as appropriate.適當(dāng)?shù)乩^續(xù)担孔。
    }];

Showing the User What’s Being Recorded - 顯示用戶正在被記錄什么

You can provide the user with a preview of what’s being recorded by the camera (using a preview layer) or by the microphone (by monitoring the audio channel).

可以為用戶提供正在被相機(jī)(使用 preview layer)或麥克風(fēng)(通過監(jiān)控音頻信道)記錄的內(nèi)容的預(yù)覽。

  • Video Preview - 視頻預(yù)覽

You can provide the user with a preview of what’s being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see Core Animation Programming Guide). You don’t need any outputs to show the preview.

使用 AVCaptureVideoPreviewLayer 對象可以給用戶提供正在錄制內(nèi)容的預(yù)覽。AVCaptureVideoPreviewLayer 是 CALayer 的子類(詳情見 Core Animation Programming Guide)。不需要任何輸出就可以顯示預(yù)覽。

Using the AVCaptureVideoDataOutput class provides the client application with the ability to access the video pixels before they are presented to the user.

使用 AVCaptureVideoDataOutput 類,可以讓客戶端應(yīng)用程序在視頻像素呈現(xiàn)給用戶之前訪問它們。

Unlike a capture output, a video preview layer maintains a strong reference to the session with which it is associated. This is to ensure that the session is not deallocated while the layer is attempting to display video. This is reflected in the way you initialize a preview layer:

與捕獲輸出不同,視頻預(yù)覽層對與它關(guān)聯(lián)的會話保持強引用。這是為了確保在層嘗試顯示視頻時,會話不會被釋放。這也反映在初始化預(yù)覽層的方式上:

AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
 
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];

In general, the preview layer behaves like any other CALayer object in the render tree (see Core Animation Programming Guide). You can scale the image and perform transformations, rotations, and so on just as you would any layer. One difference is that you may need to set the layer’s orientation property to specify how it should rotate images coming from the camera. In addition, you can test for device support for video mirroring by querying the supportsVideoMirroring property. You can set the videoMirrored property as required, although when the automaticallyAdjustsVideoMirroring property is set to YES (the default), the mirroring value is automatically set based on the configuration of the session.

一般情況下,預(yù)覽層的行為和渲染樹中任何其他 CALayer 對象一樣(見 Core Animation Programming Guide)。可以像對任何層一樣縮放圖像、執(zhí)行變換和旋轉(zhuǎn)等。一個不同點是,你可能需要設(shè)置層的 orientation 屬性,來指定它應(yīng)該如何旋轉(zhuǎn)來自相機(jī)的圖像。此外,可以通過查詢 supportsVideoMirroring 屬性來測試設(shè)備是否支持視頻鏡像。可以根據(jù)需要設(shè)置 videoMirrored 屬性,不過當(dāng) automaticallyAdjustsVideoMirroring 屬性被設(shè)置為 YES(默認(rèn)值)時,鏡像值會基于會話的配置自動設(shè)置。
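The orientation and mirroring settings above can be sketched like this; this sketch uses the connection-based API available on iOS 6 and later, and the portrait orientation is an illustrative assumption(上述方向和鏡像設(shè)置可以這樣示意;這個示意使用 iOS 6 及以后可用的基于連接的 API,豎屏方向只是示例假設(shè)):

```objective-c
AVCaptureVideoPreviewLayer *previewLayer = <#Get the preview layer#>;
AVCaptureConnection *previewConnection = previewLayer.connection;

// Rotate the preview to match a portrait interface.
// 旋轉(zhuǎn)預(yù)覽以匹配豎屏界面。
if ([previewConnection isVideoOrientationSupported]) {
    previewConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
}

// Mirror the preview manually only after disabling the automatic adjustment.
// 只有在關(guān)閉自動調(diào)整之后,才手動設(shè)置預(yù)覽鏡像。
if ([previewConnection isVideoMirroringSupported]) {
    previewConnection.automaticallyAdjustsVideoMirroring = NO;
    previewConnection.videoMirrored = YES;
}
```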

  • Video Gravity Modes - 視屏重力模式

The preview layer supports three gravity modes that you set using videoGravity:

  • AVLayerVideoGravityResizeAspect: This preserves the aspect ratio, leaving black bars where the video does not fill the available screen area.
  • AVLayerVideoGravityResizeAspectFill: This preserves the aspect ratio, but fills the available screen area, cropping the video when necessary.
  • AVLayerVideoGravityResize: This simply stretches the video to fill the available screen area, even if doing so distorts the image.

預(yù)覽層支持三種重力模式,使用 videoGravity 設(shè)置:

  • AVLayerVideoGravityResizeAspect:保持寬高比,在視頻未填滿的可用屏幕區(qū)域留下黑邊。
  • AVLayerVideoGravityResizeAspectFill:保持寬高比,但填滿可用屏幕區(qū)域,必要時裁剪視頻。
  • AVLayerVideoGravityResize:簡單地拉伸視頻以填滿可用屏幕區(qū)域,即使這樣做會使圖像變形。
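For example, to fill the layer while preserving the aspect ratio, a minimal sketch(例如,要在保持寬高比的同時填滿層,一個簡單示意):

```objective-c
AVCaptureVideoPreviewLayer *previewLayer = <#Get the preview layer#>;
// Fill the layer's bounds, cropping the video edges if the aspect ratios differ.
// 填滿層的邊界;如果寬高比不同,則裁剪視頻邊緣。
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
```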

  • Using “Tap to Focus” with a Preview - 使用“點擊焦點”預(yù)覽

You need to take care when implementing tap-to-focus in conjunction with a preview layer. You must account for the preview orientation and gravity of the layer, and for the possibility that the preview may be mirrored. See the sample code project AVCam-iOS: Using AVFoundation to Capture Images and Movies for an implementation of this functionality.

結(jié)合預(yù)覽層實現(xiàn)點擊對焦時需要小心。必須考慮該層的預(yù)覽方向和重力模式,以及預(yù)覽可能是鏡像顯示的可能性。有關(guān)此功能的實現(xiàn),請看示例代碼項目 AVCam-iOS: Using AVFoundation to Capture Images and Movies。

  • Showing Audio Levels - 顯示音頻等級

To monitor the average and peak power levels in an audio channel in a capture connection, you use an AVCaptureAudioChannel object. Audio levels are not key-value observable, so you must poll for updated levels as often as you want to update your user interface (for example, 10 times a second).

要在捕獲連接中監(jiān)測音頻信道的平均功率和峰值功率水平,可以使用 AVCaptureAudioChannel 對象。音頻等級不支持 key-value 觀察,所以必須按照你想更新用戶界面的頻率(例如每秒 10 次)輪詢最新的等級。

AVCaptureAudioDataOutput *audioDataOutput = <#Get the audio data output#>;
NSArray *connections = audioDataOutput.connections;
if ([connections count] > 0) {
    // There should be only one connection to an AVCaptureAudioDataOutput.
   //對于AVCaptureAudioDataOutput應(yīng)該只有一個連接。
    AVCaptureConnection *connection = [connections objectAtIndex:0];
 
    NSArray *audioChannels = connection.audioChannels;
 
    for (AVCaptureAudioChannel *channel in audioChannels) {
        float avg = channel.averagePowerLevel;
        float peak = channel.peakHoldLevel;
        // Update the level meter user interface.
       //更新水平等級用戶界面

    }
}

Putting It All Together: Capturing Video Frames as UIImage Objects - 總而言之:捕獲視頻幀用作 UIImage 對象

This brief code example illustrates how you can capture video and convert the frames you get to UIImage objects. It shows you how to:

  • Create an AVCaptureSession object to coordinate the flow of data from an AV input device to an output
  • Find the AVCaptureDevice object for the input type you want
  • Create an AVCaptureDeviceInput object for the device
  • Create an AVCaptureVideoDataOutput object to produce video frames
  • Implement a delegate for the AVCaptureVideoDataOutput object to process video frames
  • Implement a function to convert the CMSampleBuffer received by the delegate into a UIImage object

這個簡短的代碼示例演示了如何捕捉視頻,并將得到的幀轉(zhuǎn)化為 UIImage 對象。它展示了如何:

  • 創(chuàng)建一個 AVCaptureSession 對象去協(xié)調(diào)從 AV 輸入設(shè)備到輸出的數(shù)據(jù)流。
  • 找到你想要的輸入類型的 AVCaptureDevice 對象。
  • 為設(shè)備創(chuàng)建一個 AVCaptureDeviceInput 對象。
  • 創(chuàng)建一個 AVCaptureVideoDataOutput 對象去生成視頻幀。
  • 為 AVCaptureVideoDataOutput 對象實現(xiàn)代理去處理視頻幀。
  • 實現(xiàn)一個函數(shù),將代理收到的 CMSampleBuffer 轉(zhuǎn)換為一個 UIImage 對象。

Note: To focus on the most relevant code, this example omits several aspects of a complete application, including memory management. To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

注意:為了聚焦于最相關(guān)的代碼,這個例子省略了完整程序的幾個方面,包括內(nèi)存管理。要使用 AV Foundation,你應(yīng)該有足夠的 Cocoa 經(jīng)驗,能夠推斷出缺失的部分。

  • Create and Configure a Capture Session - 創(chuàng)建和配置捕獲會話

You use an AVCaptureSession object to coordinate the flow of data from an AV input device to an output. Create a session, and configure it to produce medium-resolution video frames.

使用 AVCaptureSession 對象協(xié)調(diào)從 AV 輸入設(shè)備到輸出的數(shù)據(jù)流。創(chuàng)建一個會話,并將其配置為產(chǎn)生中等分辨率的視頻幀。

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
  • Create and Configure the Device and Device Input - 創(chuàng)建和配置設(shè)備及設(shè)備輸入

Capture devices are represented by AVCaptureDevice objects; the class provides methods to retrieve an object for the input type you want. A device has one or more ports, configured using an AVCaptureInput object. Typically, you use the capture input in its default configuration.

Find a video capture device, then create a device input with the device and add it to the session. If an appropriate device can not be located, then the deviceInputWithDevice:error: method will return an error by reference.

捕獲設(shè)備由 AVCaptureDevice 對象表示;該類提供了獲取你想要的輸入類型對象的方法。一個設(shè)備具有一個或多個端口,使用 AVCaptureInput 對象配置。通常情況下,以默認(rèn)配置使用捕獲輸入。

找到一個視頻捕獲設(shè)備,然后用該設(shè)備創(chuàng)建一個設(shè)備輸入,并將其添加到會話中。如果找不到合適的設(shè)備,deviceInputWithDevice:error: 方法將通過引用返回一個錯誤。

AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
 
NSError *error = nil;
AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
[session addInput:input];
  • Create and Configure the Video Data Output - 創(chuàng)建和配置視頻數(shù)據(jù)輸出

You use an AVCaptureVideoDataOutput object to process uncompressed frames from the video being captured. You typically configure several aspects of an output. For video, for example, you can specify the pixel format using the videoSettings property and cap the frame rate by setting the minFrameDuration property.

Create and configure an output for video data and add it to the session; cap the frame rate to 15 fps by setting the minFrameDuration property to 1/15 second:

使用 AVCaptureVideoDataOutput 對象處理捕獲的視頻中未壓縮的幀。通常要配置輸出的幾個方面。例如對于視頻,可以使用 videoSettings 屬性指定像素格式,通過設(shè)置 minFrameDuration 屬性限制幀速率。

為視頻數(shù)據(jù)創(chuàng)建并配置輸出,并將其添加到會話中;通過將 minFrameDuration 屬性設(shè)置為 1/15 秒,將幀速率限制為 15 fps:

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings =
                @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
output.minFrameDuration = CMTimeMake(1, 15);

The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output’s delegate, you must also provide a queue on which callbacks should be invoked.

數(shù)據(jù)輸出對象使用委托來分發(fā)視頻幀。代理必須采用 AVCaptureVideoDataOutputSampleBufferDelegate 協(xié)議。當(dāng)你設(shè)置數(shù)據(jù)輸出的代理時,還必須提供一個用于調(diào)用回調(diào)的隊列。

dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

You use the queue to modify the priority given to delivering and processing the video frames.

使用隊列去修改給定傳輸和處理視頻幀的優(yōu)先級。

  • Implement the Sample Buffer Delegate Method - 實現(xiàn)示例緩沖代理方法

In the delegate class, implement the method (captureOutput:didOutputSampleBuffer:fromConnection:) that is called when a sample buffer is written. The video data output object delivers frames as CMSampleBuffer opaque types, so you need to convert from the CMSampleBuffer opaque type to a UIImage object. The function for this operation is shown in Converting CMSampleBuffer to a UIImage Object.

在代理類中,實現(xiàn)當(dāng)樣本緩沖寫入時被調(diào)用的方法(captureOutput:didOutputSampleBuffer:fromConnection:)。視頻數(shù)據(jù)輸出對象以 CMSampleBuffer 不透明類型傳遞幀,所以你需要把 CMSampleBuffer 不透明類型轉(zhuǎn)換為 UIImage 對象。這個轉(zhuǎn)換函數(shù)在 Converting CMSampleBuffer to a UIImage Object 中展示。

- (void)captureOutput:(AVCaptureOutput *)captureOutput
         didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
         fromConnection:(AVCaptureConnection *)connection {
 
    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // Add your code here that uses the image. 在這里添加使用圖片的代碼。
} 

Remember that the delegate method is invoked on the queue you specified in setSampleBufferDelegate:queue:; if you want to update the user interface, you must invoke any relevant code on the main thread.

記住,代理方法是在你于 setSampleBufferDelegate:queue: 中指定的隊列上調(diào)用的;如果想要更新用戶界面,必須在主線程上調(diào)用任何相關(guān)代碼。
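The conversion function referenced above is not shown in this document; the following is a hedged sketch of one common implementation, assuming the BGRA output format configured earlier(上面引用的轉(zhuǎn)換函數(shù)在本文中沒有給出;下面是一個常見實現(xiàn)的簡單示意,假設(shè)前面配置了 BGRA 輸出格式):

```objective-c
// Convert a BGRA sample buffer into a UIImage via a bitmap context.
// 通過位圖上下文把 BGRA 樣本緩沖轉(zhuǎn)換為 UIImage。
UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}
```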

  • Starting and Stopping Recording - 啟動和停止錄制

After configuring the capture session, you should ensure that the camera has permission to record according to the user’s preferences.

在配置捕獲會話之后,應(yīng)該根據(jù)用戶的偏好設(shè)置,確保相機(jī)具有錄制的權(quán)限。

NSString *mediaType = AVMediaTypeVideo;
 
[AVCaptureDevice requestAccessForMediaType:mediaType completionHandler:^(BOOL granted) {
    if (granted)
    {
        //Granted access to mediaType
       //授予對mediaType的訪問權(quán)限
        [self setDeviceAuthorized:YES];
    }
    else
    {
        //Not granted access to mediaType
       //不授予對mediaType的訪問權(quán)限
        dispatch_async(dispatch_get_main_queue(), ^{
        [[[UIAlertView alloc] initWithTitle:@"AVCam!"
                                    message:@"AVCam doesn't have permission to use Camera, please change privacy settings"
                                   delegate:self
                          cancelButtonTitle:@"OK"
                          otherButtonTitles:nil] show];
                [self setDeviceAuthorized:NO];
        });
    }
}];

If the camera session is configured and the user has approved access to the camera (and if required, the microphone), send a startRunning message to start the recording.

如果相機(jī)會話已配置完成,并且用戶已批準(zhǔn)訪問攝像頭(以及麥克風(fēng),如果需要的話),就發(fā)送 startRunning 消息開始錄制。

Important: The startRunning method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn’t blocked (which keeps the UI responsive). See AVCam-iOS: Using AVFoundation to Capture Images and Movies for the canonical implementation example.

重點:startRunning 方法是一個阻塞調(diào)用,可能需要一些時間,因此應(yīng)該在串行隊列上執(zhí)行會話建立,使主隊列不被阻塞(保持 UI 響應(yīng))。典型實現(xiàn)的例子見 AVCam-iOS: Using AVFoundation to Capture Images and Movies。

[session startRunning];

To stop recording, you send the session a stopRunning message.

要停止錄制,給會話發(fā)送一個 stopRunning 消息。

High Frame Rate Video Capture - 高幀速率視頻捕獲

iOS 7.0 introduces high frame rate video capture support (also referred to as “SloMo” video) on selected hardware. The full AVFoundation framework supports high frame rate content.

You determine the capture capabilities of a device using the AVCaptureDeviceFormat class. This class has methods that return the supported media types, frame rates, field of view, maximum zoom factor, whether video stabilization is supported, and more.

  • Capture supports full 720p (1280 x 720 pixels) resolution at 60 frames per second (fps), including video stabilization and droppable P-frames (a feature of H264 encoded movies, which allows the movies to play back smoothly even on slower and older hardware).
  • Playback has enhanced audio support for slow and fast playback, allowing the time pitch of the audio to be preserved at slower or faster speeds.
  • Editing has full support for scaled edits in mutable compositions.
  • Export provides two options when supporting 60 fps movies. The variable frame rate, slow or fast motion, can be preserved, or the movie can be converted to an arbitrary slower frame rate such as 30 frames per second.

The SloPoke sample code demonstrates the AVFoundation support for fast video capture, determining whether hardware supports high frame rate video capture, playback using various rates and time pitch algorithms, and editing (including setting time scales for portions of a composition).

iOS 7.0 在特定的硬件上引入了高幀速率視頻捕獲支持(也被稱為 “SloMo” 視頻)。整個 AVFoundation 框架都支持高幀速率內(nèi)容。

使用 AVCaptureDeviceFormat 類確定設(shè)備的捕獲能力。該類具有返回所支持的媒體類型、幀速率、視場角、最大縮放因子、是否支持視頻穩(wěn)定等信息的方法。

  • 捕獲完全支持每秒 60 幀(fps)的完整 720p(1280 x 720 像素)分辨率,包括視頻穩(wěn)定和可丟棄的 P 幀(H264 編碼電影的一個特性,使電影即使在較慢、較舊的硬件上也能流暢回放)。
  • 播放增強了對慢速和快速播放的音頻支持,允許音頻的時間音高在較慢或較快的速度下得以保留。
  • 編輯完全支持可變組合(mutable composition)中的縮放編輯。
  • 導(dǎo)出在支持 60 fps 電影時提供了兩種選擇。可以保留可變幀速率(慢動作或快動作),或者將電影轉(zhuǎn)換為任意較慢的幀速率,比如每秒 30 幀。

SloPoke 示例代碼演示了 AVFoundation 對快速視頻捕獲的支持、確定硬件是否支持高幀速率視頻采集、使用不同速率和時距算法的播放,以及編輯(包括為組合的各部分設(shè)置時間尺度)。

  • Playback - 播放

An instance of AVPlayer manages most of the playback speed automatically by setting the setRate: method value. The value is used as a multiplier for the playback speed. A value of 1.0 causes normal playback, 0.5 plays back at half speed, 5.0 plays back five times faster than normal, and so on.

AVPlayer 的實例通過設(shè)置 setRate: 方法的值,自動管理大部分播放速度。該值被用作播放速度的乘數(shù)。值為 1.0 是正常播放,0.5 以半速播放,5.0 表示以正常速度的 5 倍播放,等等。

The AVPlayerItem object supports the audioTimePitchAlgorithm property. This property allows you to specify how audio is played when the movie is played at various frame rates using the Time Pitch Algorithm Settings constants.

AVPlayerItem 對象支持 audioTimePitchAlgorithm 屬性。此屬性允許你使用 Time Pitch Algorithm Settings 常量,指定以各種幀速率播放電影時音頻如何播放。
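A minimal sketch combining the rate and pitch settings described above(結(jié)合上述速率與音高設(shè)置的一個簡單示意):

```objective-c
AVPlayerItem *playerItem = <#The player item#>;
// Preserve the original pitch even at non-1.0 rates (supported from 1/32 to 32x).
// 即使在非 1.0 的速率下也保留原始音高(支持 1/32 到 32 倍速)。
playerItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;

AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
// Half-speed ("SloMo") playback.
// 半速("SloMo")播放。
[player setRate:0.5];
```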

The following table shows the supported time pitch algorithms, the quality, whether the algorithm causes the audio to snap to specific frame rates, and the frame rate range that each algorithm supports.

下表顯示了支持的時距算法、質(zhì)量、該算法是否會將音頻吸附到特定的幀速率,以及每個算法支持的速率范圍。

| Time pitch algorithm | Quality | Snaps to specific frame rate | Rate range |
| --- | --- | --- | --- |
| AVAudioTimePitchAlgorithmLowQualityZeroLatency | Low quality, suitable for fast-forward, rewind, or low quality voice. | YES | 0.5, 0.666667, 0.8, 1.0, 1.25, 1.5, 2.0 rates. |
| AVAudioTimePitchAlgorithmTimeDomain | Modest quality, less expensive computationally, suitable for voice. | NO | 0.5–2x rates. |
| AVAudioTimePitchAlgorithmSpectral | Highest quality, most expensive computationally, preserves the pitch of the original item. | NO | 1/32–32 rates. |
| AVAudioTimePitchAlgorithmVarispeed | High-quality playback with no pitch correction. | NO | 1/32–32 rates. |

  • Editing - 編輯

When editing, you use the AVMutableComposition class to build temporal edits.

  • Create a new AVMutableComposition instance using the composition class method.
  • Insert your video asset using the insertTimeRange:ofAsset:atTime:error: method.
  • Set the time scale of a portion of the composition using scaleTimeRange:toDuration:

當(dāng)編輯時,使用 AVMutableComposition 類來構(gòu)建時間上的編輯。

  • 使用 composition 類方法創(chuàng)建一個新的 AVMutableComposition 實例。
  • 使用 insertTimeRange:ofAsset:atTime:error: 方法插入視頻資產(chǎn)。
  • 使用 scaleTimeRange:toDuration: 設(shè)置組合中某一部分的時間尺度。
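The three steps above can be sketched as follows; the time values here are illustrative assumptions(上述三個步驟可以這樣示意;這里的時間值只是示例假設(shè)):

```objective-c
AVAsset *videoAsset = <#The asset to edit#>;
NSError *error = nil;

// 1. Create a new mutable composition. 創(chuàng)建一個新的可變組合。
AVMutableComposition *composition = [AVMutableComposition composition];

// 2. Insert the whole asset at the start of the composition. 在組合開頭插入整個資產(chǎn)。
[composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
                     ofAsset:videoAsset
                      atTime:kCMTimeZero
                       error:&error];

// 3. Slow the first second down to four seconds (quarter speed).
// 把第一秒放慢為四秒(四分之一速度)。
[composition scaleTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeMake(1, 1))
                 toDuration:CMTimeMake(4, 1)];
```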

  • Export - 導(dǎo)出

Exporting 60 fps video uses the AVAssetExportSession class to export an asset. The content can be exported using two techniques:

Use the AVAssetExportPresetPassthrough preset to avoid reencoding the movie. It retimes the media with the sections of the media tagged as section 60 fps, section slowed down, or section sped up.

Use a constant frame rate export for maximum playback compatibility. Set the frameDuration property of the video composition to 30 fps. You can also specify the time pitch by setting the export session’s audioTimePitchAlgorithm property.

導(dǎo)出 60 fps 視頻使用 AVAssetExportSession 類來導(dǎo)出資產(chǎn)。可以使用兩種技術(shù)導(dǎo)出內(nèi)容:

使用 AVAssetExportPresetPassthrough 預(yù)設(shè)可以避免重新編碼電影。它會根據(jù)被標(biāo)記為 60 fps、減速或加速的媒體片段對媒體重新定時。

使用恒定幀速率導(dǎo)出以獲得最大的播放兼容性。將視頻組合的 frameDuration 屬性設(shè)置為 30 fps。也可以通過設(shè)置導(dǎo)出會話的 audioTimePitchAlgorithm 屬性來指定時間音高。
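A hedged sketch of the pass-through option described above; the output URL and file type are assumptions for illustration(上述直通導(dǎo)出方式的一個簡單示意;輸出 URL 和文件類型是示例假設(shè)):

```objective-c
AVAsset *asset = <#The asset to export#>;
NSURL *outputURL = <#A file URL for the exported movie#>;

// Pass-through avoids re-encoding and preserves the retimed sections.
// 直通導(dǎo)出避免重新編碼,并保留重新定時的片段。
AVAssetExportSession *exportSession =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetPassthrough];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
exportSession.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;

[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status != AVAssetExportSessionStatusCompleted) {
        // Handle exportSession.error. 處理 exportSession.error。
    }
}];
```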

  • Recording - 錄制

You capture high frame rate video using the AVCaptureMovieFileOutput class, which automatically supports high frame rate recording. It will automatically select the correct H264 pitch level and bit rate.

To do custom recording, you must use the AVAssetWriter class, which requires some additional setup.

使用 AVCaptureMovieFileOutput 類捕獲高幀速率視頻,該類自動支持高幀速率錄制。它會自動選擇正確的 H264 級別和比特率。

要進(jìn)行自定義錄制,必須使用 AVAssetWriter 類,這需要一些額外的設(shè)置。

assetWriterInput.expectsMediaDataInRealTime=YES;

This setting ensures that the capture can keep up with the incoming data.

此設(shè)置確保捕獲可以跟上傳入的數(shù)據(jù)。

參考文獻(xiàn):
Yofer Zhang的博客
AVFoundation的蘋果官網(wǎng)
