Use the information provided by a face tracking AR session to place and animate 3D content.
Overview
This sample app presents a simple interface allowing you to choose between four augmented reality (AR) visualizations on devices with a TrueDepth front-facing camera (see iOS Device Compatibility Reference).
1. The camera view alone, without any AR content.
2. The face mesh provided by ARKit, with automatic estimation of the real-world directional lighting environment.
3. Virtual 3D content that appears to attach to (and be obscured by parts of) the user’s real face.
4. A simple robot character whose facial expression is animated to match that of the user.
Use the “+” button in the sample app to switch between these modes, as shown below.
Start a Face Tracking Session in a SceneKit View
Like other uses of ARKit, face tracking requires configuring and running a session (an ARSession object) and rendering the camera image together with virtual content in a view. For more detailed explanations of session and view setup, see About Augmented Reality and ARKit and Building Your First AR Experience. This sample uses SceneKit to display an AR experience, but you can also use SpriteKit or build your own renderer using Metal (see ARSKView and Displaying an AR Experience with Metal).
Face tracking differs from other uses of ARKit in the class you use to configure the session. To enable face tracking, create an instance of ARFaceTrackingConfiguration, configure its properties, and pass it to the runWithConfiguration:options: method of the AR session associated with your view, as shown below.
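A minimal Swift sketch of that setup follows; the class name and the sceneView outlet are placeholders rather than the sample's published interface:

```swift
import ARKit
import SceneKit
import UIKit

class FaceTrackingViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!   // placeholder outlet to the view rendering the AR scene

    func resetTracking() {
        // ARFaceTrackingConfiguration drives the session from the TrueDepth camera.
        let configuration = ARFaceTrackingConfiguration()
        configuration.isLightEstimationEnabled = true
        // Restart tracking and discard any anchors left over from a previous run.
        sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }
}
```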
Before offering your user features that require a face tracking AR session, check the isSupported property on the ARFaceTrackingConfiguration class to determine whether the current device supports ARKit face tracking.
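For example, a guard like the following keeps the session from starting on unsupported hardware (a sketch that reuses the resetTracking method above; where you perform the check depends on your UI):

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Face tracking requires a TrueDepth camera; bail out (or hide the feature) otherwise.
    guard ARFaceTrackingConfiguration.isSupported else { return }
    resetTracking()
}
```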
Track the Position and Orientation of a Face
When face tracking is active, ARKit automatically adds ARFaceAnchor objects to the running AR session, containing information about the user’s face, including its position and orientation.
Note
ARKit detects and provides information about only one user’s face. If multiple faces are present in the camera image, ARKit chooses the largest or most clearly recognizable face.
In a SceneKit-based AR experience, you can add 3D content corresponding to a face anchor in the renderer:didAddNode:forAnchor: method (from the ARSCNViewDelegate protocol). ARKit adds a SceneKit node for the anchor, and updates that node’s position and orientation on each frame, so any SceneKit content you add to that node automatically follows the position and orientation of the user’s face.
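In Swift, that delegate method is spelled renderer(_:didAdd:for:). A sketch of the hand-off, assuming the faceNode property and setupFaceNodeContent method discussed in the next paragraph live on the same view controller:

```swift
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARFaceAnchor else { return }
    // ARKit owns this node and keeps it aligned with the face on every frame,
    // so anything added as a child automatically follows the user's face.
    faceNode = node
    setupFaceNodeContent()
}
```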
In this example, the renderer:didAddNode:forAnchor: method calls the setupFaceNodeContent method to add SceneKit content to the faceNode. For example, if you change the showsCoordinateOrigin variable in the sample code, the app adds a visualization of the x/y/z axes to the node, indicating the origin of the face anchor’s coordinate system.
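The mode-specific details of setupFaceNodeContent aren't reproduced here; a rough sketch of just the coordinate-origin case might look like this (makeAxesNode is a hypothetical helper standing in for whatever the sample uses to build the axis markers):

```swift
func setupFaceNodeContent() {
    guard let node = faceNode else { return }
    // Clear out content added for the previously selected mode.
    node.childNodes.forEach { $0.removeFromParentNode() }
    if showsCoordinateOrigin {
        // Hypothetical helper that returns thin boxes along the x/y/z axes.
        node.addChildNode(makeAxesNode())
    }
}
```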
Use Face Geometry to Model the User’s Face
ARKit provides a coarse 3D mesh geometry matching the size, shape, topology, and current facial expression of the user’s face. ARKit also provides the ARSCNFaceGeometry class, offering an easy way to visualize this mesh in SceneKit.
Your AR experience can use this mesh to place or draw content that appears to attach to the face. For example, by applying a semitransparent texture to this geometry you could paint virtual tattoos or makeup onto the user’s skin.
To create a SceneKit face geometry, initialize an ARSCNFaceGeometry object with the Metal device your SceneKit view uses for rendering:
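For instance, a sketch using the same sceneView placeholder as above (note that ARSCNFaceGeometry's initializer is failable and requires a Metal-capable device):

```swift
func makeFaceMaskNode() -> SCNNode? {
    // The view's Metal device backs the face geometry's vertex buffers.
    guard let device = sceneView.device,
          let maskGeometry = ARSCNFaceGeometry(device: device) else { return nil }
    maskGeometry.firstMaterial?.lightingModel = .physicallyBased
    return SCNNode(geometry: maskGeometry)
}
```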
The sample code’s setupFaceNodeContent method (mentioned above) adds a node containing the face geometry to the scene. By making that node a child of the node provided by the face anchor, the face model automatically tracks the position and orientation of the user’s face.
To also make the face model onscreen conform to the shape of the user’s face, even as the user blinks, talks, and makes various facial expressions, you need to retrieve an updated face mesh in the renderer:didUpdateNode:forAnchor: delegate callback.
Then, update the ARSCNFaceGeometry object in your scene to match by passing the new face mesh to its updateFromFaceGeometry: method:
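Both steps fit in the Swift spelling of that callback, renderer(_:didUpdate:for:); this sketch assumes the face geometry was installed as the geometry of the anchor's node:

```swift
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
    // Push the latest mesh (blinks, speech, expressions) into the SceneKit geometry.
    faceGeometry.update(from: faceAnchor.geometry)
}
```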
Place 3D Content on the User’s Face
Another use of the face mesh that ARKit provides is to create occlusion geometry in your scene. An occlusion geometry is a 3D model that doesn’t render any visible content (allowing the camera image to show through), but obstructs the camera’s view of other virtual content in the scene.
This technique creates the illusion that the real face interacts with virtual objects, even though the face is a 2D camera image and the virtual content is a rendered 3D object. For example, if you place an occlusion geometry and virtual glasses on the user’s face, the face can obscure the frame of the glasses.
To create an occlusion geometry for the face, start by creating an ARSCNFaceGeometry object as in the previous example. However, instead of configuring that object’s SceneKit material with a visible appearance, set the material to render depth but not color during rendering:
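A sketch of that material setup; giving the occlusion node a negative renderingOrder so it draws before other virtual content is one way to make the depth-only pass effective (the sample's exact ordering may differ):

```swift
func makeOcclusionNode() -> SCNNode? {
    guard let device = sceneView.device,
          let occlusionGeometry = ARSCNFaceGeometry(device: device) else { return nil }
    // Write depth but no color: the camera image shows through where the face is,
    // while virtual content behind the face is still hidden by the depth buffer.
    occlusionGeometry.firstMaterial?.colorBufferWriteMask = []
    let occlusionNode = SCNNode(geometry: occlusionGeometry)
    occlusionNode.renderingOrder = -1   // draw before other content so its depth is in place
    return occlusionNode
}
```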
Because the material renders depth, other objects rendered by SceneKit correctly appear in front of it or behind it. But because the material doesn’t render color, the camera image appears in its place. The sample app combines this technique with a SceneKit object positioned in front of the user’s eyes, creating an effect where the object is realistically obscured by the user’s nose.
Animate a Character with Blend Shapes
In addition to the face mesh shown in the above examples, ARKit also provides a more abstract model of the user’s facial expressions in the form of a blendShapes dictionary. You can use the named coefficient values in this dictionary to control the animation parameters of your own 2D or 3D assets, creating a character (such as an avatar or puppet) that follows the user’s real facial movements and expressions.
As a basic demonstration of blend shape animation, this sample includes a simple model of a robot character’s head, created using SceneKit primitive shapes. (See the robotHead.scn file in the source code.)
To get the user’s current facial expression, read the blendShapes dictionary from the face anchor in the renderer:didUpdateNode:forAnchor: delegate callback:
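In Swift, the callback is renderer(_:didUpdate:for:) and the dictionary is keyed by ARFaceAnchor.BlendShapeLocation; the robotHead.update(with:) hand-off shown here is an assumed method name used for illustration:

```swift
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    // Each value is an NSNumber in 0.0...1.0 describing how strongly that
    // facial feature is expressed in the current frame.
    let blendShapes = faceAnchor.blendShapes
    robotHead?.update(with: blendShapes)   // assumed hand-off to the character model
}
```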
Then, examine the key-value pairs in that dictionary to calculate animation parameters for your model. There are 52 unique ARBlendShapeLocation coefficients. Your app can use as few or as many of them as necessary to create the artistic effect you want. In this sample, the RobotHead class performs this calculation, mapping the ARBlendShapeLocationEyeBlinkLeft and ARBlendShapeLocationEyeBlinkRight parameters to one axis of the scale factor of the robot’s eyes, and the ARBlendShapeLocationJawOpen parameter to offset the position of the robot’s jaw.
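A sketch of that mapping, assuming the robot model exposes leftEyeNode, rightEyeNode, and jawNode child nodes and stores closedJawY and maxJawDrop values (all of these names are placeholders; in Swift the coefficients are .eyeBlinkLeft, .eyeBlinkRight, and .jawOpen):

```swift
// Inside a RobotHead-like class; the node and constant names below are assumed properties.
func update(with blendShapes: [ARFaceAnchor.BlendShapeLocation: NSNumber]) {
    guard let eyeBlinkLeft = blendShapes[.eyeBlinkLeft]?.floatValue,
          let eyeBlinkRight = blendShapes[.eyeBlinkRight]?.floatValue,
          let jawOpen = blendShapes[.jawOpen]?.floatValue else { return }
    // Flatten each eye along one axis as its blink coefficient approaches 1.0.
    leftEyeNode.scale.z = 1 - eyeBlinkLeft
    rightEyeNode.scale.z = 1 - eyeBlinkRight
    // Lower the jaw in proportion to how wide the mouth is open.
    jawNode.position.y = closedJawY - maxJawDrop * jawOpen
}
```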