Creating Face-Based AR Experiences

Use the information provided by a face tracking AR session to place and animate 3D content.


Overview

This sample app presents a simple interface allowing you to choose among four augmented reality (AR) visualizations on devices with a TrueDepth front-facing camera (see the iOS Device Compatibility Reference).

1. The camera view alone, without any AR content.

2. The face mesh provided by ARKit, with automatic estimation of the real-world directional lighting environment.

3. Virtual 3D content that appears to attach to (and be obscured by parts of) the user’s real face.

4. A simple robot character whose facial expression is animated to match that of the user.

Use the “+” button in the sample app to switch between these modes, as shown below.



Start a Face Tracking Session in a SceneKit View

Like other uses of ARKit, face tracking requires configuring and running a session (an ARSession object) and rendering the camera image together with virtual content in a view. For more detailed explanations of session and view setup, see About Augmented Reality and ARKit and Building Your First AR Experience. This sample uses SceneKit to display an AR experience, but you can also use SpriteKit or build your own renderer using Metal (see ARSKView and Displaying an AR Experience with Metal).

Face tracking differs from other uses of ARKit in the class you use to configure the session. To enable face tracking, create an instance of ARFaceTrackingConfiguration, configure its properties, and pass it to the runWithConfiguration:options: method of the AR session associated with your view, as shown below.
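A minimal sketch of this setup in Swift, where runWithConfiguration:options: surfaces as session.run(_:options:); the view-controller and outlet names (FaceTrackingViewController, sceneView) are assumptions, not necessarily the sample's own:

```swift
import ARKit
import SceneKit
import UIKit

class FaceTrackingViewController: UIViewController {
    // Assumed outlet name; the sample's actual property may differ.
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Configure a face tracking session and run it on the view's session.
        let configuration = ARFaceTrackingConfiguration()
        configuration.isLightEstimationEnabled = true
        sceneView.session.run(configuration,
                              options: [.resetTracking, .removeExistingAnchors])
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the session when the view is offscreen to save power.
        sceneView.session.pause()
    }
}
```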


Before offering your user features that require a face tracking AR session, check the isSupported property on the ARFaceTrackingConfiguration class to determine whether the current device supports ARKit face tracking.
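The availability check can be sketched as follows; whether to fail hard or simply hide the feature is an app-level choice, shown here with a hard failure for brevity:

```swift
import ARKit

func verifyFaceTrackingSupport() {
    // ARFaceTrackingConfiguration.isSupported is false on devices
    // without a TrueDepth front-facing camera.
    guard ARFaceTrackingConfiguration.isSupported else {
        // Alternatively, disable or hide face tracking UI instead of crashing.
        fatalError("Face tracking requires a device with a TrueDepth camera.")
    }
}
```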


Track the Position and Orientation of a Face

When face tracking is active, ARKit automatically adds ARFaceAnchor objects to the running AR session, containing information about the user’s face, including its position and orientation.

Note

ARKit detects and provides information about only one user’s face. If multiple faces are present in the camera image, ARKit chooses the largest or most clearly recognizable face.

In a SceneKit-based AR experience, you can add 3D content corresponding to a face anchor in the renderer:didAddNode:forAnchor: method (from the ARSCNViewDelegate protocol). ARKit adds a SceneKit node for the anchor and updates that node’s position and orientation on each frame, so any SceneKit content you add to that node automatically follows the position and orientation of the user’s face.
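In Swift this delegate method is renderer(_:didAdd:for:). A minimal sketch, where the stored faceNode property and the setupFaceNodeContent helper are assumptions based on the surrounding description of the sample:

```swift
import ARKit
import SceneKit

extension FaceTrackingViewController: ARSCNViewDelegate {
    // ARKit calls this after it adds a SceneKit node for a new anchor.
    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARFaceAnchor else { return }

        // Anything parented to `node` automatically follows the
        // position and orientation of the user's face on every frame.
        // The sample reportedly keeps a reference and populates it:
        // faceNode = node
        // setupFaceNodeContent()
    }
}
```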


In this example, the renderer:didAddNode:forAnchor: method calls the setupFaceNodeContent method to add SceneKit content to the faceNode. For example, if you change the showsCoordinateOrigin variable in the sample code, the app adds a visualization of the x/y/z axes to the node, indicating the origin of the face anchor’s coordinate system.


Use Face Geometry to Model the User’s Face

ARKit provides a coarse 3D mesh geometry matching the size, shape, topology, and current facial expression of the user’s face. ARKit also provides the ARSCNFaceGeometry class, offering an easy way to visualize this mesh in SceneKit.

Your AR experience can use this mesh to place or draw content that appears to attach to the face. For example, by applying a semitransparent texture to this geometry you could paint virtual tattoos or makeup onto the user’s skin.

To create a SceneKit face geometry, initialize an ARSCNFaceGeometry object with the Metal device your SceneKit view uses for rendering:
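A sketch of that initialization; the wireframe fill mode is an illustrative material choice, not necessarily what the sample uses:

```swift
import ARKit
import SceneKit

// Build a node that visualizes the ARKit face mesh.
func makeFaceNode(in sceneView: ARSCNView) -> SCNNode? {
    // ARSCNFaceGeometry is Metal-backed, so it needs the Metal
    // device the SceneKit view renders with; the initializer is failable.
    guard let device = sceneView.device,
          let faceGeometry = ARSCNFaceGeometry(device: device) else {
        return nil
    }

    // Illustrative look: draw the mesh as lines so the camera image shows through.
    faceGeometry.firstMaterial?.fillMode = .lines

    return SCNNode(geometry: faceGeometry)
}
```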


The sample code’s setupFaceNodeContent method (mentioned above) adds a node containing the face geometry to the scene. By making that node a child of the node provided by the face anchor, the face model automatically tracks the position and orientation of the user’s face.

To also make the face model onscreen conform to the shape of the user’s face, even as the user blinks, talks, and makes various facial expressions, you need to retrieve an updated face mesh in the renderer:didUpdateNode:forAnchor: delegate callback.



Then, update the ARSCNFaceGeometry object in your scene to match by passing the new face mesh to its updateFromFaceGeometry: method:
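Both steps can be sketched in one delegate callback; in Swift, updateFromFaceGeometry: surfaces as update(from:). This assumes the face geometry node was attached as described above:

```swift
import ARKit
import SceneKit

extension FaceTrackingViewController: ARSCNViewDelegate {
    // Called on each frame in which the face anchor's data changes.
    func renderer(_ renderer: SCNSceneRenderer,
                  didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
            return
        }

        // Deform the SceneKit mesh to match the user's current expression.
        faceGeometry.update(from: faceAnchor.geometry)
    }
}
```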



Place 3D Content on the User’s Face

Another use of the face mesh that ARKit provides is to create occlusion geometry in your scene. An occlusion geometry is a 3D model that doesn’t render any visible content (allowing the camera image to show through), but obstructs the camera’s view of other virtual content in the scene.

This technique creates the illusion that the real face interacts with virtual objects, even though the face is a 2D camera image and the virtual content is a rendered 3D object. For example, if you place an occlusion geometry and virtual glasses on the user’s face, the face can obscure the frame of the glasses.

To create an occlusion geometry for the face, start by creating an ARSCNFaceGeometry object as in the previous example. However, instead of configuring that object’s SceneKit material with a visible appearance, set the material to render depth but not color during rendering:
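A sketch of an occlusion node built this way; the function name is an assumption, but colorBufferWriteMask and renderingOrder are the standard SceneKit tools for depth-only rendering:

```swift
import ARKit
import SceneKit

func makeOcclusionNode(in sceneView: ARSCNView) -> SCNNode? {
    guard let device = sceneView.device,
          let occlusionGeometry = ARSCNFaceGeometry(device: device) else {
        return nil
    }

    // Write depth but no color: the camera image shows through where the
    // face is, while virtual content behind the face is hidden.
    let material = occlusionGeometry.firstMaterial!
    material.colorBufferWriteMask = []

    let node = SCNNode(geometry: occlusionGeometry)
    // Render before other content so the face fills the depth buffer first.
    node.renderingOrder = -1
    return node
}
```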



Because the material renders depth, other objects rendered by SceneKit correctly appear in front of it or behind it. But because the material doesn’t render color, the camera image appears in its place. The sample app combines this technique with a SceneKit object positioned in front of the user’s eyes, creating an effect where the object is realistically obscured by the user’s nose.


Animate a Character with Blend Shapes

In addition to the face mesh shown in the above examples, ARKit also provides a more abstract model of the user’s facial expressions in the form of a blendShapes dictionary. You can use the named coefficient values in this dictionary to control the animation parameters of your own 2D or 3D assets, creating a character (such as an avatar or puppet) that follows the user’s real facial movements and expressions.

As a basic demonstration of blend shape animation, this sample includes a simple model of a robot character’s head, created using SceneKit primitive shapes. (See the robotHead.scn file in the source code.)

To get the user’s current facial expression, read the blendShapes dictionary from the face anchor in the renderer:didUpdateNode:forAnchor: delegate callback:
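A minimal sketch of that read; each value is an NSNumber coefficient from 0.0 (neutral) to 1.0 (maximum movement), and the update(with:) forwarding call is a hypothetical stand-in for however your character model consumes them:

```swift
import ARKit
import SceneKit

func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }

    // A dictionary of named expression coefficients, 0.0...1.0 each.
    let blendShapes = faceAnchor.blendShapes

    // For example, how open the user's mouth currently is:
    let jawOpen = blendShapes[.jawOpen]?.floatValue ?? 0
    _ = jawOpen

    // Hypothetical forwarding to a character model:
    // robotHead.update(with: blendShapes)
}
```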



Then, examine the key-value pairs in that dictionary to calculate animation parameters for your model. There are 52 unique ARBlendShapeLocation coefficients. Your app can use as few or as many of them as necessary to create the artistic effect you want. In this sample, the RobotHead class performs this calculation, mapping the ARBlendShapeLocationEyeBlinkLeft and ARBlendShapeLocationEyeBlinkRight parameters to one axis of the scale factor of the robot’s eyes, and the ARBlendShapeLocationJawOpen parameter to offset the position of the robot’s jaw.
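That mapping might be sketched as follows; the node names, the choice of the z scale axis, and the jaw-offset convention are assumptions for illustration, not the RobotHead class's exact code:

```swift
import ARKit
import SceneKit

// Hypothetical sketch of the kind of mapping the RobotHead class performs.
func applyBlendShapes(_ blendShapes: [ARFaceAnchor.BlendShapeLocation: NSNumber],
                      eyeLeft: SCNNode, eyeRight: SCNNode,
                      jaw: SCNNode, jawRestPosition: SCNVector3, jawHeight: Float) {
    // Squash one axis of each eye as the corresponding lid closes.
    if let blinkLeft = blendShapes[.eyeBlinkLeft]?.floatValue {
        eyeLeft.scale.z = 1 - blinkLeft
    }
    if let blinkRight = blendShapes[.eyeBlinkRight]?.floatValue {
        eyeRight.scale.z = 1 - blinkRight
    }
    // Slide the jaw down in proportion to how open the mouth is.
    if let jawOpen = blendShapes[.jawOpen]?.floatValue {
        jaw.position = SCNVector3(jawRestPosition.x,
                                  jawRestPosition.y - jawHeight * jawOpen,
                                  jawRestPosition.z)
    }
}
```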

然后狐榔,檢查該字典中的鍵值對(duì)获雕,以計(jì)算模型的動(dòng)畫(huà)參數(shù)届案。 有52個(gè)獨(dú)特的ARBlendShapeLocation系數(shù)楣颠。 您的應(yīng)用程序可以使用盡可能少的或盡可能多的必要條件來(lái)創(chuàng)建您想要的藝術(shù)效果咐蚯。 在此示例中春锋,RobotHead類執(zhí)行此計(jì)算,將ARBlendShapeLocationEyeBlinkLeft和ARBlendShapeLocationEyeBlinkRight參數(shù)映射到機(jī)器人眼睛比例因子的一個(gè)軸侧馅,并使用ARBlendShapeLocationJawOpen參數(shù)來(lái)偏移機(jī)器人頜部的位置馁痴。

