iOS 11 ARKit: Can ARKit also capture the texture of the user's face?
I read the whole documentation on all the ARKit classes, up and down. I don't see any place that describes the ability to actually get the user's face texture.

ARFaceAnchor contains the ARFaceGeometry (topology and geometry comprised of vertices) and the blendShapes dictionary (coefficients keyed by BlendShapeLocation that let you manipulate individual facial features by doing geometric math on the face's vertices).

But where can I get the actual texture of the user's face? For example: the actual skin tone / color / texture, facial hair, and other unique traits such as scars or birthmarks. Or is this not possible at all?
ios ios11 arkit iphone-x
asked Nov 10 '17 at 14:43 – FranticRock
3 Answers
You want a texture-map-style image for the face? There’s no API that gets you exactly that, but all the information you need is there:
- ARFrame.capturedImage gets you the camera image.
- ARFaceGeometry gets you a 3D mesh of the face.
- ARAnchor and ARCamera together tell you where the face is in relation to the camera, and how the camera relates to the image pixels.
So it’s entirely possible to texture the face model using the current video frame image. For each vertex in the mesh...
- Convert the vertex position from model space to camera space (use the anchor’s transform)
- Multiply the camera projection matrix by that vector to get normalized image coordinates
- Divide by image width/height to get pixel coordinates
This gets you texture coordinates for each vertex, which you can then use to texture the mesh using the camera image. You could do this math all at once to replace the texture coordinate buffer that ARFaceGeometry provides (the last answer below sketches exactly that on the CPU), or do it in shader code on the GPU during rendering. (If you’re rendering using SceneKit / ARSCNView, you can probably do this in a shader modifier for the geometry entry point; a sketch follows.)
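A minimal sketch of that GPU route, assuming a faceNode whose geometry is the ARKit face mesh, and a hypothetical custom uniform u_displayTransform that you compute and upload each frame to map model-space positions into normalized image coordinates:

import SceneKit

let texcoordModifier = """
uniform mat4 u_displayTransform; // hypothetical: model space -> normalized image coords, updated per frame

#pragma body
vec4 imagePoint = u_displayTransform * vec4(_geometry.position.xyz, 1.0);
_geometry.texcoords[0] = imagePoint.xy / imagePoint.w;
"""

faceNode.geometry?.shaderModifiers = [.geometry: texcoordModifier]
// Each frame (e.g. in renderer(_:didUpdate:for:)) push the current matrix:
// faceNode.geometry?.setValue(NSValue(scnMatrix4: displayTransform), forKey: "u_displayTransform")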
If instead you want to know for each pixel in the camera image what part of the face geometry it corresponds to, it’s a bit harder. You can’t just reverse the above math because you’re missing a depth value for each pixel... but if you don’t need to map every pixel, SceneKit hit testing is an easy way to get geometry for individual pixels, as sketched below.
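A minimal sketch of that hit-testing route, assuming an ARSCNView named sceneView that is already rendering the face mesh and a screenPoint (in view coordinates) you want to map:

// SceneKit hit test: which part of the face mesh is behind this pixel?
let hits = sceneView.hitTest(screenPoint, options: nil)
if let hit = hits.first {
    let meshPoint = hit.localCoordinates                   // position on the mesh
    let uv = hit.textureCoordinates(withMappingChannel: 0) // mesh UVs at the hit
    print("face point: \(meshPoint), uv: \(uv)")
}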
If what you’re actually asking for is landmark recognition (e.g. where in the camera image the eyes, nose, beard, etc. are), there’s no API in ARKit for that. The Vision framework might help.

answered Nov 11 '17 at 0:53 – rickster
Thank you very much for the great answer. I'm following the approach of manipulating the texture coordinate buffer from ARFaceGeometry, and it's looking promising.
– FranticRock
Nov 14 '17 at 21:50
@rickster Can I assume (with the "all at once" technique above) that rather than attempting to manipulate the camera image to fit the texture coordinates, one should leave the camera image alone and manipulate the texture coords in the .obj file instead?
– coco
Nov 30 '17 at 15:17
@coco What obj file? ARFaceGeometry provides a new face mesh, with vertex positions updated to match the current pose/expression of the face, on every frame. So “all at once” is “all at once per frame”; that is, each time you get a new anchor with updated geometry, you run through its vertex buffer and generate a new texture coordinate buffer mapping each vertex to the point in the video image currently “behind” that vertex.
– rickster
Nov 30 '17 at 18:18
In neither of my suggested approaches do you manipulate the image — it’s all about manipulating texture coordinates (using vertex position data) so that your texture sample into the image gets you pixels matching where the face mesh currently is. “All at once” means processing the whole vertex buffer (likely on CPU); the alternative is to do it on the GPU during render time, since messing with vertex attributes (like position/texcoord) is exactly what vertex shaders are for.
– rickster
Nov 30 '17 at 18:22
Thank you @rickster. In my exploration of this, I'm first working on a single frame, which is why I'm exporting the data as an .obj file, to more easily view the result.
– coco
Nov 30 '17 at 19:44
No. That information is not currently available in ARKit.

To detect other facial features, you'll need to run your own custom computer vision code. You can capture images from the front-facing camera using AVFoundation.
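As one hedged illustration of such custom code (assuming a pixelBuffer already captured from the front camera; the orientation value depends on your capture setup), Vision on iOS 11 can locate face landmarks:

import Vision

let request = VNDetectFaceLandmarksRequest { request, _ in
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        // Landmark regions are returned in normalized image coordinates
        if let leftEye = face.landmarks?.leftEye {
            print("left eye:", leftEye.normalizedPoints)
        }
    }
}
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .leftMirrored)
try? handler.perform([request])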
answered Nov 10 '17 at 17:08 – nathangitter
You can calculate the texture coordinates as follows:
let geometry = faceAnchor.geometry
let vertices = geometry.vertices
let size = arFrame.camera.imageResolution
let camera = arFrame.camera
let modelMatrix = faceAnchor.transform

let textureCoordinates = vertices.map { vertex -> vector_float2 in
    // Lift the vertex into homogeneous coordinates and move it to world space
    let vertex4 = vector_float4(vertex.x, vertex.y, vertex.z, 1)
    let world_vertex4 = simd_mul(modelMatrix, vertex4)
    let world_vector3 = simd_float3(x: world_vertex4.x, y: world_vertex4.y, z: world_vertex4.z)
    // Project into the captured image; width/height are swapped because the
    // captured image is landscape while the viewport here is portrait
    let pt = camera.projectPoint(world_vector3,
                                 orientation: .portrait,
                                 viewportSize: CGSize(width: CGFloat(size.height),
                                                      height: CGFloat(size.width)))
    let v = 1.0 - Float(pt.x) / Float(size.height)
    let u = Float(pt.y) / Float(size.width)
    return vector_float2(u, v)
}
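To put those coordinates to use, one option (a sketch with illustrative names, assuming the variables above are in scope) is to rebuild an SCNGeometry whose texture-coordinate source is the computed buffer:

import SceneKit

let vertexSource = SCNGeometrySource(vertices: vertices.map { SCNVector3($0.x, $0.y, $0.z) })
let uvData = Data(bytes: textureCoordinates,
                  count: textureCoordinates.count * MemoryLayout<vector_float2>.stride)
let uvSource = SCNGeometrySource(data: uvData,
                                 semantic: .texcoord,
                                 vectorCount: textureCoordinates.count,
                                 usesFloatComponents: true,
                                 componentsPerVector: 2,
                                 bytesPerComponent: MemoryLayout<Float>.stride,
                                 dataOffset: 0,
                                 dataStride: MemoryLayout<vector_float2>.stride)
let element = SCNGeometryElement(indices: geometry.triangleIndices,
                                 primitiveType: .triangles)
let texturedGeometry = SCNGeometry(sources: [vertexSource, uvSource],
                                   elements: [element])
// Set the material's diffuse contents to the camera image (e.g. a CGImage
// made from arFrame.capturedImage) to texture the face with it.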
answered Nov 12 '18 at 2:47 – ansont (edited Nov 12 '18 at 3:11 by Stephen Rauch)