Blender facial animation tool. What else should it do?
58 points | 2 days ago | 5 comments | github.com
phkahler
22 minutes ago
Would it be possible to integrate this directly into Blender instead of as an add-on? If so, note that I think Blender is GPLv2 and this is GPLv3. If merging is something you see in the future, you may want to change that.
reactordev
6 hours ago
Yessssssss!!!!! However, we need something that doesn't rely on an iPhone. We need webcam support. You can use your iPhone as a webcam, and you can also use more powerful video devices as webcams. I would love a DIY mudface map that's a b/w displacement map, so you can capture the wrinkles of the face and map that with Blender trackers. Seriously though, this is a huge leap towards that future.
dagmx
5 hours ago
This repo doesn’t provide any computer vision algorithms. It’s taking the values the phone is providing for facial activations.

You’re asking for a different project altogether.

echelon
3 hours ago
Here you go:

https://3d.kalidoface.com/

100% webcam-based skeletal body and facial blendshape tracking. The models are from Google and are open source.

syntaxing
5 hours ago
As others have said, it's using the iOS facial-detection API that relies on the front TrueDepth camera (i.e. the camera used for Face ID).
dymk
5 hours ago
Is it using structured light / lidar of the iPhone, or just the camera? I don’t know how the project works, but calling out iPhone specifically makes me think it’s using a hardware feature that isn’t in a generic webcam.
dagmx
5 hours ago
It’s specifically using the ARKit facial tracking that gives you FACS blend shape values

https://developer.apple.com/documentation/ARKit/tracking-and...

This Blender plugin is basically just receiving those values from the OS API and applying them. It's a fairly common integration, and almost all alternatives likewise depend on ARKit on an iPhone rather than implementing any algorithms themselves.

Variations of this plugin's functionality have existed since the introduction of the iPhone X in 2017.
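The flow described above — receive per-frame blendshape coefficients from the phone, apply them to shape keys — can be sketched in plain Python. The payload format and names here are assumptions for illustration, not taken from the repo:

```python
import json

# Sketch, not the repo's actual code. ARKit publishes ~52 FACS-style
# blendshape coefficients per frame, each a float in [0, 1]
# (e.g. "jawOpen", "eyeBlinkLeft"). The JSON wire format here is assumed.

def parse_frame(packet: bytes) -> dict:
    """Decode one frame of blendshape coefficients, clamped to [0, 1]."""
    raw = json.loads(packet)
    return {name: min(max(float(v), 0.0), 1.0) for name, v in raw.items()}

def apply_to_shape_keys(coeffs: dict, shape_keys: dict) -> None:
    """Copy coefficients onto matching shape keys. In real Blender code this
    would set mesh.shape_keys.key_blocks[name].value instead of a dict entry."""
    for name, value in coeffs.items():
        if name in shape_keys:
            shape_keys[name] = value

packet = json.dumps({"jawOpen": 0.42, "eyeBlinkLeft": 1.7}).encode()
keys = {"jawOpen": 0.0, "eyeBlinkLeft": 0.0, "browInnerUp": 0.0}
apply_to_shape_keys(parse_frame(packet), keys)
print(keys)  # jawOpen -> 0.42; out-of-range eyeBlinkLeft clamped to 1.0
```

The clamp matters because network hiccups or a misbehaving sender can deliver values outside the range ARKit promises.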

s1mplicissimus
5 hours ago
The face-recognition trick (generating a 3D vertex mesh for the video) should also be doable with a homelab setup. I assume lidar would improve the signal a lot by adding factually correct depth values, though.
s1mplicissimus
6 hours ago
I would love to play around with this, but I don't own an iPhone :/ Using a webcam + local model for detection as input would be awesome.
embedding-shape
6 hours ago
Being able to record and manage takes directly in Blender would be an awesome feature and first thing that pops into mind :)
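Under the hood, "takes" could simply be buffered streams of timestamped coefficient frames. Here is a minimal sketch of that idea; all the names are hypothetical, and a real add-on would turn the buffered frames into keyframes via Blender's bpy API rather than keep them in memory:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Take:
    """One recorded performance: a list of (time, coefficients) frames."""
    name: str
    frames: list = field(default_factory=list)

class TakeRecorder:
    """Buffers incoming blendshape frames into named takes (hypothetical API)."""
    def __init__(self) -> None:
        self.takes = []
        self._current: Optional[Take] = None

    def start(self, name: str) -> None:
        self._current = Take(name)

    def feed(self, t: float, coeffs: dict) -> None:
        # Frames that arrive while not recording are ignored.
        if self._current is not None:
            self._current.frames.append((t, dict(coeffs)))

    def stop(self) -> None:
        if self._current is not None:
            self.takes.append(self._current)
            self._current = None

rec = TakeRecorder()
rec.start("take_01")
rec.feed(0.00, {"jawOpen": 0.1})
rec.feed(0.04, {"jawOpen": 0.3})
rec.stop()
print(rec.takes[0].name, len(rec.takes[0].frames))  # take_01 2
```

Copying each coefficient dict on `feed` keeps a take immutable even if the caller reuses one dict per frame.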
nmfisher
5 hours ago
I have an add-on that does the same thing as OP's, with a free (open source) version and a paid version. The paid version lets you record directly to Blender.

https://nickfisher.gumroad.com/l/tvzndw

DonHopkins
5 hours ago
There's a nice Blender extension called "FaceIt" that I used a few years ago for rigging and producing ARKit-compatible facial animations and characters. It worked quite well for what it was designed to do, and I recommend it!

https://superhivemarket.com/products/faceit

>Faceit is a Blender Add-on that assists you in creating complex facial expressions for arbitrary 3D characters.

>An intuitive, semi-automatic and non-destructive workflow guides you through the creation of facial shape keys that are perfectly adapted to your 3D model's topology and morphology, whether it’s a photorealistic human model or a cartoonish character. You maintain full artistic control while saving a ton of time and energy.

https://faceit-doc.readthedocs.io/en/latest/FAQ/

This is a great explanation of how FaceIt works, facial animation, shape keys, face rigs, ARKit, etc:

This addon automates Facial Animation (FACEIT Tut 1)

https://www.youtube.com/watch?v=KQ32KRYq6RA&list=PLdcL5aF8Zc...
