It showed someone participating in a group video call. But it only showed what the other people (who were not wearing headsets) look like. What does the Vision Pro user look like? Does it just use animoji or something?
EDIT: sounds like they render a realistic animated image of you, if I caught that correctly?
I suspect they'll ask users to enroll/log in with the camera for IPD calibration, then build a Live2D-style model from that capture, driven by the interior cameras and lip sync.
Edit: and yep, your head is modeled and a doppelganger of you will show in calls.
I wonder how this will work without the ability to capture full facial expressions. Can they extrapolate from the features they can see, to fill in the gaps of what they can't see? Or will tone of voice be sufficient to let people know when you're happy or upset?
This could be the next generation of "tone gets lost in text".
From what they showed, it will be a realistic-looking 3D avatar of yourself. It's well into uncanny valley territory, but it will likely improve over time. What they demoed looks a bit like a blurry deepfake.