
They're talking about this now. Sounds basically like this, except 3D. They also capture lips (using downward-facing cameras, I guess).


I wonder how this will work without the ability to capture full facial expressions. Can they extrapolate from the features they can see to fill in the gaps of what they can't? Or will tone of voice be sufficient to let people know when you're happy or upset?

This could be the next generation of "tone gets lost in text".


From what they showed, it will be a realistic-looking 3D avatar of yourself. It's quite far into uncanny-valley territory, but it will likely improve over time. What they demoed looks a bit like a blurry deepfake.



