1 Minute in the Metaverse 🌐 Breathing Life Into Digital Humans
I’d like to start with how you began working with avatars. And what is your mission today?
Avatars have been around for a long time, actually; it’s only recently that we’ve started calling them that. I refer to them as “characters” because of the role they play in games and movies, which was my field to begin with. For quite some time now, these industries have had pipelines focused specifically on making avatars. And today I’m seeing a new trend, where companies like Facebook and Apple are hiring people like us. We used to focus on making movies and games, entertainment generally, but now the work is shifting more towards tech. I see a lot of character artists adapting to this shift.
As for me, I’ve been working in this field for 20 years now, and I founded Reblika two years ago.
The key problem we’re trying to solve with my team is the uncanny valley: the unsettling emotional response that occurs when we encounter a virtual human that doesn’t quite behave or look right. It gives us an uneasy feeling, because it looks unnatural, a bit creepy.
So, your goal is to make digital humans so realistic they become indistinguishable from humans, correct?
You can put it like that, yes. It’s a tough problem to solve, though. A “perfect” digital human can be broken down into four elements: the face, the voice, the brain, and the motion. The brain is essentially the AI chatbot, controlling what the digital human will say. The face is what we’re working on: making the best-looking, most realistic face. The voice must sound natural too, which is not that hard to achieve technologically these days. Then of course you have the motion: how do we move when we talk? What kind of expressions and nonverbal communication do we exhibit when we talk?
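As a purely illustrative aside, that four-element breakdown can be sketched as a simple composition of parts. The Python sketch below is a hypothetical structure; all class and field names are assumptions for illustration, not Reblika’s actual architecture.

```python
from dataclasses import dataclass

# Hypothetical sketch of the four elements of a "perfect" digital human.
# All names are illustrative; this is not an actual product API.

@dataclass
class Face:
    mesh: str        # 3D geometry of the head
    skin_maps: list  # albedo, roughness, micro-detail textures

@dataclass
class Voice:
    tts_engine: str  # natural-sounding speech synthesis

@dataclass
class Brain:
    chatbot: str     # the AI deciding what the avatar says

@dataclass
class Motion:
    rig: str         # gestures, expressions, nonverbal cues

@dataclass
class DigitalHuman:
    face: Face
    voice: Voice
    brain: Brain
    motion: Motion
```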
We’re working on the appearance. To nail it, you must really know what a human looks like, so studying faces has become part of my daily life. What makes digital humans look fake is actually the skin: it reads as a bit plastic, like a very thin, oily film, because it doesn’t have the surface fidelity of real skin, which is bumpy and irregular. When you zoom in with a microscope, you can see “mountains” and “craters”, which is not really appealing (laughs).
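To make that micro-detail point concrete: a perfectly smooth surface shades like plastic, while small, irregular bumps give a renderer something to catch light on. Here is a minimal, hypothetical sketch, assuming NumPy and using simple random noise as a stand-in for real pore structure; this is not Reblika’s pipeline.

```python
import numpy as np

def roughen_bump_map(height: np.ndarray, pore_strength: float = 0.02,
                     seed: int = 0) -> np.ndarray:
    """Add high-frequency 'pore' noise to a smooth height map.

    Perfectly smooth height maps shade like plastic; adding small,
    irregular bumps gives the renderer micro-detail to catch light.
    Gaussian noise is a toy stand-in for real skin micro-structure.
    """
    rng = np.random.default_rng(seed)
    pores = rng.normal(0.0, 1.0, height.shape)
    return height + pore_strength * pores

# Usage: start from a flat patch of "skin" and roughen it.
flat_patch = np.zeros((512, 512))
bumpy_patch = roughen_bump_map(flat_patch)
```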
That’s what’s missing in the visual representation of virtual humans. And even when we get it “right” on our end, once the avatar starts to talk and move, it still looks like a robot. So the uncanny valley is definitely not something we can solve all by ourselves.
I see. And what makes digital humans so attractive to brands already, despite current limitations?
Brands want a piece of the metaverse pie. They’re interested in having their ambassador be a 3D character, and we’re building that for them. It’s the new stage of influencer marketing. A real influencer can get sick, or cause scandals that damage a brand’s reputation, while with virtual influencers, brands get full control. Miquela has become the gold standard for that.
In this world, it’s not really about how someone looks; it’s about the story behind them.
So even though I can create the most beautiful, realistic digital human, if there’s no narrative for the character, it’s kind of useless: no one will watch it.
And if we go back to the more “gamified” virtual environments, which are the more traditional ‘habitat’ for avatars, what are the key technological challenges you observe?
These days you hear a lot of companies, like Nvidia and Microsoft, announcing their 3D avatar creation systems, but there’s no unified system that would allow for interoperability, where these avatars could be moved across the metaverse. For now, everyone is mostly working in their own closed ecosystem, whereas when we talk about the metaverse, it’s implied that it should be open. We’ve got to break that open, so we can exchange formats and models freely. IP rights are, of course, a huge factor currently stopping us.
Right now there’s a clear split: on the one side, you have the hyper-realistic avatars like ours; on the other, the more stylized versions like those of Genies or Tafi. I hope we can unify them so that people don’t have to choose just one.
When you go into the metaverse, you need a representation of yourself. You might want it to look identical to what you are today, or to become someone else. In the metaverse, it will be crucial to have a unified system that fully reflects the diversity of humans in the world.
What made you specialize in hyper-realistic avatars specifically? Do you find they are more ‘relatable’ for people?
I was intrigued by the complexity — technically, they are very hard to achieve. The second reason is the uncanny valley.
I believe that once we cross the uncanny valley, people will start to feel affection for virtual humans.
It’s about creating empathy. When we have conversations with people, we unconsciously mimic each other’s behaviour. Avatars should be able to do that. We want to get to the point where one cannot visually tell the difference between the real and the fake human. Until then, people will not buy into this. But I do think that in the future we’ll rely on our virtual selves a lot. To do Zoom calls like this one, to begin with, with a virtual version of me controlled remotely.
And what are the 3D technologies needed to make it happen? How does it work currently?
There are currently different ways of doing 3D. I believe that eventually, we’ll be able to encode what makes us who we are in what I call the “digital DNA”, which could be used in the future to really power the metaverse and allow us to set up our digital identity.
Digital DNA is a 3D model, but not one based on a scan. Scans use too much (meta)data; it’s a frame-by-frame reconstruction, and a lot of companies are experimenting with that, but it’s hardly scalable. The digital DNA approach is more procedural: it deconstructs a face into shape, color, different layers of skin, and other details, like a shiny forehead. These are the properties that describe you as a human, and they should be encoded into a system that allows us to recreate ourselves with really high fidelity.
Our challenge is to build a technology that generates an avatar from a single selfie, without needing a 3D scanning booth. To do that, we’re building a database of faces that reflects the full variety of eyes, noses, and so on, and then using that to deconstruct your face.
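As a rough illustration of how such a procedural encoding and a selfie-driven pipeline might fit together, here is a hypothetical sketch in Python. The parameter names and the nearest-neighbour matching are assumptions made for illustration; a real system would fit the parameters with a learned model rather than picking the closest stored face.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalDNA:
    """Hypothetical procedural face encoding: parameters, not a scan."""
    shape_params: list                        # coefficients of some face-shape basis
    color_params: list                        # skin tone and complexion
    skin_layers: dict = field(default_factory=dict)  # per-layer skin properties
    details: dict = field(default_factory=dict)      # e.g. {"forehead_shininess": 0.7}

def dna_from_selfie(selfie_features: list, database: list) -> DigitalDNA:
    """Toy stand-in for selfie-to-avatar: pick the closest stored face.

    Nearest-neighbour matching is used here only to show the idea of
    deconstructing a face against a database of known faces.
    """
    def distance(dna: DigitalDNA) -> float:
        return sum((a - b) ** 2
                   for a, b in zip(dna.shape_params, selfie_features))
    return min(database, key=distance)

# Usage sketch: two stored faces; the selfie is closer to the second.
db = [DigitalDNA([0.1, 0.9], [0.5]), DigitalDNA([0.8, 0.2], [0.4])]
match = dna_from_selfie([0.7, 0.3], db)
```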
There are quite a few similarities between people, actually. Although we all want to be a unique snowflake, we’re someone else’s children at the end of the day. So there must be a DNA tree we can recreate, something we would love to uncover through our research.
Does that mean our personality will be captured, too? Already today, with the data we generate through our social media usage, the tech companies can discern intimate things like our sexual orientation and phobias. I wonder how far we can push in that direction.
I thought a lot about that myself, actually. We as humans are constantly evolving in terms of how we represent ourselves visually. It all started with cave drawings. Then we invented photography and made movies. It keeps getting more realistic.
There’s this incessant craving to achieve a better representation of ourselves. Maybe it has to do with the fact that, once we are able to clone ourselves, we basically become our own god.
Cloning means living forever and surviving as a species — which is an important thing for humans. So, creating a virtual version of yourself allows you to maybe have eternal life.
That could mean a chance for the metaverse to open up a second economy, where we have virtual humans that are not controlled by us, but share our characteristics and actually make money for us!
This would allow real humans to really do whatever they want. It’s basically like universal basic income through a second economy. I’m really curious to see whether that will happen. A lot of jobs will be lost to automation anyway, so it’s just the logical next step, I think.
The metaverse could improve lives in that sense, for sure. At the same time, do you think the metaverse should be viewed as a way to compensate for the shortcomings of our current system, as Mark Zuckerberg suggested at some point?
Yeah, it’s not that simple. That’s the reason why I don’t believe in VR at the moment. Until the tech gets to the point where it looks like the Matrix or Ready Player One style we see in movies, it’s just not going to have that impact.
Before then, all this technology is more like a tool to do the research I mentioned before, which is needed, because we as humans don’t actually know enough about ourselves. We learn about all sorts of things at school, but not really about that. We need a more holistic understanding that goes beyond just the anatomy or the psychology; we tend to look at these separately. I’m also very much concerned with inclusivity: no one should be excluded from the list of those benefiting from the metaverse.
Guest bio:
Mao Lin Liao is a digital artist with over 20 years of experience in creating hyper-realistic portraits of existing and fictional people. He’s the Founder & CEO of REBLIKA, which specializes in crafting high-fidelity avatars for brands in the metaverse.
—
About the series: 🌐 1 Min in the Metaverse 🌐 is a LinkedIn original that aims to explore the metaverse through the eyes of those building it! Each interview comes with a 1-min sneak peek of key ideas as well as a full-length long read.
💡 To learn more about Powder, visit our LinkedIn and website.
🗞 To keep up with the latest articles and news, sign up for the Powder Newsletter.