22.09.2021 — digital humans

The Rise of Digital Humans

As AI becomes increasingly common in our lives, how long before we are interacting with virtual people every day?

Tom Faber, Head of Content

When he first appeared in 1985, the world had never seen anything quite like Max Headroom. It wasn’t just the sparkling blue eyes or a chin like an anvil that made this music television host unique—it was the fact that he was billed as the first ever computer-generated screen personality. Headroom captivated audiences immediately with his strange glitches and acidic wit, leading to talk show appearances, advertisements for Coca-Cola and even a spot on Sesame Street. Yet behind this brash exterior was a secret—he wasn’t CGI or powered by AI at all. Headroom was played by a real actor, filmed and given a virtual sheen using practical effects and animation.

Since those early attempts, we have become better at creating authentic digital humans. Over the past decade Hollywood has increasingly augmented characters with CGI or created them entirely from scratch in a computer, no actor needed. The technology grows more convincing every year. Why are so many resources poured into this alchemical pursuit? The payoff is huge—the more real virtual characters look, the more viewers will empathise with them. But we have not yet achieved photorealism—humans are excellent at detecting when something isn’t a real person. If a single detail is wrong, we smell a rat.

Digital supermodel Shudu in a virtual photoshoot for British GQ magazine.

Epic Games, creator of Fortnite and the popular 3D design software Unreal Engine, recently announced a product which promises a significant step forward in the field of creating authentic humans out of polygons. MetaHuman is a sophisticated character-creation tool similar to those found in role-playing video games, with customisation granular down to the visibility of eyeball veins or the amount of plaque on a tooth. It isn’t revolutionary because of its realism, but rather its speed and accessibility. While a digital studio might take weeks, MetaHuman can create a great-looking character in minutes, with no coding knowledge or expensive hardware required. It democratises the creation of a convincing digital human.

One industry that might benefit from this is the growing pool of companies which provide digital humans for use by corporations, generally as visual interfaces for customer services. Already companies like Amelia, ObEN, Soul Machines and Uneeq are offering the creation of avatars for professional use, although none look as good as a MetaHuman, and their speech sounds less natural than digital assistants on smartphones. It’s likely only a matter of time before virtual human faces are created to go along with the voices of Alexa and Siri, meaning they could give us emotional cues through facial expressions.

“We are already used to editing our faces with filters and apps like Facetune—the borders around what it means to be a ‘real human’ online are rapidly dissolving”

These are not the first projects to use such technology for commercial ends. Look at virtual influencers, fictional characters used for advertising on social media, particularly in the fashion and gaming worlds. The biggest success is Lil Miquela, created by creative agency Brud. The virtual figure described online as a “19-year-old Robot living in LA” has accrued 3m followers on Instagram; partnerships with Prada, Nike and Samsung; and even “kissed” Bella Hadid in a Calvin Klein ad. Other notable successes are Shudu, the world’s first digital supermodel, who fronted a Balmain campaign in 2018, and Seraphine, a pink-haired musician launched by Riot Games to introduce a new character in their hit game League of Legends. As the influencer industry grows, it is logical that companies will turn to virtual stars who can be carefully controlled in ways that messy real humans cannot.

Then there are VTubers, a phenomenon that blends digital humans with flesh and blood. A VTuber is a human wearing a digital mask—on YouTube or Twitch you see a CGI character or anime face, but behind the camera is a real person whose facial expressions and gestures are mirrored by the digital model. This technique grants streamers, often targets of harassment, the safety net of anonymity, while adding novelty to the world of live streaming. VTubers play games and chat with fans, sometimes making significant profits via donations and subscriptions on video platforms. Today there is even a Japanese talent agency, Hololive, which represents a stable of VTubers.

Twitch streamer Pokimane in VTuber form.

While some viewers may find watching a digital face less engaging than a real human, these figures have a proven appeal with younger audiences: 47% of YouTube users said they are interested in watching content by virtual creators. These connections are an example of what psychologists call a “parasocial relationship”, in which one person invests energy and emotion in a relationship with another who is unaware of their existence. This is how we empathise with characters in films and video games, and it is how a quarter of Gen Z and millennials can say they would describe virtual influencers like Lil Miquela as “authentic”. We are already used to editing our faces with AR filters on Snapchat and Instagram and apps like Facetune—the borders around what it means to be a “real human” online are rapidly dissolving. With these social changes in mind, we believe audiences can have intimate, meaningful experiences with the artists hosted in our digital music experiences.

In a 2019 TED Talk about the rise of digital humans, technologist Doug Roble said: “We’re on the cusp of being able to interact with digital humans that are strikingly real, whether they’re being controlled by a person or a machine.” The operative word here is “cusp”—most of these models still fall somewhere in the uncanny valley. While a MetaHuman may look almost realistic as a static model, it’s hard to make its movement authentic. Even the most advanced AI cannot think creatively, cannot empathise, cannot speak like a human. But it’s likely just a matter of time until we’re interacting with some form of digital human every day, as unthinkingly as we might use a telephone banking app. “Communicating with computers will be like talking to a friend,” said Roble, without a trace of unease. It may seem ominous to some, but it’s looking increasingly inevitable.

Image Credit: MetaHuman, Epic Games / Shudu and Dagny, GQ / Pokimane VTuber model from Twitter
