A highlight of last night's MTV Video Music Awards (VMAs) was Eminem performing alongside his late-90s alter ego, Slim Shady. After initially entering the stage flanked by an entourage of less convincing impersonators, the real Marshall Mathers (now bearded) was joined by what appeared to be a younger version of himself straight from 1999.
Of course, this was not the real Slim Shady but an AI-powered digital recreation, and one of the best deepfakes we've seen executed in front of a live audience. The performance was a perfect demonstration of the work done by a team that scooped the night's VMA for best VFX.
Metaphysic uses an AI-powered facial recreation workflow to create digital characters. It worked with Eminem to bring Slim Shady back to life for the Houdini music video, which won the team, alongside Synapse VP and director Rich Lee, the VMA for best visual effects.
Seeing the technology in a live setting was even more impressive. For the performance at the UBS Arena in New York, a stand-in acted out Slim Shady's dance moves, while Metaphysic applied its face swap process in real time, in-camera for broadcast viewers and on a large screen for the live audience. The result was a believable recreation of Slim Shady that stands up to close scrutiny.
The team stressed that the success of its work depends not just on the technology but also on the collaboration between the AI double and the human performers. Convincing performances allowed the AI models to capture and enhance the nuances of Slim Shady's look and behaviour.
In the Houdini video, which culminates in a rooftop standoff where the two characters merge into a unified Eminem, Marshall Mathers played both himself and Slim Shady, and Metaphysic then used AI to synthesise the latter's look. To achieve the high fidelity required, the team trained AI models on footage from Slim Shady's late-90s prime to recreate the young Slim Shady, complete with signature bleached hair. The work took just three weeks to complete, showcasing the huge potential of AI for visual effects.
For more AI news, see this week's Adobe Firefly AI video reveal, which finally gave us a glimpse of what Adobe has planned for generative text-to-video in Premiere Pro and After Effects.