At the Edge of the Metaverse - Research & Development

At the Edge of the Metaverse: Live Body and Facial Motion Capture for LED Wall Virtual Production, with Rendering of High Quality Digital Characters in Real-time 


A research project and case study in virtual production by Final Pixel


Behind the scenes

Disguise export from stage - real-time mocap & render

Real-time mocap and character interaction - dancing

Real-time mocap and character interaction - dance outtake

Real-time mocap and character interaction - outtakes for fun

Real-time mocap and character interaction - more outtakes


White Paper - Text only

At the Edge of the Metaverse: Live Body and Facial Motion Capture for LED Wall Virtual Production, with Rendering of High Quality Digital Characters in Real-time 

A research project and case study in virtual production by Final Pixel

In partnership with:


Technology by:


THANK YOU TO ALL CAST AND CREW WHO WORKED SO HARD IN PRE-PRODUCTION, ON-SET AND ACROSS MULTIPLE COUNTRIES TO MAKE THIS SHOOT HAPPEN. WE ARE MAKING HISTORY TOGETHER.

Prepared by: Michael McKenna, CEO & Director of Virtual Production, Final Pixel. October 2021

Abstract

Throughout 2021 Final Pixel has been honing their LED wall virtual production workflow and pipeline through experience on multiple large scale commercial shoots for a range of clients - including ABC, Discovery and Shutterstock. The end-to-end nature of their production approach has meant they have been involved from initial concept to final post on many large scale virtual productions, and have developed a reliable workflow for shooting using high-end LED walls and cameras. 

Throughout this period their virtual art department (VAD) has been developing technologies that allow them to incorporate digital animated humans or creatures into their virtual production shoots. For example, they have experimented with Epic's MetaHumans and pre-recorded motion-capture animations as background digital humans on various shoots for Discovery. Ultimately, these characters never made the final cut. In Final Pixel's assessment, this typically stems from two key factors:

  1. The realistic and believable look of the character
  2. Realistic movement of the character

As a company specialising in virtual production for film, TV and advertising, they are excited by the opportunities that working in real-time game engines can provide for the creative process when everything can be captured in-camera while shooting live-action. They've already seen how virtual production moves elements of a traditional VFX pipeline - green screen work and compositing - into pre-production, or even performs them 'live' on stage, like a show. The next evolution of this technology is to look at the elements which are still considered too heavy or complex to move out of the post-production workflow. Creature work is a big area for this, and also extremely important for storytelling. Having live interactions between real-life actors and creature or character animation cements the creative process.

For this reason, they have been watching closely the development of real-time motion capture, which in recent years has reached new heights of fidelity, control and quality. So in Autumn 2021, they embarked on a research project, a ‘proof of concept’ to incorporate live-action motion capture of a detailed creature animation into their current successful virtual production workflow.

  

Objectives

Final Pixel sought to answer the following key questions:

  1. Could they create a real-time computer-generated character of comparable quality to that used in feature films and high-end TV series, and run it in real-time in Unreal Engine?
  2. Could they then render that creature on an LED virtual production stage?
  3. Could they give that creature real-looking actions driven from a choreographed move designed by a Director?
  4. Could they combine body and facial capture on the same creature using different mocap inputs?
  5. Could those movements be delivered successfully to the LED wall in real-time through the use of live motion capture data?
  6. Could they create believable interactions between a live-action 'real world' actor and a creature animation on the LED wall?
  7. Could they successfully incorporate this into their current workflow and pipeline, importantly including Disguise and cluster rendering?

Other secondary R&D objectives:

● How practical is the use of steadicams for virtual production of this nature?

● What is the best pipeline for rigging characters for Unreal?

● Could they emulate some previous OSVP issues with multi-user editing to allow for further testing in a controlled environment?

● Could they enhance their capabilities using DMX controlled lighting from the real world lighting desk into Unreal?

Method

  1. Environment design and concept art
  2. Environment creation - virtual art department
  3. Creature pipeline - virtual art department
  4. Script, storyboard & choreography
  5. Virtual production stage workflow
  6. Motion capture workflow
  7. Production
  8. Camera, lights and other
  9. Post

  

Results

Final Pixel successfully achieved live body and facial motion capture streamed to Unreal and played through Disguise, using cluster rendering to render a high quality bespoke 3D character built using a traditional CG pipeline with an extremely high level of detail. They were able to create real-time interactions between the characters in-camera with no noticeable latency for the viewer.

Key Conclusions & Next Steps

The live body and facial mocap performance and composite were extremely smooth with minimal latency. So from the perspective of what they set out to achieve, this test was a huge success. It was designed to throw challenges at the systems and break things, which it did - and they now know what to do next time to improve - the very essence of doing R&D. 

Potential uses of this approach are many and significant:

● The most obvious is in the virtual production pipeline for film, TV and advertising, like at Final Pixel, allowing for live interactions between digital and human characters, all filmed in real-time and in-camera.

● Live-action mocap with creatures and characters which can then be replaced by full-scale CG in post - thereby capturing more 'natural' actor reactions and engagement versus use of a green screen.

● Create increased fidelity augmented reality plates using the enhanced functionality of Disguise as a stage management tool, in particular for live broadcasts.

To put this innovation in industry context, the recent advances in NVIDIA graphics processing with the A6000-driven render nodes played a huge part in making this possible now - it would certainly not have been possible to the same extent this time last year. What is most promising is that the issues in the key findings from this shoot are known issues relating more generally to all virtual productions using this workflow.

The next challenge to address with this approach is the successful direction of the mocap actors given they are unable to see their performance other than through a monitor. Future tests may employ a VR headset or some other immersive means to put the mocap artist in a world where they also can interact with the ‘real world’ actor smoothly.

Further research could be centred around the field of augmented reality and doing more to bring the digital characters into the foreground.

A brief behind the scenes video summary of the shoot can be found here.

Case Study

Contents

  1. Environment design and concept art
  2. Environment creation - virtual art department
  3. Creature pipeline - virtual art department
  4. Script, storyboard and choreography
  5. Virtual production stage workflow
  6. Motion capture workflow
  7. Production
  8. Camera, lights and other
  9. Post
  10. Results and discussion
  11. Conclusions and next steps
  12. Credits

  

  1. Environment Design & Concept Art

The initial concept and idea came from Final Pixel to create a simple scene that could lend itself to filming for virtual production so the team could focus the resources of a fairly limited budget on the technical aspects of the shoot. 

From Final Pixel’s experience in the past 12 months, low-key, darker scenes often allow for greater opportunities to hide the seam between the LED wall and foreground art. A dark scene also fitted the idea of having a monster/creature emerge from the shadows. It would also provide some leeway on rendering power. Doing it this way provides the team with a good platform to start from and then build, rather than introducing too many additional complexities to the production. Having a car and a wet down location also allows for good reflections from the LED wall to help tie the virtual and physical worlds together more.

From this initial concept discussion, the Production Designer (Francesca de Bassa) then set out a series of mood boards:


These were then translated into a designed scene which she digitally sketched out.


These sketches and mood boards were then handed over to the Final Pixel Virtual Art Department, led by VFX Supervisor (Steve Hubbard) to begin building a scene in Unreal which could run in real-time for virtual production.

Around this time the key communications channels were established between the Production Designer, Director of VP, VFX Supervisor and the VAD. This demo was also Final Pixel’s first ‘pilot’ run using popular VFX pipeline management tool ‘Shotgrid’ to control the development of assets and for project management.
 

Communication means:

Gdrive: The G-Suite of tools allows Final Pixel to create more in-depth documents and provides ease of sharing; at this point it was used for sharing a Google Slides presentation of mood boards between the team in a controlled manner.

Discord: Discord describes itself as: "Your place to talk and hang out. Discord is the easiest way to talk over voice, video, and text. Talk, chat, hang out, and stay close with your friends and communities."

At Final Pixel, given the global, and remote nature of the teams, they often use Discord for exactly these purposes. It also initially provided a space for environment build creation and notes management, but as the company has grown they have needed a more controlled space for this (see Shotgrid below).

● Used for general communications about the project, the team started this on 1st September 2021 (approx seven weeks out from shoot day)

● The team created a server for the project, split into categories/channels based on different workloads to manage separate conversation threads, e.g. pre-vis, support, design, creature, etc.

Shotgrid: Shotgrid describes itself as: "ShotGrid makes project management and pipeline tools that help creative studios track, schedule, review, collaborate, and manage their digital assets."

This is exactly what Final Pixel has begun using it for in their virtual production pipeline. This is an in-depth area, probably beyond the scope of this case study. The setup essentially mirrors their asset creation process. Their Shotgrid project has been custom-built for a virtual production workflow, which essentially strips out a lot of the unnecessary features included in a typical CG pipeline and changes naming and hierarchy. Some screenshots are below:

Screenshot of project overview:


Screenshot of environment build process overview:

2. Environment Creation - Virtual Art Department

The Environment was created in Unreal Engine v4.27.1. Final Pixel selected the latest version of Unreal available at the time of build to fully test it in their typical VP workflow. In the end, there were no issues with this version and it was completely stable for their typical setup.

There was just one environment for this shoot, to keep things simple. It was a parking lot.

Setup

Over the past couple of months, Final Pixel has been working on (and constantly updating) a blank project source that they use as a template for each of their projects. They pre-set a lot of elements to help kick off the project in a faster and easier way. 

This template is a blank project which includes two key things:

● They updated the project settings based on their preferences. These settings include enabling ray tracing, GPU Lightmass and other smaller tweaks. Some settings in Unreal force the program to restart and compile shaders - which usually takes minutes. The main reason behind using a source template is to avoid this recompiling and to save time by pre-setting the features they need for virtual production.

● They created a start level. It includes all the actors they usually need for an environment. They also pre-set the post-processing volume to a certain extent to make it easier for the artists to kick off the creation process. This enabled them to save some time and energy at the very beginning. 

When all was set, the project source (the template) was pushed to Perforce. From that point, anyone with access to the Perforce server could access the project source.

Perforce

Perforce is Final Pixel’s selected source control tool. It allows them to seamlessly share their Unreal project across multiple computers across the globe so that their international Virtual Art Department can work in the same environment at the same time - massively increasing the productivity and efficiency of their pipeline.

Each user has access to the depot on the cloud and can then ‘push’ changes to the project directly through an interface in Unreal Engine. They can also ‘pull’ other changes from artists from the depot at any time. The source control software prevents any assets from being worked on at the same time through a checkout/check-in process, also entirely integrated and managed within Unreal Engine.
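
As an illustration of the checkout/check-in cycle described above, the sketch below drives the standard p4 command-line client from Python. The depot paths, asset name and changelist description are hypothetical examples rather than Final Pixel's actual setup; in practice the same operations are performed through the source control panel inside Unreal Engine.

```python
# Minimal sketch of the Perforce sync / checkout / submit cycle, driven through
# the standard p4 command-line client. Depot path, asset name and changelist
# description below are hypothetical placeholders.
import subprocess

def p4(*args: str) -> str:
    """Run a p4 command and return its stdout, raising if the command fails."""
    result = subprocess.run(["p4", *args], capture_output=True, text=True, check=True)
    return result.stdout

# 1. Pull the latest revisions of the project from the depot.
print(p4("sync", "//FinalPixel/CarParkDemo/...#head"))

# 2. Check out an asset so that no other artist can edit it at the same time.
print(p4("edit", "//FinalPixel/CarParkDemo/Content/Creature/Fluffy_Groom.uasset"))

# 3. ...make changes in Unreal or the DCC tools...

# 4. Submit ("push") the change back to the depot for the rest of the team.
print(p4("submit", "-d", "Groom iteration: reduced strand count for performance"))
```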

Perforce via Assembla is the primary way Final Pixel transfers data associated with Unreal Engine (UE) video game projects. 

Final Pixel uses Perforce as a tool for:

● Artists in different remote locations to collaborate and work simultaneously on UE projects

● Distributing UE projects to off-set machines (for artists in remote locations)

● Distributing UE projects across on-set machines, which need a synchronised copy of the project

Their typical current setup looks something like this:


Build

With a solid set of concept art as a basis, it was a relatively easy job to build in Unreal from an artistic perspective. They used mainly stock assets (from Marketplace and Turbosquid) to speed up the environment creation process - although they had more time than usual, they wanted to focus their resources on Unreal-related tasks instead of modelling custom assets.

Final Pixel’s VAD had several passes until they reached the final look. First, they started with picking the necessary stock assets (along with starting to model some of the custom assets). Then they blocked out the scene: they made a rough grey box environment that included all the defining elements: building, cars, phonebooth, trees, poles, lamps and a basic light setup to establish the atmosphere. 

After that, they started detailing the environment by putting in all the assets. Once they had finished with it and had their first draft version, they moved on to refining the parking lot. It took 20 rounds to reach the final look, mainly because they were constantly pushing the boundaries of the fog effect to make it work the way they wanted to.


Screenshot of the final optimised model build running at 140 fps at 2,560x1,440 on an RTX 3080

There were three things to keep in mind while building the environment:

  1. Design: To nail the final look from an artistic standpoint.
  2. Look on stage: It was not enough if it looked good on a standard monitor. It also had to look great when pushed to the LED wall. These two displays work differently, so what looks good on the monitor might not look good on the LED screen.
  3. Optimization: Performance optimization usually plays an important role in every virtual production project, so they had to make each component run relatively cheaply.

Towards the end of the project, performance optimization played an even more important role than usual. Although the team had super high-end computers at their disposal, they knew the creature, especially the fur (more to come on this in later chapters) would take up the majority of the computing power. So they had to optimize the parking lot as much as possible.

During the optimization, it was not enough to get just the GPU related tasks right - it was essential to reduce draw calls (that's an important rule when it comes to performance optimization). Because of the marketplace assets (that contained relatively complex blueprints), they ended up having more than 5,000 draw calls - which is usually too much for virtual production projects. So their artists had to modify and simplify all the assets they had in their scene. After hours of work, draw calls were lowered to around 2,300 - it gave an instant boost for the GPU thread and the overall frame rate as well. After that, they could further tweak and finetune all GPU related calculations and in the end, they tripled the initial frame rate. And by optimizing the parking lot scene, they managed to give some more headroom for the creature performance-wise.
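
As a rough illustration of how this draw-call pressure can be surveyed, the sketch below uses the Unreal editor's Python scripting (the stock unreal module, with the Editor Scripting Utilities plugin enabled as in a typical UE 4.27 setup) to total material slots on static mesh components, each of which costs at least one draw call per pass. It is an illustrative sketch rather than the tool Final Pixel used, and the API calls should be checked against the engine version in hand.

```python
# Editor-side sketch: estimate per-actor draw-call pressure by counting material
# slots on static mesh components (each slot is at least one draw call per pass).
# Intended for the UE 4.27 editor Python console; treat the calls as a sketch.
import unreal

totals = {}
for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    count = 0
    for comp in actor.get_components_by_class(unreal.StaticMeshComponent):
        count += comp.get_num_materials()   # one slot ~= one draw call per pass
    if count:
        totals[actor.get_name()] = count

# Print the twenty heaviest actors, then the scene total.
for name, calls in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    unreal.log(f"{name}: ~{calls} draw calls")
unreal.log(f"Scene total (static meshes only): ~{sum(totals.values())}")
```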

Blueprints/Functionality added to the environment

Final Pixel’s OSVP technical artists worked on some effects in the scene to enhance the storytelling and put the game engine to use. 

● Street light spotlight: They attempted to create a lighting mode where the street lights would turn and point at the monster based on DMX. However, this failed for a few reasons. 

○ Having all of the lights movable tanked the scene's frame rate.

○ There was a problem getting a reference to the character.

● DMX lighting: Final Pixel intended to control lights using DMX so that lights within the virtual scene could be controlled alongside real-world lights, and from the same DMX hardware desk. Final Pixel had preliminary discussions about how that might work - both from within the Unreal project (what blueprints might look like, for example) and from the DMX hardware desk end (which Art-Net universe and channels these things would operate on, what fixtures would need creating within the DMX console, etc.). In the end, the moveable lights they had needed to be removed for optimisation and so this plan was parked (an illustrative sketch of the intended Art-Net traffic follows below).
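
Although the plan was parked, the sketch below illustrates the kind of traffic involved: a lighting desk (or this stand-in script) broadcasts Art-Net "ArtDmx" packets over UDP, and Unreal's DMX plugin can be patched to listen on the same universe and drive fixtures in the virtual scene. The universe and channel assignments are hypothetical, and the packet layout follows the public Art-Net ArtDmx format; treat it as a sketch.

```python
# Stand-in for a DMX desk: broadcast a minimal Art-Net "ArtDmx" frame over UDP.
# Universe and channel assignments are hypothetical examples.
import socket

def artdmx_packet(universe: int, channels: bytes, sequence: int = 0) -> bytes:
    """Build a minimal Art-Net ArtDmx packet for a single universe."""
    data = channels.ljust(2, b"\x00")              # payload must be at least 2 bytes
    return (
        b"Art-Net\x00"                             # protocol ID
        + (0x5000).to_bytes(2, "little")           # OpCode: ArtDmx
        + (14).to_bytes(2, "big")                  # protocol version
        + bytes([sequence, 0])                     # sequence, physical input port
        + universe.to_bytes(2, "little")           # 15-bit port-address (sub-uni, net)
        + len(data).to_bytes(2, "big")             # channel count
        + data
    )

# Channel 1 = spotlight dimmer at full, channel 2 = pan at mid (hypothetical patch).
packet = artdmx_packet(universe=0, channels=bytes([255, 128]))
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(packet, ("255.255.255.255", 6454))     # standard Art-Net UDP port
```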

3. Creature Pipeline - Virtual Art Department

The creature pipeline followed an abridged version of a typical CG pipeline for creature work. The team involved were based mainly in the US across different locations working remotely together in tandem with the UK OSVP team and creative direction.

A. Brief/Concept

B. Sculpt/Model

C. Retopo

D. Texture

E. Rigging

F. Groom

G. Final

  1. Brief / Concept

The creature was to be a cross between a werewolf and an ape, with ape-like features of long arms and a hunched body, yet a biped walking on two feet. Some references which were discussed with the creature artist are below. The creative direction came from the Production Designer as well as the VFX Supervisor and Director of Virtual Production.

Going into this, the team knew the fur (the groom) would be the biggest challenge when it came to rendering, so there was some degree of flexibility on the creature front should they need to scale back the number and type of hairs. 

They tried using ‘MIRO’ as a shareable mood board for references:

The character artist also produced his own:


  2. Sculpt/Model

The very early initial ideas were done by Jnr Technical Artist and 3D modeller (Diona Marina):


When Final Pixel’s lead character artist (Judah Kynard) took over, he used this as a starting point and evolved the design:


Notes and feedback were given remotely over Shotgrid/Discord.


Judah explains his approach to the modelling of the hero creature (soon to be nicknamed ‘Fluffy’ for reasons which will become apparent…):

“For a bit of context I come from a creature character background and moved towards realistic human characters as I progressed through my career. That being said I was excited and comfortable to be making a creature again. The sculpting process went smoothly and having a good grasp on anatomy always helps when sculpting creatures.”

  3. Retopo

“My approach to retopology is streamlined. Over the years I‘ve created cage meshes for things like legs, fingers, elbows, knees, torsos and shoulders. This method allows you to quickly have functional geometry in those areas that you trust to deform when animated. From there it’s mainly filling in the mesh and following the cage guidelines.” 


  4. Texture

“I rig the character and texture around the same time. This way I can modify weights on the rig to get less stretching in areas or modify the texture for the same result. It helps me to not get attached to an iteration of texture because it may not work when the rig is finalized.”


“Bringing the character into Unreal Engine and creating materials would have to be the most satisfying part of the entire process. I applied my knowledge for realistic human skin rendering for the material set-up. I also like to add tessellation material and height to the normal map. This allows me to smooth out the edges of the characters and also add extra definition in the normal map that can be lost when baking. I went on to add a hue modifier to change the tonality of the skin in different lighting scenarios.”


  5. Rigging

“The last and most challenging bit was retargeting the rig for use with animation assets. I opted to rename the rig completely for UE4 compliance and ease. The issue I ran into was that since our creature had a pronounced pose, the joints of the animations applied had a set distance that didn't match our rig’s bone length and rotation. This problem was not apparent in the mocap testing since those movements are organic and have no baked path. The main solution to this is to match the character's pose as closely as you can to the mannequin's. There seemed to be a bug when changing the mannequin pose to match the creature pose which doesn't allow it to stay set in the modified position.”


  6. Groom

When doing a groom there are typically two options - cards or curves. In line with the brief, the team opted for the more difficult of the two to push the technology to its limits. Final Pixel's groom expert (Nick Burkard) saw this as a pretty straightforward process: "just rounds of artistic iteration!"

Once complete, they had c. 2 million hairs rendered in real-time.

Groom progress:


Troubleshooting collision (WIP creature) - short video here.

Final Creature


See a sample video of the creature being driven by live body and facial mocap data and recorded via Disguise on-set during filming here 

4. Script, Storyboards & Choreography 

In parallel with the environment creation and creature build, the concept and idea was taken to script before being turned into storyboards. Final Pixel’s Creative Director (Christopher McKenna) came up with the following.

“It should be somebody broken down. We are with him trying to start his car. It doesn’t start. He gets out, opens the hood, then hears a noise… at first we think it is something in the engine… but we realize it’s coming from behind a bush… he starts hurrying to try and get the car started, tension rises. Then the monster appears behind him… he is terrified. He tries to get back in the car, but can’t (he left his keys inside…) THEN the monster sees something (Sticker on window? Something that indicates our hero is a b-boy style dancer…) And the monster challenges him to a dance off (no dialogue of course.) They dance… it looks great. Guy is happy. Then the monster eats him...”

For this piece, the aim was to create a short film of c. 90-120 seconds in length. The Director (Francesca de Bassa) created storyboards to tell the story.


In the end, there was much more here than they could shoot in a day. While these would have made for a tense and engaging sequence, to ensure they were getting enough practice with the live mocap they decided to trim the tension-building section down to as short as they could, and weight more of the day's shooting toward dancing.

This meant they needed a longer choreographed dance, timed suitably for the short.

5. Virtual Production Stage Set-up

The project was shot at Digital Catapult’s Virtual Production Test Stage (VPTS), a joint venture with Target3D. VPTS is a full end-to-end VP studio aimed at research, development, access and skills for SMEs, practitioners and startups. VPTS is part of StudiosUK, a cluster of interconnected research facilities leveraging 5G and future networks capability to enable the exploration of convergent production technologies - such as the use of high density LED walls, motion-capture, real-time rendering engines and new forms of real-world capture. 

Full details of the stage technical specifications are below:

LED WALL

Leyard CLI2.6 LED tiles - 18 columns x 7 rows

Pixel pitch 2.6mm

3456 x 1344px @ 50Hz

9m x 3.5m flat wall

LED wall is calibrated to Rec709 colour space

2x Colorlight Z6 gen 2 controllers, controlling left and right side of the stage

Laptop with control software connected via USB to the Z6

Input signal is HDMI 2.0, 3840x2160p50, 10 bit, YCbCr 4:2:2, Limited Range from the Disguise VX2 media server or one of the UE render nodes

2 frames latency @ 50fps

[In an effort to reduce latency, the LED controllers were newly upgraded to 2x Colorlight Z6 gen2 units. This enabled the LED wall latency to drop from four frames to two frames versus prior shoots at the stage.]
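
As a quick cross-check of the wall specification above (tile pixel count inferred from the quoted resolution and column count, not from a datasheet):

```python
# Sanity check: 18 x 7 tiles at 2.6 mm pixel pitch should reproduce the quoted
# resolution and physical size of the wall.
cols, rows = 18, 7                   # tile grid
tile_px = 3456 // cols               # 192 px per tile edge, inferred from the spec
pitch_mm = 2.6

res_x, res_y = cols * tile_px, rows * tile_px            # 3456 x 1344 px
width_m = res_x * pitch_mm / 1000                        # ~9.0 m
height_m = res_y * pitch_mm / 1000                       # ~3.5 m
print(res_x, res_y, round(width_m, 2), round(height_m, 2))
```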

DISGUISE 

1x Disguise VX2 (driving the LED wall and the Hyperdeck)

1x Disguise RX v1 node (connected to the VX2 via point-to-point 25Gbps fiber link for uncompressed Renderstream quality)

[Note: This standard equipment at the stage was upgraded for this shoot by Final Pixel, who brought in a Disguise rack housing three RX2s with A6000 GPUs, used in place of this setup.]

SOUND

System working at 48kHz sample rate

1x Yamaha QL1 desk 

3x Sennheiser EW300-500 G4 radio receivers

3x Sennheiser SKM300 bodypack + lapel mic - wireless transmitter

3x Sennheiser EW G4 e845 handheld - wireless transmitter

2x Rode NTG2 shotgun mic (truss rig)

1x Intel Mac Mini with DVS license and Apple Logic X for multitrack recording

2x QSC CP8 foldback speakers

CAMERA TRACKING

1 x NCAM camera tracking

1x Optitrack camera tracking

1x Loled Indiemark lens encoder

1x Tilta Nucleus-M FIZ unit

VIDEO CONTROL

System works in the following configurations:

> Control genlocked at 50fps, camera genlocked at 50fps

> Control genlocked at 50fps, camera genlocked at 25fps

> Control genlocked at 30fps, camera genlocked at 30fps

1x Evertz 5601 MSC

1x Rosendahl Nanosync HD genlock unit 

2x BMD Smartview 4k 12G display

1x BMD VideoHub 6GSDI matrix

1x BMD Hyperdeck Studio

Input signal is HDMI 2.0, 3840x2160p50, 10 bit, YCbCr 4:2:2, Limited Range from the Disguise VX2 media server

2x Samsung UE55TU7020 - 55in display HDMI 2.0 4k display on Unicol wheeled stand

OPTITRACK

16x Optitrack Prime 17W cameras rigged on truss

OTHER PCs

1x Mocap PC (Optitrack / Motive )

4U Rackmount Chassis

Gigabyte X570 Gaming X AMD AM4 X570 Chipset ATX Motherboard

AMD Ryzen 7 3700X Eight-Core Processor with Wraith Prism RGB LED Cooler

16GB (2x8GB) Corsair Vengeance LPX Black DDR4 3200MHz Memory Modules

Production RTX 3080 Graphics Card

Samsung 970 EVO NVME M.2 (SSD)

Seagate BarraCuda 3TB Desktop 3.5" Hard Drive (HDD)

CORSAIR RMx Series RM750x (2018) 80 PLUS Gold Fully Modular ATX Power Supply

Novatech 300Mbps 802.11n Wireless-N PCIe Adapter

Windows 10 Professional

1x Unreal Editor Workstation

4U Rackmount Chassis

Gigabyte X570 Gaming X AMD AM4 X570 Chipset ATX Motherboard

AMD Ryzen 7 3700X Eight-Core Processor with Wraith Prism RGB LED Cooler

128GB (4x 32GB) Corsair Vengeance LPX Black 32GB DDR4 3000MHz Memory

Production RTX 3080 Graphics Card

2x Samsung PM983 Solid state drive 1.92 TB M.2 PCI Express 3.0 x4

CORSAIR RMx Series RM750x (2018) 80 PLUS Gold Fully Modular ATX PSU

Novatech 300Mbps 802.11n Wireless-N PCIe Adapter

Windows 10 Professional

[Final Pixel also upgraded the setup with two edit workstations for rapid light builds in Unreal on set, one running 2x 3090 GPUs, the other 1x 3090 GPU]

NETWORK

CONTROL NETWORK - 10.20.30.x

1Gbps

DHCP server

OSC control via Companion

Internet

Remote control of all PCs in the studio via TeamViewer

Connected to studio wifi

RENDERSTREAM NETWORK - 5.10.15.x

25Gbps Fiber network

Point to point link between the RX node and the VX2 Disguise machines.

MOCAP NETWORK - 192.168.137.x

1Gbps

Network with all Optitrack cameras streaming to Motive PC

DANTE NETWORK - 169.254.112.x

1Gbps

Connected to the QL1 desk, Mac mini (Logic) and Disguise VX2

NDISPLAY / TRACKING NETWORK - 20.30.40.x

Streaming of tracking data from Optitrack and nCam

Livelink data streaming

nDisplay cluster communication

Dedicated wifi access point for local data streaming 


6. Motion Capture Workflow

The key objective with the motion capture was to stream the highest quality possible live movements from an actor on stage. For this, the team entrusted Target3D and their years of experience in motion capture to take the lead on the mocap workflow. The Virtual Production Producer from Target3D (Dan Munslow) led the team, while Final Pixel provided 3D / Unreal expertise and software development skills to help troubleshoot some of the thornier data issues and establish the workflow into their typical VP setup. Together it was an exceptional team and the collaboration on set to drive innovation was a real joy to behold.

Overall the goal was to run both body mocap data and facial mocap data live through the system. This was done using Live Link:

Live link

Live Link setup is generally quite fiddly; however, the team got it working. From an Unreal point of view, there were specific challenges in animating a creature off of human mocap. The body worked surprisingly well, but the team was very limited in the face by the number of bones in the original rig. In future, it would be worth adding extra bones to anything modelled for things like the eyebrows and the ears. Even if they do not map directly to the human face, the team would be able to use them to create new mapped expressions.

The first issue to crack here was using the UE Livelink plugin in combination with Disguise. It was initially thought that the Livelink plugin for Unreal - whose purpose is to receive and process mocap data, and which would be crucial for the shoot - was fundamentally incompatible with Disguise. Livelink appeared to work fine within the standard UE editor, but stopped working when UE was running via Disguise. There turned out to be two key underlying causes:

● When running via Disguise in ‘standalone’ mode, Unreal must be directed to the appropriate network adapter on which to listen for the mocap data. It’s possible to do this by supplying custom command-line arguments to Unreal via Disguise, though it later turned out that Disguise offers a simpler and more reliable mechanism through its own interface to direct Unreal instances to specific network adapters (which presumably uses the same command-line mechanism under the hood). Using the Disguise interface for selecting these target networks is the solution the team settled on.

● When running in ‘standalone’ mode on certain machines, the Livelink plugin failed to load altogether. If a project contains animation blueprints that use nodes drawn from the Livelink plugin, on occasion these blueprints would attempt to load on project startup before the Livelink plugin (which is a prerequisite for them to work) had itself loaded. This would lead to a fatal error. The solution was to force the plugin to load earlier in the project startup sequence by manually modifying the LoadingPhase property within the uplugin file. 
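
For illustration, a .uplugin file is plain JSON, so the LoadingPhase workaround described above can be applied with a few lines of Python. The plugin path and the chosen phase ("PreDefault") below are assumptions for the example; the exact module names, phase and path depend on the engine install and should be checked against the project that exhibits the problem.

```python
# Illustrative sketch of the LoadingPhase workaround: edit the plugin's JSON
# descriptor so its runtime modules load earlier in the startup sequence, before
# animation blueprints that reference Live Link nodes are loaded.
# Path and target phase are assumptions for the example.
import json
from pathlib import Path

uplugin = Path(r"C:/Program Files/Epic Games/UE_4.27/Engine/Plugins/Animation/LiveLink/LiveLink.uplugin")

data = json.loads(uplugin.read_text())
for module in data.get("Modules", []):
    if module.get("Type") == "Runtime":
        module["LoadingPhase"] = "PreDefault"    # earlier than the usual "Default"

uplugin.write_text(json.dumps(data, indent="\t"))
```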

For the Mocap Lead (Harry Piercy) and Tech Artists (Dominic Maher, Ed Bennett) the Livelink troubleshooting was difficult and time-consuming. There’s very little help out there on forums etc. for this kind of thing, and though there are lots of people reporting the same or similar problems there’s not much in the way of solutions, and absolutely nothing about the LoadingPhase solution the team finally settled on. Extremely detailed analysis of the logfiles was necessary, and lots of little related pieces needed investigation like:

● How Unreal handles UDP traffic and how to enable it in Standalone mode.

● Standalone mode command-line arguments, how to supply them, and what they can be used for (i.e. directing the engine to customised config files that should be used instead of defaults, supplying console commands to be executed on startup, enabling logging).

● How to pass these command-line arguments when running UE via Disguise.

● Setting up test environments on Final Pixel premises to replicate the stage environment and continue to diagnose.

With these issues resolved, attention turned to the latency of the mocap data. To start with, the delay between mocap actor movement and the onscreen skeleton response was very noticeable. Fabio and Graham from Target3D made adjustments to the network and machines to attenuate that latency to roughly a third of what it had been originally.

To get the data to work like this, the team implemented the standard live link set up:

● Use motive (Optitrack) to capture a performer and solve the skeleton from markers.

● Stream motive skeletal data to MotionBuilder on a local machine.

● Retarget skeletal data to the character rig.

● Stream character rig data from MotionBuilder to the disguise render node running UE4.

● Apply character rig data to the skeleton in Unreal via the live link plugin. Save the live link sources as a preset and direct the project to said preset.

Rehearsal phase (wb 11/10)

The team had assumed that now Livelink was set-up and working within UE, getting the facial mocap data in via the Livelink iOS app would be a cinch! That turned out not to be the case…

There were actually several issues to deal with here:

● Facial mocap via the Livelink app does use Livelink, but it also uses the Apple ARkit plugin. Both need to be enabled.

● Facial mocap does not need configuring in a ‘preset’ in the same way as body mocap data.

● The Livelink plugin produces a console error (of the hard red error type) when used for facial mocap. This is a complete red herring and should be totally ignored, but can be very confusing.

● The iOS app allows the user to configure destination IP addresses that the mocap data should be sent to. It allows more than one IP to be configured – but Unreal must be running on only one of the destination IPs at any one time or unpredictable results will follow. Also, only one instance of Unreal must be running on the destination computer, meaning that ‘Standalone’ mode executed through the Editor will never work, because the background Editor instance is still commandeering the mocap data. This can result in extreme confusion but is of course another red herring as in a live scenario (i.e. running Unreal through Disguise), no Editor will exist and the ‘standalone’ instance will be the only instance running.

The facial data stream was extracted and broken down into individual components to animate relevant aspects of the creature's face, such as mouth movements. The technical mocap side of things was then mostly complete, with the final element being eye control.

Animation Setup: In the animation blueprint the team split the rig at the neck, controlling the body by using the Live Link mocap data directly. For the face, the team extracted the specific data from the iPhone and transferred it to variables. The mouth movement was connected to a blend space that shifted between two preset poses. For the eyes, the team manually edited their rotation based on an alpha. To make the eyes blink they used a dynamic material that simply put a skin texture over the eyes.

They exported the facial mocap data from Livelink as a separate feed and then merged it within the animation blueprint. The face had a blend space for the jaw which would blend between an open and a closed jaw depending on how open the mouth was. The eyes were separately rotated depending on the rotation of the left eye of their mocap artist. Finally, the eyes were opened and closed by changing the material on the face: if the eye was under 50% open, the eye material was changed to be the same as the flesh material.
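
To make the face setup above concrete, here is a plain-Python restatement of the per-frame mapping. The production version lives in the creature's Unreal animation blueprint; the curve names, material names and the use of ARKit-style 0-1 values are illustrative assumptions, while the jaw blend and the 50% blink threshold come from the description above.

```python
# Per-frame face mapping as described above, restated in plain Python for
# illustration. Curve and material names are assumptions; the real logic lives
# in the creature's animation blueprint.
from dataclasses import dataclass

@dataclass
class CreatureFacePose:
    jaw_blend_alpha: float          # 0 = closed-jaw pose, 1 = open-jaw pose
    eye_rotation_deg: tuple         # (pitch, yaw) applied to both creature eyes
    eye_material: str               # "eye" normally, "flesh" to fake a blink

def drive_face(jaw_open: float, eye_blink_left: float,
               look_pitch_deg: float, look_yaw_deg: float) -> CreatureFacePose:
    """Map facial capture values (0..1 curves) onto the creature's face controls."""
    eye_openness = 1.0 - eye_blink_left
    return CreatureFacePose(
        jaw_blend_alpha=max(0.0, min(1.0, jaw_open)),
        eye_rotation_deg=(look_pitch_deg, look_yaw_deg),   # both eyes follow the left eye
        eye_material="flesh" if eye_openness < 0.5 else "eye",
    )

# Example frame: mouth half open, eyes open, looking slightly left.
print(drive_face(jaw_open=0.5, eye_blink_left=0.1, look_pitch_deg=0.0, look_yaw_deg=-8.0))
```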

Rendering: In order to successfully render the creature they used Disguise's 'Cluster Rendering' feature with Unreal. This allowed them to dedicate a render node to a specific object in their scene (the creature), giving the full might of the NVIDIA A6000 to rendering the highly detailed groom and character movement in real-time.

Sound sync: The challenge was that the mocap character needed to dance to music, in sync with a real actor on stage. Graham Keith (Technology Solutions Engineer) led on ensuring they had the right setup. Because of the mocap latency on screen, the system was set up as follows:

● Mocap actor dancing on music on wired headphones (wireless IEMs best for next time), sound feed with no delay.

● Actor dancing in the LED volume, listening via speakers. The sound feed was delayed to make the mocap character stay in sync with the music.

● To set the delay, a metronome was played through the system, and Final Pixel asked the mocap actor to clap in sync with their headphone feed. Delay on the speakers was adjusted to make sure the mocap character on the LED clapped in sync with the PA (a back-of-envelope version of this calculation follows below).
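
As a back-of-envelope illustration of the delay being dialled in here, the sketch below converts an assumed end-to-end mocap latency, expressed in frames at the stage's 50 fps, into the number of milliseconds by which the speaker feed should lag the headphone feed. The per-stage frame counts other than the LED processor's are hypothetical placeholders; on the day the delay was set by ear with the metronome and clap test.

```python
# Back-of-envelope speaker delay: the PA feed to the live actor should lag the
# mocap artist's headphone feed by roughly the total mocap-to-wall latency.
# All per-stage figures except the LED processor (2 frames, from the spec above)
# are hypothetical placeholders.
FPS = 50
FRAME_MS = 1000 / FPS

latency_frames = {
    "optitrack_solve_and_stream": 2,    # hypothetical
    "motionbuilder_retarget": 1,        # hypothetical
    "unreal_render_via_disguise": 2,    # hypothetical
    "led_processor": 2,                 # from the wall specification
}

total_frames = sum(latency_frames.values())
print(f"Delay the speaker feed by ~{total_frames * FRAME_MS:.0f} ms "
      f"({total_frames} frames at {FPS} fps)")
```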

Casting - Motion Capture Performer

Potential casting options, and exactly what type of performer would be appropriate, were explored between the creative team and the Mocap Movement Director (Ryan Mercier). Having a mocap performer whose dimensions were the opposite of the creature's was an interesting way to evaluate the effectiveness of retargeting. Having a smaller-framed performer drive the creature was a great opportunity to highlight the capabilities of the chain, from the tracking in mocap and the retargeting of the skeleton in MotionBuilder through to the monster dancing on the XR stage.

A shortlist of mocap artists capable of creating a great monster performance and transitioning into a fun dance routine was identified with casting support from Ryan, staying within the limitations of the monster's size and shape. The creative team were happy with the list and they were able to cast Marta Lune in the role.

The VP Producer (Hanah Draper) and Director (Francesca De Bassa) met with Virtual Production Producer (Dan Munslow) and Ryan to discuss in further detail casting the roles of Mocap artist and Performer. 

Casting - Live-Action Hero Actor

Casting identified Raymond Bethely, an actor and dancer who had credits with a similar sentiment to what the storyboards were looking for (see Dom Dolla - Take It (Official Music Video)). Luckily Ray was available and able to meet with the Director to discuss the role further.

At the rehearsal for the shoot with the actors, Final Pixel set the volume for Marta to play in and got her into the mocap suit so they could identify any obstacles or opportunities to amend anything from the mocap tracking side. A big obstacle for Marta was understanding the latency from her movement to the screen that Ray was interacting with. This meant working out how and when she would start her movement. They decided to record the mocap data, which meant that they would need to communicate with the VP Supervisor (Ellie Clement) when they were rolling and cutting etc.

7. Camera, Lights and other

The camera chosen was a Blackmagic Ursa 12k, which was already rigged to NCAM and owned by the stage. Lenses were also provided and for ease the team used them rather than bringing any extra in. The aim with this shoot was to do a proof of concept around the mocap and creature pipelines; Final Pixel already has well-established approaches to camera choice and lenses, so these were not the focus of this project, and that budget could be put towards the main objectives of the research.

CAMERA RIG

1x Blackmagic Ursa Mini Pro 12k - EF mount

1x Blackmagic Video Assist 12G SDI monitor

1x Manfrotto 509HD tripod

1x Hague K14 Pro-Jib

1x Tilta MB-T03 Matte Box

S+O MEDIA KIT: 

ARRI LMB-15 CLIP-ON MATTEBOX 

TIFFEN ROTATING POLA

PV BLACK SATIN FILTER SET

PV INCLUDES 1/8, 1/4, 1/2, 1, 2 

TERADEK BOLT PRO 500 TX/RX

TERADEK BOLT SIDEKICK II

SONY OLED 17" MONITOR W/ PUP STAND 

SACHTLER 30 TRIPOD

LENSES

DZOFILM Vespid Prime lens 25mm-35mm-50mm-75mm-100mm

DZOFILM Pictor 20-55mm and 50-125mm zoom lens

All encoded via Loled 

STEADICAM

1 x GPI Pro cinelive ℅ Austin

The camera was rigged with the NCAM system on a tripod dolly right up until the morning of the shoot day. The camera and tracking system was then all transferred onto the Steadicam rig on the morning of the shoot.

The team had some issues with the transfer of the rig over to the steadicam, which took longer than expected and ultimately required re-work on the tracking map also. 

Director of Photography (Keith Gubbins) reported that the operator (Austin Phillips) found it very challenging to operate with the loom and weight of the rig. The loom had to be held almost straight horizontally behind the camera in order to allow the Steadicam to pan smoothly. Finding a way to minimise the loom, using thinner cables or wireless where possible, would be an advantage. Rigging the camera and box centrally would help with balance too. Overall though, using the Steadicam was a massive advantage as it added movement and quick repositioning.

LIGHTING (Studio provided)

1x GrandMA3 lighting desk

4x Litepanels Gemini 1x1 (2x truss rigged as backlight, 2x ground stacked on tripod)

2x ClayPaky AXCOR Spot 300 moving light (truss rigged)

All fixtures are DMX controlled

LIGHTING (℅ Final Pixel /via Gaffer)

LED

2X ORBITER

2X VORTEX 8

1X AX5

2X LITE MAT 4L

STANDS 2X ASL

2X DOUBLE WIND UPS

4X AMERICAN STANDS

2X LOW BOYS

6X C-STANDS+ 4X ARMS

4X SHOT GUNS+ 2X ARMS

GRIP 4X MAGIC ARMS AND K-CLAMPS

1X TURTLES

4X CARDELLINI CLAMPS

2X 2K SPIGOT 2

4X 5/8 SPIGOT

6X SAFETY BONDS

2X 2INCH POLY HOLDER

2X 1 INCH POLY HOLDER

1X SMALL, MEDIUM, LARGE POLE CAT

1X MEGA BOOM ARM

1X C-STAND BOOM

2X JUMBO KNUCKLES

8X KNUCKLE

DISTRIBUTION 2X 5K DIMMER

2X 2.5KW DIMMER

4X PRACTICAL DIMMERS

6X 32AMP 50FT

4X 25FT 32AMP

2X K-9

2X 63AMP FDU

4X 63AMP 50FT

4X 50FT 16AMP

6X 25FT 16AMP

6X 16-4X13

6X Y-CORDS

6X 13-16JUMPER

2X 13AMP RCD

8X SANDBAGS

MEDIUM STEPS

FILTER KIT

COLOUR KIT INCLUDED

CONSUMABLES

FOAMCORE

POLY

SPARE BULBS

BOLTON BAG

Gaffer (Robbie Smith) 

8. Production

Giving new and fresh talent the opportunity to gain experience and understanding of this kind of production was key to Final Pixel’s approach. As a research & development project, it also allowed them to treat the budget economically without the expense of skill. 

Final Pixel spoke with the industry’s top diary services and agents and engaged with a crew that expressed an interest in virtual production and wanted to learn how the workflow differs from a standard motion shoot. 

With the industry extremely busy following the halt in productions during the height of the COVID-19 pandemic, there were a few production challenges. Because of the nature of virtual production, there is a degree of additional prep necessary to ensure all tests and checks are completed for a successful production. However, because virtual production is such an exciting new prospect for many crew, and because Final Pixel was exploring the concept of a motion-capture cast interacting with a real-life actor, Final Pixel was able to find a crew that was eager and willing to work at a reduced rate for the length of time needed to prep for the shoot day.

There are a number of additional technical departments compared to a standard motion shoot, and it is important that kit and equipment lists are circulated repeatedly to ensure there are no holes or areas that could be overlooked. Lenses have to be calibrated with the tracking system and this takes time, so the schedule needs to accommodate it. Luckily, at the Digital Catapult and Target3D Virtual Production Test Stage, the camera and tracking system with lenses are already set up. This allowed for a shorter prep period, which subsequently worked for their budget.

With virtual production, it is important to ensure that there is a detailed conversation happening between the different departments. 

The constant discussion - whether earlier in the design process via Discord, in preparation between production and HODs, during the shoot between Director and 1st AD, or when finalising in post-production - requires meticulous communication to ensure the delivery of a successful film.

The DOP, Keith Gubbins, and Gaffer, Robbie Smith, needed to have a thorough plan with the Director of Virtual Production, Michael McKenna. The beauty of Unreal is that the virtual world's levels can be manipulated relatively quickly to match the foreground. This collaboration allows the two worlds to be matched harmoniously.

Bringing motion capture into the workflow when designing the meta-monster allowed the Director to build an established character personality. Casting was a key part of this exercise, as it was important for the demo to focus on the interaction between the real-life and virtual cast members. Final Pixel engaged motion capture artist Marta Luné to embody the meta-monster, enabling the team to realistically create and explore the antagonist. Francesca directed Marta and actor Raymond Bethley alongside the DOP to strategically place the performance, ensuring that the two worlds combined seamlessly.




Bringing together the virtual world with the foreground requires an understanding of how best to blend these media. Fortunately, the Director, Francesca DeBassa, has a background in art department, directing and virtual production, and so was able to build all of these areas into her vision. She worked alongside Kate Parnell, the Art Director, who under Francesca's brief dressed the foreground to complement the virtual world and draw the two together. Due to the size of the budget, both had to work creatively to ensure there was enough dressing to do so.

Team

Final Pixel operates a hugely international and remotely supported workforce on virtual production shoots. This allows the team to pool expertise and deliver incredible efficiency in the Virtual Art Dept. Final Pixel had people working on this shoot from locations across the US, UK and Europe. The on-set team is mainly a local one, but always supported remotely with deep expertise.

Diary of shoot day - 20th October 2021

8:00 

Arrivals at the Digital Catapult and Target3D Virtual Production Test Stage Guildford

Crew and cast arrive on set, systems are up and running, LED already on and environment running. Set is dressed and lit

Zoom link is set up 

The brain bar are setting up the multi user edit across the edit workstation and the Disguise RX nodes. 

The mocap volume is calibrated: a tracking wand is swung around the space to get all of the Optitrack cameras to map the space. The wand has three reference points, and it can be registered as long as two or more cameras can see it. When moving the tracking wand around the space, the light rings around the cameras reflect the movement. A ground plane is set.

The camera is then transferred to a steadicam from the tripod, all the wiring and additional units have to be correctly derigged and then installed again on the new mount. 

9:15

Set cam is now mounted but no tracking data from NCAM yet

No mocap data is being received after calibration for the creature

DMX lighting not finished, so the team will have to move on without it if it is not working in the next few minutes.

9:30 

Multi-user workflow parked as it was causing too many issues again

The team have camera tracking, and mocap data 

But no creature in the scene

9:50 

NCAM zero point moved when the team transferred onto the Steadicam, so they have to recalibrate in Disguise.

Trying another way: resetting the NCAM zero point. The team had a problematic adapter feeding NCAM from the camera; fixing this mostly resolved it, but it is still slightly out. Trying to recover again with two QR codes to help accuracy, but not much improvement.

Back to spatial recalibration in Disguise.

Added more points for calibration and pulled it in enough for in-camera shooting.

10:50 

First rehearsal with all tech semi-functional

Test 1 - Steadicam: too much moiré; lighting in Unreal needed updating to blend the divide more; too much light on the floor left a clear line

11:45 

Unreal light adjustments

Disguise mask added to blend bottom of screen.

12:25

First take 

Issue with music playback 

12:30

Takes 

2:15 

Break for lunch 

3:15 resume 

6:00 

Testing interaction between live and mocap actors, high fives, scratching, Rock Paper Scissors. Biggest issue was timing and latency, which would have improved if the team had more rehearsal time for later shots, or if latency could be improved more on the hardware/software side.

Wrap live actor

Continue to shoot mocap actor on LED

Freestyle mocap 

Recording MR set through Disguise.
 

9. Post

The post process was fairly straightforward - the team treated it like a location shoot: edit / mix / grade.

There was no VFX work done in post - everything was captured as final pixel in-camera during the shoot.

10. Results and discussion

Overall, there were great working relationships on set - good camaraderie - and this feedback came from many on set. Ensuring clear roles and responsibilities from the outset, a clear hierarchy in the virtual production department, and plenty of communication all played their part in this, and the value gained cannot be overstated on what are often very complex shoots.

Key learnings and suggested next steps 

On the shoot day, the first part of the day was spent picking up where the team left off in rehearsal. The actors spent more time working out the blocking and choreography. Marta would eventually wear headphones carrying a soundtrack that started earlier than the main soundtrack. This meant that the choreography between Ray and the monster would ultimately sync up. It also meant that the mocap artist was unable to hear much in between takes, including the cues from the 1st AD. Feedback from the artist was that the headrig and headphones became uncomfortable and heavy after wearing them for a few hours.

The mocap artist found it difficult to see herself in the small monitor and wasn't able to position herself well in the scene (the Steadicam also made this a bit more complicated). A monitor for the live actor could also have been useful, to help him position himself better and interact with the monster.

For the Director, it was a bit complicated to use the Steadicam, as it is difficult to plan the movements and where the mocap artist will be in the background. The monster should have been much closer to the Unreal camera in order to move less with the parallax of the background. Having the two actors interact and pretend to be in the same world works better with fixed cameras, where it is easier to put them in credible positions so they appear to be facing each other. Having more time, and being able to plan the actors' positions and the camera placement more precisely, could have worked better. The approach needs to be quite different from normal live action, with more planning of camera movements and actor positions.

As the takes went on, the performance and framing became slicker. Ray and Marta managed to establish a dialogue that had to flow through several layers of software, and pulled off an entertaining story.

Art / Direction

● General: the scenes where the monster was distant worked well; not being able to focus 100% sharp on the monster while defocusing the actor was limiting. The limitation of only being able to shoot from one side is quite a big one; of course the world could be turned to the other side too, but that would mean recreating a second real environment and planning the movement so the different cuts could be linked. There are many ways the team could approach the scene build differently to introduce greater flow between characters next time. Having the monster walk around the character would help to hide the fact that he is on a screen. The main limitation is that the monster will always be behind the actor and can never pass in front. Everything could work perfectly with 20% of post-production added to the virtual production, so that the monster could be in front of the artist a few times to trick the viewer, or this could be recorded live in-camera using an AR plate in Disguise. There could also be the option to put the (recorded) actor into the Unreal world to have him behind the monster in some scenes.

● Foreground/Background tie-in: In this case there wasn't as much time as the team would have liked to adjust the real background to the 3D one, and some foreground elements which would have given more depth were missing. This is not related to the test; it is more about the limited time/budget Final Pixel had to operate with, and about putting more effort into the tech proof of concept rather than set builds - which Final Pixel knows how to do well. Always having an element which can be found in both worlds would help the two characters look like they are in the same environment (e.g. a set of car keys or any object they could touch and move around), as would having a bigger object positioned in the same place in different shots to play with (e.g. the actor sits on a chair in one scene, and in another scene the monster moves the chair and sits on it or does something with it). A continual theme with VP, and a big takeaway from this project, is that it's not enough to make a 3D environment look good. The optimisation/performance aspects must be kept in mind in basically every creative decision.

Creature Build

Overall this process was pretty smooth, as the team was working with a very established feature-film pipeline and experienced artists. Where it came to the interaction with a VP workflow, the team had a few takeaways for the future:

● Make sure bind poses are being used - the model wasn’t put in a T-pose for delivery

● Build in enough time for mapping - would work out better than when done on the fly

● Ideally a feedback loop is running the animation cycles back to the artist to refine further - using data from the stage during test / rehearsal days. Some problems with collisions would have been better fixed doing this.

● File management protocols needed tightening up - e.g. delivery to Gdrive versus uploading assets to Shotgrid

● In future they will use a delivery requirements doc specifying where files are saved and in what format

● In Unreal the groom needed to be popped into an asset - it is important to set the workflow for who is doing final assembly in Unreal to avoid delays

Unreal Process

● R&D process: As is the nature of a research and development exercise, a lot of time was lost to development and fixing problems. This took away from the time available for actual filming, and the final output would have benefitted significantly from more of it.

● Version control: This caused delays in the rehearsal phase - the workflow schedule and plan needed to be set much earlier and incorporated before everyone was on set. Expanding the Shotgrid pilot to incorporate more of the creature pipeline would have been good and will be actioned going forward. At times, artists had to duplicate the map/level because the version control process was being circumvented.

● Optimization: Framerates were low even in editor, so really needed the extra rendering power provided by cluster rendering and the NVIDIA A6000 cards (as discussed throughout this case study).

● Fog problem: The RenderStream feed containing the creature showed a dark fog around it. In the end this was eliminated by moving the creature into its own streamed level.

● Project iterations: The time between generating a fix and publishing it to the wall needs to be reduced. The requirement to back up to Perforce could be removed, instead reconciling the offline changes when time was available. A 30-second test window was implemented: if the team wanted to see whether something would work on the wall, a piece of code reverted it after 30 seconds. This allowed settings such as the DMX to be tested without disturbing the shoot or requiring as many reboots (a minimal sketch of this revert pattern follows this list).
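The following is a minimal sketch of how such a timed revert could be written in Unreal C++. The class, component and function names here are hypothetical illustrations of the pattern described above, not Final Pixel's actual on-set code.

```cpp
// Sketch of a "try a setting, revert automatically after 30 seconds" helper.
// Assumes a hypothetical ATimedTestActor with a spotlight whose intensity is being trialled.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SpotLightComponent.h"
#include "TimerManager.h"
#include "TimedTestActor.generated.h"

UCLASS()
class ATimedTestActor : public AActor
{
    GENERATED_BODY()

public:
    // Light whose settings are being trialled on the wall (hypothetical component).
    UPROPERTY(EditAnywhere)
    USpotLightComponent* TestLight = nullptr;

    // Apply a candidate intensity, then automatically revert after 30 seconds.
    UFUNCTION(BlueprintCallable)
    void ApplyTestIntensity(float CandidateIntensity)
    {
        if (!TestLight) return;

        SavedIntensity = TestLight->Intensity;
        TestLight->SetIntensity(CandidateIntensity);

        // Schedule the revert so the shoot is not disturbed if the test is forgotten.
        GetWorld()->GetTimerManager().SetTimer(
            RevertHandle, this, &ATimedTestActor::Revert, 30.0f, /*bLoop=*/false);
    }

private:
    // Restore the value that was in place before the test.
    void Revert()
    {
        if (TestLight)
        {
            TestLight->SetIntensity(SavedIntensity);
        }
    }

    float SavedIntensity = 0.0f;
    FTimerHandle RevertHandle;
};
```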

Disguise + Cluster Rendering workflow:

● Splitting the creature into its own level: This provided the team with a couple of new challenges. The main one was that the team could not get the lighting and shadows working correctly. They attempted to add a secondary monster to the main level without the groom, however it caused problems, so they opted not to use shadows in the end.

● Adding light to creature blueprint: In order to light the creature correctly, the team ended up adding the light to the creature blueprint (see the first sketch after this list). This worked well, and they even managed to expose the variables to D3 so that much of the light function could be easily controlled.

● Problem referencing character: When adding code in the editor, things worked very well; however, the same code would not play on the wall. The main problem seemed to be that the team had trouble getting a reference to the actor and object within the level.

● Making a solution to object location: Final Pixel tried to control a number of different variables exposed to D3. There were issues between relative space and world space. This could be solved by writing a script so that, regardless of where elements are moved, the object in D3 always matches up with the transforms of the actors in the scene, whatever their location and rotation (see the second sketch after this list).

● Shaders compiling: Shaders were constantly compiling on Disguise on a few of the days. Cumulatively, this consumed a lot of time.

● Networking issue:  It's important to always obtain a full technical schematic in good time before the shoot. 
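First sketch: a minimal illustration in Unreal C++ of parenting a light to the creature's own class/Blueprint so it always travels with the performance. The class and property names are hypothetical; how the exposed values are then surfaced to D3 depends on the project's RenderStream configuration and is not shown here.

```cpp
// Sketch of attaching a key light directly to the creature so lighting follows the character.
#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "Components/SpotLightComponent.h"
#include "CreatureCharacter.generated.h"

UCLASS()
class ACreatureCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    ACreatureCharacter()
    {
        // Key light parented to the creature's skeletal mesh so it always follows the performance.
        KeyLight = CreateDefaultSubobject<USpotLightComponent>(TEXT("CreatureKeyLight"));
        KeyLight->SetupAttachment(GetMesh());
        KeyLight->SetRelativeLocation(FVector(0.f, 0.f, 250.f));
    }

    // Exposed so operators can adjust the creature's lighting without a code change.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Creature Lighting")
    USpotLightComponent* KeyLight = nullptr;

    // Single entry point for driving the light from elsewhere (e.g. an exposed control).
    UFUNCTION(BlueprintCallable, Category = "Creature Lighting")
    void SetKeyLight(float Intensity, FLinearColor Colour)
    {
        if (KeyLight)
        {
            KeyLight->SetIntensity(Intensity);
            KeyLight->SetLightColor(Colour);
        }
    }
};
```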
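Second sketch: one way the relative-space versus world-space mismatch could be handled, by always expressing the creature's transform relative to a shared reference actor (for example a stage-origin marker) before handing it on. The function and actor names are hypothetical assumptions, not the script Final Pixel used.

```cpp
// Sketch of converting a creature's world transform into a stage-relative transform,
// so the value stays consistent however the level content is repositioned.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

// Returns CreatureActor's transform expressed in StageOrigin's local space.
FTransform GetCreatureTransformRelativeToStage(const AActor* CreatureActor,
                                               const AActor* StageOrigin)
{
    if (!CreatureActor || !StageOrigin)
    {
        return FTransform::Identity;
    }

    // World-space transforms of the creature and of the reference frame we want it in.
    const FTransform CreatureWorld = CreatureActor->GetActorTransform();
    const FTransform StageWorld    = StageOrigin->GetActorTransform();

    // The relative transform stays valid even if the whole environment is moved or rotated.
    return CreatureWorld.GetRelativeTransform(StageWorld);
}
```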

Mocap Workflow:

Once set up, the mocap workflow worked incredibly smoothly, and it was exciting to watch a live creature being driven on the virtual production stage by a motion capture artist.

● MotionBuilder made a lot of the Inverse Kinematics (IK) work very easy. 

● Streaming to Disguise: Aside from the detailed discussion above in the mocap section, one other issue was solved by changing the d3 net “preferred synchronization adapter” on all d3 machines (VX and RX). This had been set to the RenderStream fibre network (as suggested by Disguise for their workflow), 5.10.15.x. Changing it to the network on which MotionBuilder streams data (10.20.30.x) fixed the issue.

OSVP general Workflow improvements:

● LED Stage builds - It is critical to always allow enough time for control and integration between the LED wall and camera - at least two days of build and configuration to get to a satisfactory place. Day three would then be about loose ends and completing calibration before introducing the Unreal team to check performance, lighting, colour etc. More time would have been helpful, but the team were operating within budget constraints.

Following some hardware upgrades and changes close to the shoot date, the VP test stage team had issues with visible scan lines on the LED wall and did not have time to resolve them before shooting. These will need further investigation, as they were not fixed. The Colorlight or the new sync generator put in line may have been the root of the issue - it is possible it was not correctly configured and the timings/phase were off (different outputs can be set independently).

In addition, Final Pixel received a powerful purpose-built demo rack from Disguise, including 2x VX2 and 3x RX2 machines. Due to the loudness of the units, it was decided to place them in an isolated room. The cable run between the studio control and the Disguise rack was about 10m, however, due to the lack of HDMI 2.0 active cables and genlock outputs available, they could only use 2x RX2, feeding into their own VX2. The frame rate was set to 30fps to improve performance, which was further improved by dropping the Disguise MR set backplate mapping from 3840x2160px to 2560x1440px. 

Noticeably, even though Disguise was giving low frame rate warnings on the mocap character RenderStream feed, it looked OK on screen; the frame rate was sometimes as low as 17fps against the expected 30fps. The shoot went ahead with this warning still active (discussed further below). There were also some issues with banding in the fog, giving a low bit-depth look rather than a true 10-bit look. The suspected root of many of these issues, and the knock-on effect on environment optimisation, was the networking setup of Disguise (discussed more below).

● Perforce setup on the on-set machines was somewhat problematic and time-consuming. During stage setup, the edit machine was out of action for quite some time, partly due to version control issues and network download speeds. One useful adaptation to come out of this exercise is a modified Perforce process. Normally the edit workstation connects to source control from within UE and stays connected, so all the changes that are made are quietly tracked in the background. When regularly syncing with Disguise this is not desirable, because source control is disabled before each sync to de-risk the process, and constantly logging into and out of source control within UE is time-consuming at exactly the moment when time is most precious. So when actively shooting it is much better to leave source control disconnected, and ‘manually’ collate and submit the changes to Perforce periodically using the Reconcile Offline Work feature. Version control is simple in principle but complex in practice - another good reason for always nominating an on-set Perforce admin who can perform complex operations whenever one is needed. Final Pixel used part of this time to develop some further ground rules for Perforce for its teams:

○ Charge one person with responsibility for Perforce setup across all on-set machines

○ Establish a naming convention for workspaces

○ Establish an operating convention for the prep phase and the shoot phase

● Headsets: headsets could be useful for on-set communication and should be trialled.

Multi-User Edit

● Multi-User: Multi-user editing allows users to make changes on the fly, but it has limitations. When finished making changes, the user can save and sync the project across. A multi-user session can stay open for as long as needed; at the end of a session, stop the D3 session, save on the edit station PC, then sync across. The multi-user session crashed during operation and some work was lost. This is a very useful tool, but it still needs to be defined how it will be used when multiple artists are working in a Disguise environment on a fast-paced shoot.

N-Cam

● Camera Tracking: The zero point for this Ncam setup was the bottom right-hand corner of the LED screen. This seemed to introduce inaccuracies, as it is only one point of reference: if the data is lost or the optics are blocked at any point, the tracking repositions itself and the map can end up out of position. The zero point can be set anywhere and has in the past been set to a point on one of the QR-style icons on the ceiling. When tracking data is lost, Ncam has to use the QR-style icons on the ceiling to correct its position and cannot rely solely on the tracking stars to reposition itself. This is not completely accurate, and combined with the zero-point issue above it can cause problems in the spatial data/map, meaning that after a lens change or a loss of tracking data the Disguise system may also need to be recalibrated.

DMX Lighting

● DMX: Final Pixel designed DMX-controlled fixtures in Unreal with the idea that they would create a spotlight for the creature to dance in. They even had a Blueprint running that made the lights follow the creature around the scene (a minimal sketch of this follow behaviour is shown below). However, due to the rendering limitations, they were unable to implement this: the movable lights had to be changed to static to counter the performance issues when playing through Disguise. They then set up lighting control of the Unreal fixtures via DMX, sending the same RGB values as were being sent to the disco-effect lights. There appeared to be an issue with the colours not matching, but there was never time to troubleshoot it. An ongoing issue with getting light onto the character meant that, in the end, this could not be implemented as planned.
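Below is a minimal Unreal C++ sketch of a "follow spot" behaviour similar in spirit to the Blueprint described above: a spotlight actor that re-aims at a target actor every frame. The actor and property names are hypothetical illustrations, not Final Pixel's production code.

```cpp
// Sketch of a spotlight that continuously aims itself at a target actor (e.g. the creature).
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SpotLightComponent.h"
#include "Kismet/KismetMathLibrary.h"
#include "FollowSpotActor.generated.h"

UCLASS()
class AFollowSpotActor : public AActor
{
    GENERATED_BODY()

public:
    AFollowSpotActor()
    {
        PrimaryActorTick.bCanEverTick = true;

        // The spotlight doubles as the root so rotating the actor aims the beam.
        SpotLight = CreateDefaultSubobject<USpotLightComponent>(TEXT("FollowSpot"));
        RootComponent = SpotLight;
    }

    // The creature (or any actor) the spotlight should track.
    UPROPERTY(EditAnywhere, Category = "Follow Spot")
    AActor* Target = nullptr;

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        if (!Target) return;

        // Aim the light at the target every frame.
        const FRotator LookAt = UKismetMathLibrary::FindLookAtRotation(
            GetActorLocation(), Target->GetActorLocation());
        SetActorRotation(LookAt);
    }

private:
    UPROPERTY()
    USpotLightComponent* SpotLight = nullptr;
};
```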

Summary of key issues resolved:

● Ability to play iPhone Live Link data through Unreal and play it in Disguise

● Ability to play OptiTrack mocap data through Unreal in standalone mode

● Cluster rendering in Disguise of an object (the creature) on one RX2 render node and the background (parking lot) on another RX2 render node, composited on screen and in-camera - however, the team had to scale down to 2K (fine for wall resolution)

● Successfully ran Ncam on a Steadicam (non-gimbal)

● Successfully rendered a complex groom in Unreal - Niagara-based effects of 2 million individual hairs

Root causes of problems

This is an analysis of the root causes of some of the issues that came up in this research and development activity. Looking across many of the issues identified above, there were two key root causes for which solutions and recommendations can be identified going forward, both of which had a cumulative impact on time, productivity and quality.

First root cause: Setup of the stage, in particular, Disguise Hardware and Networking Optimisation:

The root problem behind many of the key issues that arose in this R&D test shoot was that the Disguise setup on the stage could potentially have been further optimised for this type of project. The addition of the higher-spec kit was a late hardware change, so some of the decision-making and setup happened late in the day. It seems that while higher overall render speeds were achieved, the setup may ultimately have imposed a greater performance limitation than should have been expected given the hardware available. As a result, the knock-on impacts on the shoot objectives were compounded: dropped frames required even higher degrees of optimisation than normal, setting off a cumulative impact over time across other departments - e.g.:

● Decreased resolution from 4K to 2.5K - which for the purposes of a test was passable, but may not be acceptable in a production environment.

● Additional scene optimisation - which led to removing all but one movable light, impacting the effectiveness of the lighting on the subject and losing the DMX-controlled lights which were due to illuminate the creature in a more appealing way.

● The creature had to be split into a separate level for cluster rendering. It is still unclear what the root cause of this issue was, but Disguise support advised that the setup may not have been optimal.

● The knock-on impact of the above meant more time was spent trying to fix the lighting on set in a pressured, slow environment.

● Also, the time lost on the pre-light day troubleshooting the rendering issue meant these lighting fixes had to happen on the morning of the shoot day.

● All of this ultimately compressed the performance time with the actors and the range of tests that could have been achieved.

The good news is that this is completely solvable with greater lead time and planning on hardware; as a result, a performance boost and a smoother overall workflow are expected next time on what is a highly complicated and innovative setup.

Key conclusion 1: This solidifies Final Pixel's best practice of always setting clear hierarchies on a virtual production set and establishing who is responsible for which area at the earliest stage of the process. Having a technical lead and a responsible ‘owner’ of the hardware setup, alongside a clear technical diagram prepared ahead of time, has proven very helpful in this regard. Clarity on how the stage is wired, visible to the whole technical team and support partners, assists with rapid debugging when time is of the essence deep into a shoot. Any opaqueness here may lead to potentially critical problems in a real shoot environment, especially where there is any complexity beyond simple backplates / static Unreal environments, and high degrees of innovation as there were here.

Unfortunately, due to the low-budget nature of this shoot and the limited time/resources available, the discussions on physical setup and the related hardware changes were happening very late in the prep phase ahead of the shoot day. It is always important that there is sufficient time and budget in a VP shoot to allow for technical prep of this nature - in particular where innovation levels in the shoot are high.

Second root cause: Speed of Unreal iterations playing on the wall via Disguise

The team had many potential solutions to the problems stemming from the optimisation challenges compounded above; however, the process of making changes in Unreal and then pushing the update to Disguise, as opposed to being able to edit quickly live on the LED wall, made for a long iteration and debug process.

Key conclusion 2: Obtaining a 100% stable and reliable multi-user editing setup, or alternative tools to enable live changes, is important not just for aesthetics, client changes and creative direction - it also helps significantly with debugging and can greatly improve the likelihood of addressing issues that come up on set. If anything, this is a more important reason for having the functionality than the former.

Assessment against Objectives

Objectives

Final Pixel sought to answer the following key questions:

1. Can a real-time computer-generated character be created of comparable quality to that used in features and high-end TV, and run in real time in Unreal Engine?

YES

2. Can that creature be rendered on a virtual production stage?

YES

3. Can that creature give real-looking actions driven from a choreographed move designed by a Director?

YES

4. Can body and facial capture be combined on the same creature using different mocap inputs?

YES

5. Can those movements be delivered successfully to the LED wall in real time through use of live motion capture data?

YES

6. Can believable interactions be created between a live-action ‘real world’ actor and a creature animation on the LED wall?

IN SOME CASES

7. Can this successfully be incorporated into Final Pixel's current workflow and pipeline?

YES

11. Conclusion and next steps for further research

What is most promising is that the issues in the key findings from this shoot are largely known issues relating more generally to the nature of virtual production using this workflow. The live body and facial mocap performance and in-camera composite was extremely smooth at times with minimal latency, demonstrating clear potential when incorporated into a more typical shoot with appropriate budget. 

This successful proof of concept has established a clear workflow for creating in-camera composites using creatures of comparable quality to traditional CG pipelines rendered in real time on LED for virtual production, driven by live body and facial motion capture data. 

The potential uses of this approach are many, and potentially significant:

● The most obvious is in the virtual production pipeline for film, TV and advertising, like at Final Pixel, allowing for live interactions between digital and human characters, all filmed in real time and in-camera.

● Live-action mocap with creatures and characters which can then be replaced by full-scale CG in post - thereby capturing more ‘natural’ actor reactions and engagement versus the use of green screen.

● Creating higher-fidelity augmented reality plates using the enhanced functionality of Disguise as a stage management tool, in particular for live broadcasts.

The next challenge to address with this approach is the successful direction of the mocap actors, given they are unable to see their performance other than through a monitor. Future tests may employ a VR headset or some other immersive means to put the mocap artist in a world where they too can interact smoothly with the ‘real world’ actor.

From the perspective of what Final Pixel set out to achieve, this test was a huge success. It was designed to throw challenges at the systems and break things, which it did - and the team now knows what to do next time to improve - the very essence of doing effective research and development.

Credits

This project would not have been possible without the Virtual Production Test Stage (VPTS) run by Digital Catapult and Target3D. The whole Final Pixel team is hugely thankful for all the support, collaboration and facilities provided by the VPTS. You guys rock!

  

CEO & Director of Virtual Production: Michael McKenna, Final Pixel

Creative Director: Christopher McKenna, Final Pixel

Executive Producer: Monica Hinden, Final Pixel

Motion Capture Performance Artist: Marta Lune, ℅ Final Pixel

Hero Dancer: Raymond Bethely, ℅ Final Pixel

Director: Francesca de Bassa, Final Pixel

Director of Photography: Keith Gubbins, ℅ Final Pixel

Virtual Production Producer: Hanah Draper, Final Pixel

VFX Supervisor: Steve Hubbard, Final Pixel

Virtual Production Supervisor: Ellie Clement, Final Pixel

Snr Technical Artist: Ed Bennett, Final Pixel

Jnr Technical Artist: Dom Maher, Final Pixel

Jnr Technical Artist: Diona Potopea, Final Pixel

Virtual Production Stage Technician: James Codling, Final Pixel

Lead Environment Artist: Andras Ronai, Final Pixel

Lead Character Artist: Judah Kynard, Final Pixel

Groom Artist: Nick Burkard, Final Pixel

Digital Catapult Producer: David Johnston, Digital Catapult

Managing Director: Allan Rankin, Target3D

Virtual Production Producer: Dan Munslow, Target3D

Mocap Movement Director: Ryan Mercier, Target3D

Chief Technology Officer: Ashley Keeler, Target3D

Head of Virtual Production Test Stage: Fabio Rinaldo, Target3D

Mocap Lead: Harry Piercy, Target3D

Technology Systems Engineer: Graham Keith, Target3D

Production Manager: George Murphy, ℅ Final Pixel

1st AD: Elise Martin, ℅ Final Pixel

Assistant 1: Roxie Oliveira, ℅ Target3D

Assistant 2: Nazia Zaman, ℅ Target3D

Steadicam Operator: Austin Phillips, ℅ Final Pixel

Focus Puller: Sam Harding, ℅ Final Pixel

Camera Assistant: Joe Thompson, ℅ Final Pixel

Gaffer: Robbie Smith, ℅ Final Pixel

Action Cars: Steve Royffe, ℅ Final Pixel

DIT: Kato Murphy, ℅ Final Pixel

Sound: Max Frith, ℅ Final Pixel

Production Design: Kate Parnell, ℅ Final Pixel

BTS Filmmaker: Peter Collins, ℅ Target3D

Editor: Carlos Almonte, Final Pixel

At the Edge of the Metaverse: Live Body and Facial Motion Capture for LED Wall Virtual Production, with Rendering of High Quality Digital Characters in Real-time : Version 1.0 published 13th December 2021
