Fake Tom Cruise


Chris Ume is a European VFX artist living in Bangkok who has shot to international attention with his Tom Cruise deepfake videos, the DeepTomCruise posts. Ume has demonstrated a level of identity swapping that has surprised and delighted the community in equal measure. Since he started posting the videos of Miles Fisher’s face swapped with Tom Cruise’s, his email inbox has been swamped with requests for advice, help, and work. What has caught the imagination of so many fellow artists is how the TikTok videos have Fisher breaking the ‘rules’ of neural rendering or deepfakes. In the videos, DeepTomCruise pulls jumpers over his face, puts on and takes off glasses and hats with no apparent concern for occlusion, and regularly has his hair or his hand partially covering his face.

Ume uses the free AI or machine learning (ML) software DeepFaceLab 2.0 (DFL) as his backbone, but the process is far from fully automated. For each short video, Ume spends 15 to 20 hours working to perfect the shot and sell the illusion. While anyone can download the software, the final clip is anything but a one-button-press solution. As with all VFX, the artist’s role is central, and what looks easy and effortless on-screen is actually complex and oftentimes challenging.

Each video starts with a conversation with Tom Cruise impersonator Miles Fisher. It is actually Fisher who films himself and sends the videos to Ume. There is never a tight script; Ume has explained the known limits and invited Fisher to push the boundaries. Ume does not direct the actor, and to date, only one video has had to be reshot. In the original version of the lollipop clip, Fisher too often came very close to the camera, turned, and dropped in and out of frame.

Ume uses DFL 2.0, which no longer supports AMD GPUs/OpenCL; the only way to run it is on an NVIDIA GPU (a minimum CUDA compute capability of 3.0 is required) or on the CPU. Ume uses an NVIDIA A6000 card. The specific build of DFL 2.0 that Ume uses is faceshiftlabs, a GitHub fork of the main DFL code.
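For anyone tempted to try DFL themselves, the GPU requirement above is worth checking before committing to days of training. What follows is a minimal sketch, not part of Ume's pipeline, that assumes PyTorch is installed purely as a convenient way to query the GPU:

```python
# Minimal sketch (not from Ume's setup): check whether a local NVIDIA GPU
# meets DFL 2.0's stated minimum of CUDA compute capability 3.0.
import torch

def check_dfl_gpu(min_capability=(3, 0)):
    """Report whether any local CUDA GPU meets the stated minimum capability."""
    if not torch.cuda.is_available():
        print("No CUDA GPU found: DFL 2.0 would fall back to much slower CPU training.")
        return False
    any_ok = False
    for idx in range(torch.cuda.device_count()):
        name = torch.cuda.get_device_name(idx)
        cap = torch.cuda.get_device_capability(idx)  # e.g. (8, 6) for an Ampere A6000-class card
        ok = cap >= min_capability
        any_ok = any_ok or ok
        print(f"GPU {idx}: {name}, compute capability {cap[0]}.{cap[1]} -> {'OK' if ok else 'below minimum'}")
    return any_ok

if __name__ == "__main__":
    check_dfl_gpu()
```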


Fisher films the base clips on his iPhone and sends the files to Ume. The resolution is not high, similar to 720p, but at the end of each process Ume performs an up-res. He prefers to do this on the final combined comp, as he feels it is often a mismatch in sharpness and perceived resolution that makes a deepfake look unrealistic.
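To illustrate the idea of up-ressing the finished comp rather than its individual elements, here is a hedged sketch using OpenCV; it is not Ume's actual tool, and the filenames and target size are placeholders:

```python
# Illustrative sketch only: up-res the *finished* comp rather than individual
# elements, so foreground and background share one sharpness character.
import cv2

def upres_clip(src_path="comp_720p.mp4", dst_path="comp_1080p.mp4", target=(1920, 1080)):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    out = cv2.VideoWriter(dst_path, fourcc, fps, target)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Lanczos resampling keeps edges reasonably crisp without heavy ringing.
        up = cv2.resize(frame, target, interpolation=cv2.INTER_LANCZOS4)
        out.write(up)
    cap.release()
    out.release()

if __name__ == "__main__":
    upres_clip()
```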

A key part of Ume’s process is Machine Video Editor (MVE), a free, community-supported tool for deepfake project management that helps with everything from data gathering through to compositing and fully supports DeepFaceLab and its data format. Ume uses it extensively for the supporting mattes that are required for the later compositing work.

When doing any such ML work, the training stage is time-consuming, and Ume normally allows “2 to 3 days at least, maybe more, depending on how quickly the shot clears up” to tackle a new subject such as DeepTomCruise. While it is his work on DeepTomCruise that most people know, Ume has done many similar projects with different subjects and targets.

The focus of MVE is neural rendering project management, and it allows Ume to keep all his DFL training material in a single project folder, with tools for data scraping and extraction, advanced sorting methods, set analysis, augmentation, and manual face and mask editing.

The program’s automatic face tagging avoids the need for manual identification of eyebrows, eyes, noses, mouths, or chins. The program is not open source, but it is free.
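MVE's tagging is its own implementation, but the general idea of automatic facial landmarking can be sketched with an off-the-shelf detector. The example below uses dlib's public 68-point model (assumed to be downloaded locally) and is purely illustrative of the concept, not of MVE's internals:

```python
# Generic sketch of automated face landmark tagging (not MVE's internal method).
# Assumes dlib is installed and its public 68-point model file is on disk.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local path

def tag_face(image_path):
    """Return a list of 68-point landmark sets, one per detected face."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    tagged = []
    for rect in detector(gray):
        shape = predictor(gray, rect)
        # 68 points covering brows, eyes, nose, mouth and the jawline/chin.
        tagged.append([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    return tagged
```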

DFL 2.0 has improved and optimized the process, which means Ume can train higher-resolution models or train existing ones faster. But the new version only supports two models, SAEHD and Quick96. The H128/H64/DF/LIAEF/SAE models are no longer available, and pre-trained models (SAE/SAEHD) from 1.0 are not compatible. Ume only uses SAEHD; he sees Quick96 as just a fast, rough test model, and while he has explored it, DeepTomCruise uses SAEHD.


All the compositing is currently done in After Effects. Ume is interested in exploring Nuke, especially with its new ML nodes such as CopyCat, but for now he knows AE so well that it is hard to shift applications. Some of the software in his pipeline only runs on PC, so that is the platform on which Ume does all his work.

As part of the compositing, Ume has experimented with changing hair color and patching skin textures, and he has noticed interesting artifacts carried from the training data into DFL’s output. For example, when Fisher leans very close to the camera, the lens distortion is sometimes not reflected in the result. This means the new DeepTomCruise has a jaw with the wrong apparent width, one that does not recede correctly with its distance from the lens. A face close to the camera at eye level will have a relatively thinner chin due to the wide-angle effect, but this is rare to see in actual Tom Cruise footage, as the actor is seldom shot this way. In these cases, Ume uses the jaw much more from Fisher than from DeepTomCruise.
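The wide-angle effect Ume describes falls straight out of pinhole-camera geometry: projected size scales with focal length over distance, so a jaw sitting a few centimetres farther from a very close lens renders noticeably narrower relative to the eye line. A toy calculation, with entirely invented measurements, makes the point:

```python
# Toy pinhole-camera arithmetic (illustration only; all numbers are invented).
def projected_width(real_width_cm, distance_cm, focal_mm=24.0):
    # pinhole projection: image size = focal length * object size / distance
    return focal_mm * real_width_cm / distance_cm   # image-plane width in mm

for camera_distance in (30.0, 150.0):            # selfie distance vs normal framing, in cm
    eyes = projected_width(14.0, camera_distance)          # face width at eye level
    chin = projected_width(12.0, camera_distance + 8.0)    # jaw sits ~8 cm farther back
    print(f"{camera_distance:>5.0f} cm: chin/eyes width ratio = {chin / eyes:.2f}")
# At 30 cm the ratio is markedly lower than at 150 cm, i.e. the jaw reads thinner up close.
```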

Ume is very collaborative, working with VFX houses and with the major artists in the deepfake space. A group including users such as ctrl shift face, futuring machine, deephomage, dr fakenstein, the fakening, shamook, next face, and derpfakes, who collectively represent some of the best-known creators in the deepfake community, all share ideas and work to demonstrate the sort of amazing work that can be done with neural rendering technology.

Miles Fisher has sent respectful emails to Cruise’s management explaining that his and Ume’s work is just to explore and educate about deepfakes and neural rendering technology, and he has vowed to never use DeepTomCruise to promote a product or cause. Ume’s primary aim is to educate as to what is possible and to build his own career in visual effects. “My goal was to work with Matt & Trey (South Park), which I am now doing. My next goal is to work with ‘The Lord of the Rings‘ team. I grew up watching those movies over and over again,” Ume explains, admiringly referring to Weta Digital.


Twin Peaks Meets Fargo Meets Alf: Resident Alien


Syfy’s Resident Alien stars Alan Tudyk (Firefly) as “Dr. Harry Vanderspeigle,” an alien who has taken on the identity of a small-town Colorado doctor. CoSA VFX is the primary visual effects vendor for the 10-episode series, and VFX house Artifex Studios added 685 shots amid COVID workflow adjustments in 2020.

CoSA VFX Animation

Resident Alien has offered multiple opportunities for CoSA VFX‘s Animation team to shine, from environments to CG characters and everything else in between, and also for the team to have quite a bit of fun in the process.

“It was fun. They let us experiment to find the character. We did a lot of things that were probably even a little outlandish, looking back at it,” comments CoSA animator Roger Vizard, adding that Harry the alien had a lot of emotional values that the team tried to touch upon in the animation. “He’s a brilliant character to work with.”

Getting the alien on a horse was probably the biggest challenge the animators faced, as the nine-foot-tall alien was certainly larger than the stunt actor riding the horse in the raw footage. To accomplish this, the team watched and studied how the horse would react and animated Harry with matching interactions. Additionally, the horse was match-moved for seamless integration with dynamic elements and the animated character. The sequence features in the third episode, where an ordinarily classic Western moment becomes something else entirely.

CoSA’s lead animator Teri Shellen commented that he hopes this kind of work continues to come to the studio, as these types of shots are very rewarding for the animators to tackle. “When we get episodes like this, they’re treasured, because we actually really get to get into character development and really push that field in our studio.”

CoSA also worked on many of the environment shots and in particular the pilot episode’s ship crashing to earth after being hit by lightning.

Artifex

Artifex was involved early in setting key environments for “Resident Alien” and continued to add embellishments or build-outs depending on scene requirements. In episode 6, the studio augmented stock plates to add sweeping snow-covered mountain ranges, while episode 8 saw a build-out of practical glaciers into a full environment.

The glacier sequence in episode 8 in particular demanded that virtually every moment was touched in some way by the VFX team. Artifex used matte painting, CG extensions, smoothing and alteration of the set, and texture work to subtly add snow and ice.

Artifex also did creature animation: in episode 7 their team created a CGI octopus with which Alan Tudyk interacts through aquarium glass. Their conversation suggests that Harry’s species and octopuses are closely related, something which Harry himself later states to Asta Twelvetrees. The octopus is voiced by Nathan Fillion, who previously co-starred with Tudyk in the 2003 series Firefly and its concluding film Serenity. Fillion is not the only fan-favourite guest star on the show; sci-fi acting legend Linda Hamilton plays General McCallister, a high-ranking U.S. military officer.

The photoreal octopod inspired a later scene in episode 9, where Artifex had to supplant Tudyk’s leg with a tentacle. For the scene, the team painted out what was visible of Alan Tudyk’s leg and added the CG tentacle, complete with flailing animation and interaction with the bacon.

“The animation had to find a sweet spot that suited the vocal performance accompanying it,” said Artifex VFX Supervisor Rob Geddes. “We wanted to be careful to provide a grabbing visual without taking the viewer out of the moment by being too intentionally cartoonish or farcical.”

For a Day-for-Night (DFN) sequence, the scene was shot in full daylight but needed to shift in the edit into a night sequence. This required extensive roto, with matte-painted elements to introduce lit building interiors and streetlights.

Rounding out the work was the inside of the spaceship in episode 10, the season finale. Artifex designed and integrated the spaceship interior inside and around the green screen set.

Inside the Spaceship

The project spanned roughly a year due to delays imposed by COVID, with both internal and external adjustments being made to reflect the realities of working remotely.

Hardware/software used during the project included Maya and V-Ray for modeling, animation, and rendering; SynthEyes for tracking; Photoshop for matte painting; Nuke for compositing; ftrack for scheduling and production tracking; and Meshroom for photogrammetry.

Season 2

Executive producer and showrunner Chris Sheridan (Family Guy) and his talented creative staff have announced that the show has just been picked up for a second season and will return soon to Syfy.


Congrats to the Winners of the VES awards


The Visual Effects Society held the 19th Annual VES Awards, the prestigious awards recognizing outstanding visual effects artistry and innovation in film, animation, television, commercials, and video games. The awards celebrate the amazing work of VFX supervisors, VFX producers, and all the artists who bring the work to life.

Winners of the 19th Annual VES Awards are as follows:

Outstanding Visual Effects in a Photoreal Feature
THE MIDNIGHT SKY
Matt Kasmir
Greg Baxter
Chris Lawrence
Max Solomon
David Watkins


Outstanding Supporting Visual Effects in a Photoreal Feature
MANK
Wei Zheng
Peter Mavromates
Simon Carr
James Pastorius


Outstanding Visual Effects in an Animated Feature
SOUL
Pete Docter
Dana Murray
Michael Fong
Bill Watral


Outstanding Visual Effects in a Photoreal Episode
THE MANDALORIAN; The Marshal
Joe Bauer
Abbigail Keller
Hal Hickel
Richard Bluff
Roy Cancino


Outstanding Supporting Visual Effects in a Photoreal Episode
THE CROWN; Gold Stick
Ben Turner
Reece Ewing
Andrew Scrase
Jonathan Wood

Outstanding Visual Effects in a Real-Time Project
GHOST OF TSUSHIMA
Jason Connell
Matt Vainio
Jasmin Patry
Joanna Wang

Outstanding Visual Effects in a Commercial
WALMART; Famous Visitors
Chris “Badger” Knight
Lori Talley
Yarin Manes
Matt Fuller

Outstanding Visual Effects in a Special Venue Project
THE BOURNE STUNTACULAR
Salvador Zalvidea
Tracey Gibbons
George Allan
Matthías Bjarnason
Scott Smith

Outstanding Animated Character in a Photoreal Feature
THE ONE AND ONLY IVAN; Ivan
Valentina Rosselli
Thomas Huizer
Andrea De Martis
William Bell


Outstanding Animated Character in an Animated Feature
SOUL; Terry
Jonathan Hoffman
Jonathan Page
Peter Tieryas
Ron Zorman


Outstanding Animated Character in an Episode or Real-Time Project
THE MANDALORIAN; The Jedi; The Child
John Rosengrant
Peter Clarke
Scott Patton
Hal Hickel


Outstanding Animated Character in a Commercial
ARM & HAMMER; Once Upon a Time; Tuxedo Tom
Shiny Rajan
Silvia Bartoli
Matías Heker
Tiago Dias Mota

Outstanding Created Environment in a Photoreal Feature
MULAN; Imperial City
Jeremy Fort
Matt Fitzgerald
Ben Walker
Adrian Vercoe


Outstanding Created Environment in an Animated Feature
SOUL; You Seminar
Hosuk Chang
Sungyeon Joh
Peter Roe
Frank Tai

Outstanding Created Environment in an Episode, Commercial, or Real-Time Project
THE MANDALORIAN; The Believer; Morak Jungle
Enrico Damm
Johanes Kurnia
Phi Tran
Tong Tran

Outstanding Virtual Cinematography in a CG Project
SOUL
Matt Aspbury
Ian Megibben

Outstanding Model in a Photoreal or Animated Project
THE MIDNIGHT SKY; Aether
Michael Balthazart
Jonathan Opgenhaffen
John-Peter Li
Simon Aluze

Outstanding Effects Simulations in a Photoreal Feature
PROJECT POWER
Yin Lai Jimmy Leung
Jonathan Edward Lyddon-Towl
Pierpaolo Navarini
Michelle Lee

Outstanding Effects Simulations in an Animated Feature
SOUL
Alexis Angelidis
Keith Daniel Klohn
Aimei Kutt
Melissa Tseng




Outstanding Effects Simulations in an Episode, Commercial, or Real-Time Project
LOVECRAFT COUNTRY; Strange Case; Chrysalis
Federica Foresti
Johan Gabrielsson
Hugo Medda
Andreas Krieg

Outstanding Compositing in a Feature
PROJECT POWER
Russell Horth
Matthew Patience
Julien Rousseau

Outstanding Compositing in an Episode
LOVECRAFT COUNTRY; Strange Case; Chrysalis
Viktor Andersson
Linus Lindblom
Mattias Sandelius
Crawford Reilly

Outstanding Compositing in a Commercial
BURBERRY; Festive
Alex Lovejoy
Mithun Alex
David Filipe
Amresh Kumar

Outstanding Special (Practical) Effects in a Photoreal or Animated Project
FEAR THE WALKING DEAD; Bury Her Next to Jasper’s Leg
Frank Iudica
Scott Roark
Daniel J. Yates

Outstanding Visual Effects in a Student Project
MIGRANTS
Antoine Dupriez
Hugo Caby
Lucas Lermytte
Zoé Devise

VES Special Awards

Cate Blanchett presented the VES Lifetime Achievement Award to award-winning filmmaker Sir Peter Jackson – along with a star-studded tribute from Andy Serkis, Naomi Watts, Elijah Wood, Sir Ian McKellen, James Cameron and Gollum.


Sacha Baron Cohen presented the VES Award for Creative Excellence to acclaimed visual effects supervisor, second unit director, and director of photography Robert Legato, ASC.


VP with Digital Humans & Darren Hendler


Epic Games has released the second volume of its Virtual Production Field Guide, a free in-depth resource for creators at any stage of the virtual production process in film and television. This latest volume of the Virtual Production Field Guide dives into workflow evolutions including remote multi-user collaboration, new features released as well as what’s coming this year in Unreal Engine 5, and two dozen new interviews with industry leaders about their hands-on experiences with virtual production.

One such contributor is Darren Hendler at Digital Domain.

Darren Hendler

Hendler is the Director of Digital Domain’s Digital Humans Group. His job includes researching and spearheading new technologies for the creation of photoreal characters. Hendler’s credits include Pirates of the Caribbean, FF7, Maleficent, Beauty and the Beast, and Avengers: Infinity War.

Can you talk about your role at Digital Domain?

Hendler: My background is in visual effects for feature films. I’ve done an enormous amount of virtual production, especially in turning actors into digital characters. On Avengers: Infinity War I was primarily responsible for our work turning Josh Brolin into Thanos. I’m still very much involved in the feature film side, which I love, and also now the real-time side of things.

Josh Brolin from the Thanos shoot

Digital humans are one of the key components in the holy grail of virtual production. We’re trying to accurately get the actor’s performance to drive their creature or character. There’s a whole series of steps of scanning the actor’s face in super-high-resolution, down to their pore-level details and their fine wrinkles. We’re even scanning their blood flow in their face to get this representation of what their skin looks like as they’re busy talking and moving.

The trick to virtual production is how you get your actor’s performance naturally. The primary technique is helmet cameras with markers on their face and mocap markers on their body, or an accelerometer suit to capture their body motion. That setup allows your actors to live on set with the other actors, interacting, performing, and getting everything live, and that’s the key to the performance.

The biggest problem has been the quality of the data coming out, not necessarily the body motion but the facial motion. That’s where the expressive performance is coming from. Seated capture systems get much higher-quality data. Unfortunately, that’s the most unnatural position, and their face doesn’t match their body movement. So, that’s where things are really starting to change recently on the virtual production side.

Where does Unreal Engine enter the pipeline?

Hendler: Up until this moment, everything has been offline with some sort of real-time form for body motion. About two or three years ago, we were looking at what Unreal Engine was able to do. It was getting pretty close to the quality we see on a feature film, so we wondered how far we could push it with a different mindset.

We didn’t need to build a game, but we just wanted a few of these things to look amazing. So, we started putting some of our existing digital humans into the engine and experimenting with the look, quality, and lighting to see what kind of feedback we could get in real-time. It has been an eye-opening experience, especially when running some of the stats on the characters.

At the moment, a single frame generated in Unreal Engine doesn’t produce the same visual results as a five-hour render. But it’s a million times faster, and the results are getting pretty close. We’ve been showing versions of this to a lot of different studios. The look is good enough to use real-time virtual production performances and go straight into editorial with them as a proxy.

The facial performance is not 100 percent of what we can get from our offline system. But now we see a route where our filmmakers and actors on set can look at these versions and say, “Okay, I can see how this performance came through. I can see how this would work or not work on this character.”

How challenging is it to map the human face to non-human characters, where there’s not always a one-to-one correlation between features?

Hendler: We’ve had a fantastic amount of success with that. First, we get an articulate capture from the actor and map out their anatomy and structures. We map out the structures on the other character, and then we have techniques to map the data from one to the other. We always run our actors through a range of motions, different expressions, and various emotions. Then we see how it looks on the character and make adjustments. Finally, the system learns from our changes and tells the network to adjust the character to a specific look and feel whenever it gets facial input close to a specific expression.

At some point, the actors aren’t even going to need to wear motion capture suits. We’ll be able to translate the live main unit camera to get their body and facial motion and swap them out to the digital character. From there, we’ll get a live representation of what that emotive performance on the character will look like. It’s accelerating to the point where it’s going to change a lot about how we do things because we’ll get these much better previews.
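The retargeting Hendler describes is, at its simplest, a learned mapping between two facial parameter spaces built from paired range-of-motion examples. The toy sketch below fits such a mapping with ordinary least squares on synthetic data; it is an illustration of the general idea only, not Digital Domain's system:

```python
# Toy sketch, not Digital Domain's pipeline: given paired examples of actor
# facial parameters and hand-adjusted character rig parameters (e.g. from a
# range-of-motion session), fit a simple linear retargeting map with least squares.
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_actor, n_character = 200, 50, 60   # made-up dimensions

actor_params = rng.normal(size=(n_examples, n_actor))           # captured expressions
true_map = rng.normal(size=(n_actor, n_character))              # stand-in for artist retargeting
character_params = actor_params @ true_map + 0.01 * rng.normal(size=(n_examples, n_character))

# Solve for the mapping that best reproduces the artist-approved character poses.
learned_map, *_ = np.linalg.lstsq(actor_params, character_params, rcond=None)

new_capture = rng.normal(size=(1, n_actor))      # an unseen frame of actor parameters
retargeted = new_capture @ learned_map           # drive the character with it
print(retargeted.shape)                          # (1, 60)
```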

How do you create realistic eye movement?

Hendler: We start with an actor tech day and capture all these different scans, including capturing an eye scan and eye range of motion. We take a 4K or 8K camera and frame it right on their eyes. Then we have them do a range of motions and look-around tests. We try to impart as much of the anatomy of the eye as possible in a similar form to the digital character.

Thanos is an excellent example of that. We want to get a lot of the curvature and the shape of the eyes and those details correct. The more you do that, the quicker the eye performance falls into place.

We’re also starting to see results from new capture techniques. For the longest time, helmet-mounted capture systems were just throwing away the eye data. Now we can capture subtle shifts and micro eye darts at 60 frames a second, sometimes higher. We’ve got that rich data set combined with newer deep learning techniques and even deep fake techniques in the future.

Another thing that we’ve been working on is the shape of the body and the clothing. We’ve started to generate real-time versions of anatomy and clothing. We run sample capture data through a series of high-powered machines to simulate the anatomy and the clothing. Then, with deep learning, we can play 90 percent of the simulation in real-time. With all of that running in Unreal Engine, we’re starting to complete the final look in real-time.
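The real-time anatomy and clothing approximation Hendler mentions is, in broad strokes, a regression from pose parameters to the offsets an offline solver would produce. The following toy sketch trains a small network on invented data to show the shape of that workflow; it is not Digital Domain's tooling:

```python
# Toy sketch of the idea only: train a small network to approximate an offline
# cloth/anatomy solver so it can be evaluated per frame in real time.
# Pose inputs, vertex counts and the "ground truth" offsets are all invented.
import torch
import torch.nn as nn

n_pose, n_verts = 75, 5000
model = nn.Sequential(nn.Linear(n_pose, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, n_verts * 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for simulation results exported from the offline solver.
poses = torch.randn(1024, n_pose)
sim_offsets = torch.randn(1024, n_verts * 3)

for step in range(200):                          # tiny training loop for illustration
    pred = model(poses)
    loss = nn.functional.mse_loss(pred, sim_offsets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At runtime, one forward pass per frame stands in for the simulation.
with torch.no_grad():
    frame_offsets = model(torch.randn(1, n_pose)).reshape(n_verts, 3)
```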

What advice would you give someone interested in a career in digital humans?

Hendler: I like websites like ArtStation, where you’ve got students and other artists just creating the most amazing work and talking about how they did it. There are so many classes, like Gnomon and others, out there too. There are also so many resources online for people just to pick up a copy of ZBrush and Maya and start building their digital human or their digital self-portrait.

You can also bring those characters into Unreal Engine. Even for us, as we were jumping into the engine, it was super helpful because it comes primed with digital human assets that you can already use. So you can immediately go from sculpting into the real-time version of that character.

The tricky part is some of the motion, but even there you can hook up your iPhone with ARKit to Unreal Engine. So much of this has been a democratization of the process, where somebody at home can now put up a realistically rendered talking head. Even five years ago, that would’ve taken us a long time to get to.

Where do you see digital humans evolving next?

Hendler: You’re going to see an explosion of virtual YouTube and Instagram celebrities. We see them already in a single frame, and soon, they will start to move and perform. You’ll have a live actor transforming into an artificial human, creature, or character delivering blogs. That’s the distillation of virtual production in finding this whole new avenue—content delivery.

We’re also starting to see a lot more discussion related to COVID-19 over how we capture people virtually. We’re already doing projects and can actually get a huge amount of the performance from a Zoom call. We’re also building autonomous human agents for more realistic meetings and all that kind of stuff.

What makes this work well is us working together with the actors and the actors understanding this. We’re building a tool for you to deliver your performance. When we do all these things right, and you’re able to perform as a digital character, that’s when it’s incredible.

Digital Domain

Matthias Wittman, VFX Supervisor at Digital Domain, will also be part of the upcoming Real-Time Conference’s Digital Human talks, co-hosted by fxguide’s Mike Seymour (April 26/27). He will be presenting “Talking to Douglas, Creating an Autonomous Digital Human“. Also presenting will be Marc Petit, General Manager of Unreal Engine at Epic Games.

Digital Domain was also recently honored at the Advanced Imaging Society’s 11th annual awards for technical achievements. Masquerade 2.0, the company’s facial capture system, was recognized for its distinguished technical achievement. Masquerade generates high-quality moving 3D meshes of an actor’s facial performance from a helmet capture system (HMC). This data can then be transformed into a digital character’s face or their digital double, or a completely different digital person. With Masquerade, the actor is free to move around on set, interacting live with other actors to create a more natural performance. The images from the HMC worn by the actors are processed using machine learning into a high quality, per frame, moving mesh that contains the actor’s nuanced performance, complete with wrinkle detail, skin sliding and subtle eye motion, etc. We posted an in-depth story on Masquerade 2.0 in 2020.

Field Guide Vol II

The first volume of the Virtual Production Field Guide was released in July 2019, designed as a foundational roadmap for the industry as the adoption of virtual production techniques was poised to explode. Since then, a number of additional high-profile virtual productions have been completed, with new methodologies developed and tangible lessons ready to share with the industry. The second volume expands upon the first with over 100 pages of all-new content, covering a variety of virtual production workflows including remote collaboration, visualization, in-camera VFX, and animation.

This new volume of the Virtual Production Field Guide was put together by Noah Kadner who wrote the first volume in 2019. It features interviews with directors Jon Favreau and Rick Famuyiwa, Netflix’s Girish Balakrishnan and Christina Lee Storm, VFX supervisor Rob Legato, cinematographer Greig Fraser, Digital Domain’s Darren Hendler, DNEG’s George Murphy, Sony Pictures Imageworks’ Jerome Chen, ILM’s Andrew Jones, Richard Bluff, and Charmaine Chan, and many more.

As the guide comments, what really altered filmmaking and its relationship with virtual production was the worldwide pandemic. “Although the pandemic brought an undeniable level of loss to the world, it has also caused massive changes in how we interact and work together. Many of these changes will be felt in filmmaking forever.” Remote collaboration and using tools from the evolving virtual production toolbox went from a nice-to-have to a must-have for almost all filmmakers.  The Guide examines a variety of workflow scenarios, the impact of COVID-19 on production, and the growing ecosystem of virtual production service providers.

Click here to download the Virtual Production Field Guide as a PDF, or visit Epic’s Virtual Production Hub to learn more about virtual production and the craft of filmmaking.


ILM’s look behind the scenes of The Mandalorian’s Stagecraft


ILM has posted a great video about the virtual production work on season two of The Mandalorian, featuring interviews with filmmakers Jon Favreau, Dave Filoni, Deborah Chow, Bryce Dallas Howard, Peyton Reed, and Robert Rodriguez.

The video includes never-before-seen BTS clips from the making of season 2 and dives into StageCraft 2.0 and Helios, the ILM real-time cinematic render engine used to create the in-camera visual effects and environments in the volume for the series.


Also: check out our fxguide story on The Mandalorian’s Virtual Production Volume here.


Virtual Production at Stargate Studios


Sam Nicholson ASC and Stargate Studios have released a video outlining their work in shooting virtual productions. The test piece, shot by Jody Eldred, explores Stargate Studios’ newest virtual production techniques using their ThruView process. ThruView is an integrated system of kinetic lighting, mobile outside-in tracking, high-speed playback, maximum-resolution, high-frequency LED screens, and a cutting-edge camera system built around the Blackmagic URSA Mini Pro 12K.

Stargate previously used a similar setup for the virtual production on HBO’s series Run. That show required the characters to travel across the United States on a train, but production never intended to leave Toronto. “During preproduction, we proved that with multiple Blackmagic Design DeckLink 8K Pro cards we could stream 10 simultaneous streams of 8K footage to the Epic Unreal game engine for real-time playback on forty 4K monitors on a 150′ long train set.”

Utilizing a custom tracking and lighting tool developed in-house, the system was able to deliver a photorealistic moving image, displayed through the train windows, with animated lighting to match the plate, tracked to and composited with the shot in real-time. “We produced the entire series with ‘ThruView’.”

The new test uses the Blackmagic URSA Mini Pro 12K which was not available at the time of Run. The URSA Mini Pro 12K is an impressive digital film camera with a 12,288 x 6480 (12K) pixel Super 35 sensor and 14 stops of dynamic range, built into the URSA Mini body. The combination of 80 megapixels per frame, new color science, and the option of Blackmagic 12 bit RAW made the 12K camera appealing to Sam Nicholson and the Stargate team. The oversampling of the 12K gives virtual production teams excellent 8K and 4K images with correct skin tones and the detail of high-end still cameras. The URSA Mini Pro 12K can shoot at 60 fps in 12K, 120 fps in 8K, and up to 240 fps in 4K Super 16. URSA Mini Pro 12K features an interchangeable PL mount, as well as built-in ND filters, dual CFast and UHS-II SD card recorders, and a SuperSpeed USB-C expansion port.

Stargate has been working on VP for many years now

The 12K camera has a new sensor design with equal numbers of red, green, and blue pixels, rather than a typical Bayer pattern with twice as many green pixels as red or blue. This is significant, as one of the issues facing virtual production is the frequency response of the LED panels and the quality of the light they emit. Many were not designed or built to be production lighting panels. Industry high-end LED production lights such as the ARRI SkyPanel, which a team like Stargate Studios might typically use, were designed to help faithfully reproduce skin tones in a studio when captured by a digital camera sensor. While virtual production LED walls are built from roughly the same LED technology as professional studio lights, they were not designed with the same precise frequency and spectral response. This makes Blackmagic’s improved color capture and management in the camera, along with matching tools such as Resolve, very important in achieving high-quality skin tone results.


Vision getting ahead in WandaVision


WandaVision is a playful, mysterious, and action-packed series from Marvel on Disney+ that picks up shortly after Avengers: Endgame. Tara DeMarco was the VFX supervisor tasked with producing for the small screen all of the visual effects and ‘synthezoid’ complexity seen in the major tentpole feature films. DeMarco is an award-winning VFX supervisor known for her cutting-edge work as a Flame artist. Her 18-year career in visual effects is grounded in high-end commercial work, having composited on Emmy, Cannes Lion, and D&AD award-winning work. She had a 14-year run at The Mill, as well as freelancing with several other studios such as Method, Brickyard, and Psyop.

Paul Bettany portrayed, or was the basis of, multiple versions of his Vision character in WandaVision, from a dead, dismembered synthezoid to a newly rebuilt white Vision that fights... well, Vision.

Six years ago fxguide spoke to Trent Claus of Lola VFX when the company did the first Vision in Avengers: Age of Ultron, and it was this version of Vision that DeMarco and her team used as their ‘definitive Vision’. They also extensively referenced the close-ups of Vision in Avengers: Infinity War (2018). “We knew we had to match the features, and that Vision had to have the same feel for black and white and for color,” explains DeMarco. “That being said, we had several vendors execute Vision in this show and we didn’t insist that they all follow the same methodology.”

For Age of Ultron, Vision’s look was nearly entirely done by Lola VFX. For WandaVision, DeMarco had multiple vendors on the TV series. Lola VFX was joined by Digital Domain, ILM, Monsters Aliens Robots Zombies (MARZ), and others, all of whom worked on hero versions of Vision across the series in his various forms. “Each facility has artists with different strengths. So we insisted that key parts of Vision look the same, meaning the amount of the CGI that you see in the depth of his panels and the sheen of the skin or the sheen of the metal all had to be the same, but we let them work out their own methodologies to achieve that,” she adds.

Purple Vision (B/W vision)

Unlike in the original Vision approach, Paul Bettany did not wear a prosthetic headpiece for filming the B/W version of Vision. The actor himself requested not to have the headpiece fitted that would have covered his ears, so as to be better able to hear and react on the set. “MARZ did episodes one, two, and three, and they’re a wizard CG tracking facility,” DeMarco comments. “MARZ did that amazing mirror mask for Watchmen (2019) and they did some tests for us.” Based on those tests, MARZ and Screen Scene VFX both came on as sitcom Vision vendors in addition to Lola VFX.

The first episode of WandaVision was filmed in front of a live studio audience and finished as a 4:3 black-and-white master. The production sought to make the early episodes as faithful as possible to the genre of early sitcoms; even the crew were in period costume on the sound stage set. Cinematographer Jess Hall used 47 different camera lenses for the seven different time periods covered in WandaVision, many of which were modern lenses custom-modified to keep characteristics of the actual period lenses. Lighting was adjusted to align with the periods being portrayed. Tungsten lights were used in filming the 1950s-1970s era episodes, as they were commonly used in production during that period, while LED lights were used for scenes depicting the modern era.

Since these early episodes featured black-and-white scenes with Paul Bettany looking like the modern-day Vision, the production settled on a purple makeup rather than the traditional Vision red for the actor. This seemingly odd color choice reads in black and white more believably as the tone one would expect from red. It was not uncommon back in the era of B/W television for such tricks to be used, even down to using blue or purple lipstick on female lead actresses instead of red.
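A quick way to see why a nominally 'wrong' hue can read correctly in black and white is to look at how a standard luma conversion weights the channels: green dominates, so very different colors can land on unexpected grays. The swatch values below are invented stand-ins, and this is only a toy illustration, not the production's actual color pipeline:

```python
# Toy illustration only (not the production's color pipeline): how different
# hues collapse to gray under a standard Rec. 709 luma conversion. The RGB
# swatch values below are invented stand-ins, not measured makeup colors.
def rec709_luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

swatches = {
    "saturated red":   (200, 30, 30),
    "purple stand-in": (140, 60, 150),
    "skin-tone beige": (210, 170, 140),
}
for name, (r, g, b) in swatches.items():
    print(f"{name:>15}: luma {rec709_luma(r, g, b):5.1f} / 255")
# The takeaway is only that the gray a hue lands on is dominated by how much
# green energy it carries, not by its name, which is why B/W makeup choices
# are settled by camera tests rather than by the nominal color.
```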

On set, Bettany had a set of tracking dots that were used to track the correct visual elements of Vision onto his face. MARZ used advanced machine learning tools to do the dot removal along with extensive compositing. The process was complex, as DeMarco was highly focused on maintaining Bettany’s performance as closely as possible. This required complex 3D work, digital makeup techniques, and compositing.

Unlike in the feature films, where Bettany had worn a prosthetic in addition to colored makeup, for WandaVision there was nothing covering his ears or the top of his head. “We found that because we were replacing the prosthetic anyway, and because the final metal crown and the panels of Vision are slightly narrower than the prosthetic would have been, that there was no point,” explained DeMarco. “Each vendor would be doing some paint cleanup of the background anyway. It was Paul’s desire to be more comfortable and to hear better on set.” The production did compromise with a bald cap that had the same hue as his actual final skin color, and a tracking marker pattern based on what the artists would need to track later.

For these ‘classic B/W’ episodes, the production studied practical effects references from the early days of visual effects in television and film. In the first three episodes, says DeMarco, “we used puppeteered props, practical film cuts, and rewind effects.” The filmmakers leaned into the effects used during the era that inspired the episodes. “We used contemporary technology to help remove the wires and smooth the cuts,” says DeMarco, “but many of the effects were shot in-camera. We occasionally used CG to bolster the storytelling in a beat where we were missing a wire gag. For example, Wanda’s kitchen in the first episode is a blend of practical puppeteered floating objects and CG ones created later to fill out the scene.” DeMarco had done numerous Super Bowl commercials prior to WandaVision, and even with the huge budgets associated with such key spots, “we wouldn’t use wire work all the time because you just don’t have time to create every object in CG,” she comments. “If you know that you’re doing something with a short turnaround, we might make a prop ahead of time and then have someone hanging it on wires for a character to grab or have contact with… I just never thought I would be doing that with Marvel,” she laughs.

Red Vision

The color episodes of WandaVision meant Bettany returning to the red makeup for the later episodes. In all cases, the production was exacting in making sure that anything that was replaced with CG matched the plate performance of Bettany. “Sometimes it meant going back to his original eyelids and then beauty-paint smoothing of the skin underneath, but keeping the makeup. And sometimes it meant replacing sections with a CG panel. It really depended shot to shot on the execution,” DeMarco comments. “We had a second supervisor on the show, Sarah Elm, who was full-time on Vision for a very long time. Sarah was intimately familiar with all of the parts of the face and what needed to be preserved, or where we might want to maintain a specular highlight from the makeup, and which parts get fully replaced.” Generally speaking, the VFX crews kept Bettany’s eyes, his nose, and mouth, and replaced pretty much everything else.

One interesting aspect is Vision’s eyes. In line with the original design, Vision has complex digital radial graphics in his eyes. In many scenes this required digital contact lenses to be composited into Paul Bettany’s eyes. While Bettany normally wears glasses, he did not wear any actual contact lenses; all of Vision’s eyes were done as visual effects composites.

Paul Bettany did do a FACS session, and the effects team accessed prior scan data of the actor. One of the advantages of the Marvel effects producing team is its effective archiving and data management, which allows key data such as facial scans to be both accessible and sensibly shared between different facilities.

The complexity of Vision’s ‘skin’ is highlighted by its need to match the actor’s skin folds and wrinkles, making sure the actor’s facial expression is accurately mapped to the final synthezoid face. The material of Vision’s surface skin can move, but it must never look like human skin with red makeup on its surface. This means the face needs to contract and stretch but without showing pores or wrinkles.

White Vision

White Vision features prominently in the final battle episode. Digital Domain’s digital double team travelled to Atlanta to help supervise the elaborate final battle. “Digital Domain did digital human versions of both red Vision and white Vision,” says DeMarco. “They took the model from the last film and updated it and gave it some more modern fidelity for red Vision.” The base data was meant to be the same for white Vision, but it turned out to be virtually a whole new character. “The design is quite different for what is happening in the panels on his head and also in the costume.” Even with the “great digi-doubles” that were created, “Paul has so much in his performance that we needed to preserve, absolutely whenever possible.” When white Vision faces off with red Vision, the production did film everything with him twice, once for red and once for white.

The base Vision data was re-purposed from an existing scan of the actor, with the addition of the FACS session. Most of the significant actors in the show were scanned by the production as standard. Digital Domain set up their own scanning station in Atlanta and did their own additional Vision scans to get the skin texture and fine detail data they needed. Digital Domain’s VFX supervisor was Marion Spates, and the digital effects (DFX) supervisor was R. Matt Smith. This final Digital Domain Vision was then shared with all the vendors, but without Digital Domain’s custom rigging which, like nearly all facilities’ rigs, is highly proprietary. For example, ILM and Rodeo FX both had access to the Vision data to work on the synthezoid’s near destruction by the Hex, as he attempts to leave the town.

Digital doubles of both red Vision and white Vision were required for the flying and fighting scenes in the last episode of the season, and for the stand-off in the library.

The Third Floor was behind the previs, techvis, and postvis work on WandaVision. Patrick Haskew was The Third Floor’s visualization supervisor. The Third Floor has extensive experience working within the MCU, having done a vast number of previous Marvel projects, and the company prides itself on providing far more than just simple previs. “The Third Floor was instrumental in helping us figure out what we needed to film practically, what we absolutely had to have with an actor on set, and what we could do later in CG,” compliments DeMarco. Thanks to the complex pre-production, the director had a very strong idea on set of what he wanted to achieve, and the DP knew in advance how the lighting needed to match the particular narrative at that point in the series.

DeMarco comments that it was “nice to have something to match from the previous films for Vision. There are so many looks in the show that we developed fresh. The sitcom look and Hex look and Vision’s coming apart and reforming on the operating table. With all the new things we had to establish, it was lovely to have really great references to match for some of Vision’s LookDev.”

Degrading and Disintegrating Vision

Rodeo FX took on another challenge, that of having Vision start to disintegrate when he tries to leave Wanda’s illusion and pass through the town’s Hex barrier.

Rodeo FX used a range of 2D and 3D passes to build up the complex visuals that denote Vision degrading outside the barrier. Rodeo FX was given great creative freedom to explore a large number of styles by DeMarco. Chief amongst the challenges was to not make Vision’s disintegration look like the ‘Snap’ from Avengers nor cross over into looks generated for the metaverse in Antman or the magical mystery ride visuals of Dr. Strange, or any of the other complex forcefields, magic, weaponry shields seen in other films such as Guardians or Black Panther. To be honest, the main goal was to not look like any of the previous movies,” comments Rodeo’s VFX supervisor Julien Héry with a smile.

Pixel sorting is the process of isolating a horizontal or vertical line of pixels in an image and sorting their positions based on any number of criteria. For instance, pixels’ positions may be sorted by each pixel’s luminosity, hue, or saturation. The resulting streaking effect was found to produce a look similar to video tearing from the VHS days, but in a fresh and modern way. The pixel sorting was key but only one part of a highly complex visual language that Rodeo’s Héry had to develop and use to tell this most unusual of Marvel stories. Developing the unique ‘television’ signature of the Hex and how it interacted with Vision and characters such as Monica Rambeau took Rodeo nine months. The company contributed to eight of the show’s nine episodes.
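As a concrete illustration of the technique described above (and not of Rodeo FX's Houdini setup), here is a minimal, simplified pixel sort in NumPy: within each row, pixels brighter than a threshold are re-ordered by luminosity, which produces the characteristic streaking:

```python
# Minimal, simplified pixel sort (illustration of the general technique only).
import numpy as np

def pixel_sort_rows(img, threshold=0.5):
    """img: float32 RGB array in [0, 1], shape (H, W, 3). Returns a streaked copy."""
    out = img.copy()
    luma = img @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    for y in range(img.shape[0]):
        idx = np.where(luma[y] > threshold)[0]   # which pixels in this row get sorted
        if idx.size < 2:
            continue
        order = np.argsort(luma[y, idx])         # sort the selected pixels by brightness
        out[y, idx] = img[y, idx[order]]         # write them back in sorted order
    return out

if __name__ == "__main__":
    test = np.random.rand(64, 64, 3).astype(np.float32)
    streaked = pixel_sort_rows(test, threshold=0.6)
    print(streaked.shape)
```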

One of the issues was turning very flat, 2D visual effects such as pixel sorting into a 3D effect that could be seen to be pulling at the solid form of Vision. “Everything was started in Houdini, but then it was a lot of custom-built effects based on the concept art and the mood board we prepared,” explains Héry. “We had to engineer the 3D effects into more of a volumetric solution, so we had perspective and depth on all those lines.” The net result was a narrative impression of almost breaking a fluid’s surface tension.

As with the other vendors, Rodeo tried to preserve as much of Paul’s performance as possible: “whenever you see his face, it is his lips, his eyes, and what we did is a lot of cleanup work to make it seem very smooth and then adding all the CG panels,” Héry explains. Unlike the other vendors, Rodeo had to disintegrate Vision.

The main breakup of Vision’s body has strips coming away, like giant pixels, fracturing the surface of Vision and leaving a smoldering, almost burning edge. “There is almost an ash, as much as we tried to stay away from fire and ashes (to avoid the ‘Snap’ look), and this is combined with a smearing layer, and a moire pattern in the barrier itself,” Héry outlines. As many of the Rodeo team are talented young artists, not all of them had extensive experience with analog broadcast equipment, so Héry got a giant magnet and physically played with a CRT tube television to allow the whole team to see for themselves some of the reference effects and chromatic distortions from the old days of television referenced by WandaVision. This sense of magnetic attraction then influenced the look of Vision pulling away from the void, “almost like a wind tunnel effect, but the trick was for this to actually drive the smearing layer in comp in Nuke,” he adds.

The Hex barrier was highly complex, as it needed to be visually solved differently for Vision’s attempted breakout compared to Monica Rambeau, who breaks into the town. Teyonah Parris plays Monica Rambeau, who was introduced to audiences in Captain Marvel. In WandaVision she inhabits the town in several of the sitcom periods, and as such, when she breaks back into the town, Rodeo FX had to represent all of these states in a barrier that had a depth or thickness not visually seen in the Vision shots. Whereas Vision was of the Hex, Rambeau needed to seem like she was being enveloped by the Hex as a volume. Héry’s logic for this sequence was therefore somewhat more agile than for Vision. “When we started this sequence, we did not know creatively where we were going to finish, so we wanted it to be primarily a compositing solution that was as procedural as possible so it could be flexible and quick to change,” he explains.

The raw plate started in Nuke for greenscreen extraction; the clip then went into Houdini to produce the various layers of elements triggered by the live action. These multiple layers were then compiled together in Nuke, where the smearing between each of the different Monicas was performed. Finally, the shot was transferred into Flame, “where we built the whole environment and finished the shot,” Héry explains.

Rodeo FX completed 348 shots across 17 sequences with a crew of 343. The work was both technically complex and visually challenging, involving extensive experimentation to produce original imagery that is connected to the main MCU and yet nothing like anything that has been done before.

Many solutions were explored, including a more literal LiDAR or point-cloud approach for the barrier, but while it looked “really cool against a black background,” says Héry, “against a more colorful environment you kind of lose what made it look cool, and we were meant to have a very bright and colorful background world. We started with point clouds and volumetrics, but you could not really tell what you were looking at in the end, so it was missing out on important storytelling, so we did something very different.”

Away from Vision:

Filming began in Atlanta, Georgia, in November 2019. In March 2020, production was halted due to the COVID-19 pandemic. Character scanning and LiDAR were done by SCANable, which performed 3D scans of sets, actors, props, and other items for the series. Digital asset management was handled by 5th Kind. Additional 3D scanning was done by Gentle Giant Studios.

Major additional visual effects work was provided throughout the season, in addition to the companies above, by vendors such as Rise FX, Luma, Mr. X, Capital T, Weta Digital, Cantina Creative, and The Yard VFX, all of whom did extensive work. These companies did VFX across the show, especially on major sequences without Vision, such as Wanda’s fight, the witchcraft, the genre transitions, and the slow-speed Quicksilver effects.


The Rise of Real-Time Digital Humans: Pulse Panel.


This week fxguide’s Mike Seymour hosted an Epic Games panel discussion on the rise of real-time digital humans. In addition to his work at fxguide, Mike Seymour is a longtime researcher and writer on digital humans. Mike was joined by Jerome Chen of Sony Pictures Imageworks, Amy Hennig of Skydance Media, Isaac Bratzel of Brud (makers of virtual influencer Lil Miquela), and Vladimir Mastilović of 3Lateral, which is now part of Epic Games.


In addition to the main panel, there was a great discussion on believable eyes that could not make it into the main event.


This episode of the Pulse was one of the most popular so far and watched around the world.

fxguide will be following up with more deep-dive content as the Open Beta of the Epic MetaHuman Creator approaches release.


Oscars Nominees 2021


Among the Oscar nominees in 2021, MANK received the most nominations with 10, including nominations in the categories of Best Picture, Actor in a Leading Role, and Directing. Other 2021 Oscar nominees with multiple nominations include THE FATHER, JUDAS AND THE BLACK MESSIAH, MINARI, NOMADLAND, SOUND OF METAL, and THE TRIAL OF THE CHICAGO 7 with six nominations each.

Our heartfelt congrats to the nominees and all the teams of artists who worked on these great films.

Visual Effects

LOVE AND MONSTERS
Matt Sloan, Genevieve Camilleri, Matt Everitt and Brian Cox

THE MIDNIGHT SKY
Matthew Kasmir, Christopher Lawrence, Max Solomon and David Watkins

MULAN
Sean Faden, Anders Langlands, Seth Maury and Steve Ingram

THE ONE AND ONLY IVAN
Nick Davis, Greg Fisher, Ben Jones and Santiago Colomo Martinez

TENET
Andrew Jackson, David Lee, Andrew Lockley and Scott Fisher

Animated Feature Film

ONWARD

Dan Scanlon and Kori Rae

OVER THE MOON
Glen Keane, Gennie Rim and Peilin Chou

A SHAUN THE SHEEP MOVIE: FARMAGEDDON
Richard Phelan, Will Becher and Paul Kewley

SOUL
Pete Docter and Dana Murray

WOLFWALKERS
Tomm Moore, Ross Stewart, Paul Young and Stéphan Roelants

Short Film (Animated)

BURROW
Madeline Sharafian and Michael Capbarat

GENIUS LOCI
Adrien Mérigeau and Amaury Ovise

IF ANYTHING HAPPENS I LOVE YOU
Will McCormack and Michael Govier

OPERA
Erick Oh

YES-PEOPLE
Gísli Darri Halldórsson and Arnar Gunnarsson

Cinematography

JUDAS AND THE BLACK MESSIAH
Sean Bobbitt

MANK
Erik Messerschmidt

NEWS OF THE WORLD
Dariusz Wolski

NOMADLAND
Joshua James Richards

THE TRIAL OF THE CHICAGO 7
Phedon Papamichael

The 2021 Oscars will air live on Sunday, April 25 at 8 p.m. ET / 5 p.m. PT and will be televised live in more than 225 countries and territories worldwide.


Mank’s Monochrome Effects


Mank is nominated for the VES award for Outstanding Supporting Visual Effects in a Photoreal Feature. The film follows screenwriter Herman J. Mankiewicz’s tumultuous development of Orson Welles’ iconic masterpiece Citizen Kane (1941).

Several VFX supervisors were nominated: Simon Carr (Territory Studio), Wei Zheng (Artemple), and James Pastorius (Savage VFX), along with Peter Mavromates. In many respects, director David Fincher could also have been nominated for VFX. The director is himself an expert in visual effects and was a very active contributor to the film’s effects work. Peter Mavromates is a long-time collaborator with David Fincher and was officially the Co-producer, Post Supervisor, and VFX Producer on the film. Additionally, Pablo Helman at ILM was key in creating the CG animals at the San Simeon zoo.

The film had its roots going back over 20 years. “We had a false start about 20 years ago, around 1999. The script had been written at that time but it never happened for a number of reasons,”  Mavromates comments. “Probably a contributing factor was that it was black and white and if you weren’t Woody Allen in the 90s, you couldn’t shoot black and white. Even Mel Brooks had to change producers for Young Frankenstein because the studio wouldn’t let him shoot black and white and he had to find another studio.”

The movie finished filming about two weeks before the W.H.O. declared the COVID pandemic in March 2020. This meant nearly all the VFX was done using remote protocols at each of the VFX vendors.

LED Screens

Simon Carr of Territory Studio headed the team that recreated a section of Wilshire Blvd using a combination of rear projection and LED panels. Their work was some of the only VFX to be primarily completed before the pandemic.


The Wilshire Blvd sequence required the recreation of the famous Los Angeles road. As the film was almost entirely shot in B/W on the RED Ranger Helium Monochrome camera, there was no possibility of green or blue screen for the VFX outside the windows of the car when the actors were filmed in the studio, since a monochrome sensor records no color information to key from. The impossibility of using a green screen reinforced the advantage of shooting the driving scene ‘old school’ with a ‘backlit/rear projection’ approach using LED panels.

The external imagery was not done as a live dynamic LED stage setup. The exterior did not dynamically update to any camera movement. The style was much more in the historical look of the film’s period era but with the imagery generated digitally ahead of time. “When they’re driving to the beach down Wilshire Boulevard, it was actually a fully CG-created environment. It was really dealt with as sort of old-fashioned rear projection,” comments Mavromates.

Locations

Wei Zheng of Artemple headed the team responsible for many of the extended locations and sets. Mavromates describes Zheng as Fincher’s favorite digital matte painter, and Zheng did all the major matte painting and projection environments in the film. Artemple did detailed work to bring the filming locations back to the 1930s era and into the golden age of Hollywood. This was the sixth collaboration for Zheng with the director. Commenting to Deadline.com, Zheng stated, “I’m pretty lucky that since Zodiac, I’ve had the opportunity to work for David (Fincher) and Peter (Mavromates), and I know what David’s looking for, in terms of aesthetic.”


Skies

Savage VFX did the complex and flexible sky replacement through an unusually long San Simeon sequence. The company used the real-time Unreal game engine to produce the shots. Savage added clouds to the sequence where Mank meets Hearst for the first time. Instead of doing a series of matte paintings in post to match the edit, the team decided to make a 360-degree CG cloud environment with Houdini and UE4. This was used from the earliest stages of lookdev to the final comps. For the early staging and framing, an entire virtual set with all the props and set pieces was used to visualize the shots. It was then possible for the clouds to be placed consistently and crafted to the very distinct style of the film. Not only did the UE4 environment place the clouds, but the setup also matched all the cameras, framing, aperture, time of day, and focal length accurately. After the cloud shots were approved, the UE4 setup was exported to Houdini for rendering and composited in Nuke with the addition of lens flares and any additional elements.


Cinematography

Mank was shot by DOP Erik Messerschmidt, who photographed the film on Red Monochrome cameras as a native black-and-white production with period-appropriate deep focus. His work distilled the visual storytelling to lighting with shadow and texture, producing a film that looks modern but respects the filmmaking era of the story.

The film was shot primarily on Red Ranger Helium Monochrome cameras. A lot of time was spent in pre-production coming up with the final look, and this involved a signature look of blooming the blacks on screen, a common film artifact from the early days of cinema. “Erik worked to understand the formulas of when this appeared and didn’t appear in each lighting situation, and then he married that with a series of grain tests to come up with grain structures that worked with those blooming blacks,” explains Mavromates. “There are also some in-camera fades, and a small handful of in-camera f-stop pulls, so that the depth of field changes during a shot. It is very subtle, but these were things that Gregg Toland (DOP) did in the actual Citizen Kane movie.”


Earlier in his career, director David Fincher prepared by creating highly detailed technical previs, but he has since moved away from that style of previs. Mavromates was the post-production supervisor on Panic Room, and he points out that Fincher has done “very little previs since Panic Room (2002). He likes to shoot a movie with VFX like he would shoot it if it had no VFX.” He locked onto that methodology for The Curious Case of Benjamin Button (2008), because he didn’t want any shot to look like it was filmed to serve the visual effects in the film… “and this is partly because the technology has come such a long way. And so now there is a bit of a freestyle nature to the way that we film things.”

ILM did get a small set of VFX shots for Mank, for the digital animals. For one sequence where a monkey cage and digital monkeys were going to be in shot, “David put a lit up white panel behind the stone iron fence where he wanted the monkey cage to go,” says Mavromates. There was no use putting up a green screen, as the sequence was shot in black and white, but Fincher and Mavromates wanted to give ILM some help in isolating the foreground.

Plate photography shot to help ILM with the monkeys, done this way as greenscreen was not possible
Final shot

ILM did 7 creature shots with digital monkeys, giraffes, and elephants. Savage VFX did 233 shots, and Artemple did 72 shots, which were almost all digital matte paintings. Territory Studio in the UK did the Wilshire Blvd work, which ended up as 7 shots in the film, “but the amount of work they did is far larger than that number indicates.” Ollin VFX in Mexico did 228 shots, and “I have a company called Outback Post, who I call my ‘optical’ company,” comments Mavromates. Outback did many of the transitions, adjustments, split screens, and what would have been optical effects in the film age of cinema but today are actually much more complex than anything done in the optical days of film. “I would say starting with The Girl with the Dragon Tattoo, we have not been shy about saying, ‘hey, I know that’s a handheld shot, but we’re going to do a split-screen and I’m putting these two performances together and, you know, Outback, get it done.’”

The film’s VFX and post were mastered in a 6K DI, from which the final deliverable is extracted. This extra resolution allows the filmmakers to do some blow-up or repositioning work right up until the end of post-production. The Red cameras’ 8K source footage is twice the linear resolution of Netflix’s final 4K deliverable. The final 4K master is done as the very last process.
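As a rough worked example of the headroom this chain provides, assuming nominal widths of 8192 pixels for the RED 8K capture, 6144 for the 6K DI, and 3840 for a UHD deliverable (these figures are assumptions, not quoted specs):

```python
# Worked arithmetic only: reframing headroom implied by an 8K capture,
# 6K DI master and a UHD deliverable. The widths are assumed nominal values.
capture_w, di_w, deliverable_w = 8192, 6144, 3840

print(f"capture vs deliverable: {capture_w / deliverable_w:.2f}x linear resolution")
print(f"DI master vs deliverable: {di_w / deliverable_w:.2f}x "
      f"({(di_w / deliverable_w - 1) * 100:.0f}% room to blow up or reposition)")
```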

Mavromates also ran a small VFX crew in-house, because David Fincher “knows this technology and he’s not shy about using it all the way until the very last day when we can’t work on it anymore.” Mavromates recalls that on a previous film, in the very last days of the final grading, Fincher decided that the position of a light switch on the wall was slightly distracting and had his team “move it digitally over about a foot.” Adding, “the cumulative effect of all of that is just a high level of control that does actually end up on the screen and does make a difference.”
