Arnold GPU on NVIDIA RTX



Autodesk has announced the release of Arnold 6 with Arnold GPU RTX rendering to provide powerful new levels of responsiveness. The latest releases of Maya 2020 and Arnold 6 are shipping today, December 10. They contain new features such as RTX-accelerated ray tracing and AI-powered denoising.

Autodesk built Arnold GPU based on NVIDIA’s OptiX framework to take advantage of NVIDIA RTX’s RT Cores for dedicated ray tracing and Tensor Cores for AI denoising.

NVIDIA is also releasing a new Studio Driver to support these new updates to Arnold and Maya, with RTX interactivity and acceleration in final frame renders.

Maya Arnold 6 GPU render, courtesy of Lee Griggs

“We’ve worked closely with NVIDIA to optimize Arnold GPU to run on the latest RTX GPUs and RTX Server, and we’re excited to get this latest update into the hands of new and existing Arnold customers,” said Chris Vienneau, senior director of Media & Entertainment Products at Autodesk.

Arnold 6 GPU render from Maya showing denoising

With the new updates, rendering with NVIDIA RTX GPUs is multiple times faster than a typical dual-CPU rendering server. “Speed and interactivity have become more crucial than ever to the creative process,” said Vienneau. “Arnold 6 delivers performance gains that will help lighten the load with the same high-quality render results that the CPU renderer is known for.”

New features in Autodesk Arnold 6 include:

  • A unified renderer that allows users to switch seamlessly between CPU and GPU rendering (see the sketch below).
  • Support for OSL, OpenVDB volumes, on-demand texture loading, most LPEs, lights, shaders, and all cameras.
  • New USD components, including a Hydra render delegate, an Arnold USD procedural, and USD schemas for Arnold nodes and properties, now available on GitHub.
  • Several performance improvements to help maximize efficiency, including faster creased subdivisions, an improved Physical Sky shader, and dielectric microfacet multiple scattering.
  • Bifrost for Maya: Significant performance improvements, Cached Playback support, and new MPM cloth constraints.
Arnold 6 GPU render from Maya using OpenVDB volumes.
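As an illustration of the unified CPU/GPU switch listed above, the sketch below uses the Arnold Python API to render the same scene on either device. This is a minimal, hedged example: it assumes the Arnold 6+ Python bindings are on the path, and the exact option name (render_device) may vary between versions.

```python
# Minimal sketch of switching Arnold between CPU and GPU rendering via the
# Arnold Python API. Assumes Arnold 6+ bindings are importable; parameter
# names may differ by version.
from arnold import (AiBegin, AiEnd, AiASSLoad, AiNodeSetStr,
                    AiUniverseGetOptions, AiRender, AI_RENDER_MODE_CAMERA)

def render_scene(ass_file, device="GPU"):
    """Load a .ass scene and render it on the requested device."""
    AiBegin()
    AiASSLoad(ass_file)                              # load the exported scene
    options = AiUniverseGetOptions()                 # global render options node
    AiNodeSetStr(options, "render_device", device)   # "CPU" or "GPU"
    AiRender(AI_RENDER_MODE_CAMERA)                  # kick off the render
    AiEnd()

# The same scene can be rendered on either device without other changes:
# render_scene("shot_010.ass", device="CPU")
# render_scene("shot_010.ass", device="GPU")
```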

Maya 2020

Autodesk Maya 2020 is also now available, with new GPU-accelerated features:

  • GPU caching of nCloth and nParticles enables smooth, real-time playback of animations without the need for playblasts or skipped frames.
  • New Proximity Wrap deformer joins a family of GPU-accelerated deformers to make it simpler to model deformations in materials such as cloth and muscle systems.

Arnold GPU is available to try with a free 30-day trial of Arnold 6, and is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, Houdini, Cinema 4D, and Katana.

Arnold 6 GPU rendering with OSL Shaders


Apple and Microsoft Join the Academy Software Foundation


The Academy Software Foundation, a collaborative effort to advance open source software development in the motion picture and media industries, today announced that Apple and Microsoft have joined the Foundation as Premier members.

David Morin, Executive Director of the ASWF

“Filmmakers everywhere use Apple products. We are delighted to welcome Apple as a new member, and we look forward to working with them to ensure that our open source projects run well on Apple software platforms,” said David Morin, Executive Director of the Academy Software Foundation. “We are also pleased to welcome Microsoft to the Academy Software Foundation. Their membership helps us hit a significant milestone as we surpass $1M in annual funding, a solid financial base that we will use to support our open source projects, the software engineers that develop them, and the open-source community in general.”

Studios and vendors across the industry have come together to support the Academy Software Foundation, launched in August 2018 by the Academy of Motion Picture Arts and Sciences and the Linux Foundation. The Academy Software Foundation provides a neutral forum for open source software developers to share resources and collaborate on technologies for image creation, visual effects, animation, and sound.

Apple’s Tim Cook at the Mac Pro Launch

“To support the continued growth of open source software across our industry, we have the privilege of providing developers with tools that make it easier to contribute code and participate in the community,” said Rob Bredow, Executive Creative Director and Head of Industrial Light & Magic, and Governing Board Chair of Academy Software Foundation. “One of these tools is the Academy Software Foundation’s Continuous Integration (CI) build infrastructure, which streamlines development for build and runtime environments. With Apple as a new member, we hope to work with them to improve support for Apple platforms, which will continue to democratize open source software development.”

As a new Foundation member, Microsoft is committing to dedicating engineering resources to support Foundation-hosted projects and will assume roles on the Academy Software Foundation Governing Board and on its Technical Advisory Council (TAC).

“At Microsoft, our mission is to empower every person and every organization on the planet to achieve more, and it’s this mission that drives our commitment to open source,” said Tad Brockway, corporate vice president, Azure Storage, Media and Edge, Microsoft Corp. “We’re excited to become a member of the Academy Software Foundation and work together with the industry’s open source community to bring the latest cloud technologies to the Foundation and its projects.”


Olivier Orand’s music video: Thursday Night.


Director Kays Al-Atrakchi created the VFX for French EDM artist Olivier Orand’s new music video, Thursday Night. Orand is best known for his 2018 electronica album Human.

The Thursday Night music video required more than 260 VFX shots. Orand appears only briefly at the very end of the video; the film centres around the performance of actor Augie Duke. The clip is extremely visual and fast-paced, but was shot in a very modest studio. Al-Atrakchi and the team used Houdini, Fusion Studio and DaVinci Resolve Studio for the majority of the video’s VFX, editing and grading.


The entire spot was previsualized and planned out with early simple renders. This included early Blackmagic Pocket 4K camera tests and test composites.

Lead Actress Augie Duke

The live-action green screen shoot for Thursday Night was filmed on the Blackmagic Pocket Cinema Camera 4K by DOP Steven Strobel. The piece was fully written, directed and produced by Kays Al-Atrakchi.


The set was based on sci-fi kitbash panels by Oleg Ushenok. The texturing was done in Substance Painter, in conjunction with Affinity Photo. These kitbash pieces were incorporated with assets from Black Chilla Studios, and the team also used assets and textures from The French Monkey. The use of these kitbash assets and online textures allowed for fast assembly of the futuristic environments. The main goal was to allow the team to easily build scenes without spending hours and hours on the basic environment, so they could polish the piece and focus on details and finishes. The production design and costumes were by Krystyna Łoboda.

Derek Drouin edited the piece. The final edit, color grade and mastering were done in DaVinci Resolve Studio 16. The music video highlights how far the Blackmagic tools have come, especially in the area of editing and the integration of visual effects.

The team used the Video Copilot motion design tools for the film clip’s visual UI. The team extensively researched and built a library of reference material to inform the design and the graphical language of the film and its overlays. CG modeling and simulation were done in SideFX Houdini. The 3D scenes and environments were rendered using the Redshift renderer. The compositing was all done in Blackmagic Fusion 16.


Napster Co-founder Invests in Weta Digital


Sean Parker
DFree / Shutterstock.com

Weta Digital, one of the world’s premier visual effects companies, has a new partner in entrepreneur Sean Parker. Parker has made a significant investment in the company, which is known for its culture of creativity and innovation. From Gollum to Caesar, Middle-earth to Pandora, Weta Digital has created some of the most memorable characters and worlds of the last twenty-five years.

“I’ve long admired Peter Jackson’s and Fran Walsh’s work, and the ground-breaking VFX and animation that Weta Digital has created over the last two decades. The visionary leadership, imagination, and technical expertise of Weta Digital was vital to the creation of Academy Award-winning films such as Avatar, King Kong and Lord of the Rings. I look forward to helping grow Weta Digital and I’m excited to partner with Peter, the leadership of Weta, and its incredibly talented team,” said Parker.


Fran Walsh, Peter Jackson, & Philippa Boyens at the 76th Oscars in 2004. Featureflash Photo Agency / Shutterstock.com

“Sean Parker brings an invaluable expertise that will fortify Weta Digital from a technological perspective, while also focusing on its growth as an industry leader,” said Jackson. “As I have gotten to know him, I have been extremely impressed with his curiosity, intelligence and passion.”


Sean Parker is an entrepreneur with a record of launching genre-defining companies and organizations. Mr. Parker was the co-founder of Napster at age 19 and of Plaxo at 21. In 2004 he joined with Mark Zuckerberg to develop the online social network Facebook and served as Facebook’s founding president, and in 2007 he co-founded Causes on Facebook, which registered 180 million people to donate money and take action around social issues.

He is also the founder and President of the Parker Foundation, which focuses on three areas: Life Sciences, Global Public Health and Civic Engagement. In April 2016, the Parker Foundation announced a $250 million grant to form the Parker Institute for Cancer Immunotherapy, which builds on Parker’s role in funding and promoting research into the relationship between the immune system and cancer. Parker serves on the boards of the Obama Foundation, The Museum of Contemporary Art (MOCA), and Global Citizen.

Weta Digital, in Wellington, New Zealand, is led by multi-Oscar-winning Senior Visual Effects Supervisor Joe Letteri. Weta Digital is known for incredible creativity and a commitment to developing innovative technology. Their groundbreaking performance-driven animated characters like Gollum, Kong, Neytiri, and Caesar are widely regarded as some of the best character animation ever put on screen. Weta’s development of the revolutionary virtual production workflow for Avatar led the industry in integrating virtual production techniques into modern filmmaking.

Gemini Man – VFX from Weta Digital

Weta Digital’s artists have won six visual effects Academy Awards, ten Academy Sci-Tech Awards and six visual effects BAFTA Awards. Recent projects include Alita: Battle Angel, Mortal Engines, Avengers: Infinity War, Avengers: Endgame, Game of Thrones and The Umbrella Academy, in addition to the upcoming 2019 projects Ad Astra, Lady and the Tramp, and Gemini Man.


How Virtual Production Worked on the Set of The Lion King


The Lion King

Disney’s The Lion King aimed to solve a classic VFX/animation problem, namely how to direct a story when the director can’t see all of the things that they are directing. With the traditional, iterative approach it can be frustrating for a director to direct a scene when he or she is only directing one component at a time. With The Lion King, the creative team sought to take advantage of the revolution in consumer-grade virtual reality technology. Coupled with game engine technology, they advanced the art of virtual production and produced visual effects and animation combined with a traditional physical production approach. They succeeded in creating a system that allowed Jon Favreau to direct a movie with high-quality, real-time, interactive components, ‘shot’ in context, while still making a completely computer-generated film.

Jon Favreau speaking at the UE4 User group at SIGGRAPH in LA

The Lion King is a technical marvel. The quality of the character animation, the look of the final rendered imagery and the innovation in making the film are just as impressive as the incredible box office success of the film. MPC was the visual effects and animation company that provided the stunning visuals. They worked hand in hand with the creative team, especially director Jon Favreau, DoP Caleb Deschanel, visual effects supervisor Rob Legato and the team at Magnopus, headed by Ben Grossmann, to innovate the art of virtual production.

Oscar-winner Andy Jones headed the animation team at MPC, with Adam Valdez as the Visual Effects Supervisor. Oliver Winwood was the CG Supervisor, having started as the FX supervisor, and Julien Bolbach was MPC’s first CG Supervisor on sequence work, starting with the Buffalo Stampede sequence.

A new way to work

Magnopus got involved with the Lion King project just before October of 2016, right around the time the idea to do The Lion King came up. The project was always intended to improve upon the lessons learned from making Disney’s The Jungle Book. Grossmann explains: “We started by sketching out ideas around throwing out all the old visual effects based software and switching completely over to game engines. We then needed to figure out what we would have to write in order to shoot the entire movie in VR and integrate this with a major visual effects pipeline”.

John Oliver as Zazu, and JD McCrary as Young Simba

The original D23 sizzle reel, which was the first footage seen of the Lion King, was actually shot in the offices of Magnopus with the first prototype of their VR Virtual Production System or VPS. Magnopus is made up of both Oscar-winning visual effects artists and VR specialists. The company has made such landmark VR projects as CocoVR with Disney Pixar and the Mission: ISS in collaboration with NASA, which allows users to explore the International Space Station in VR.

While the team built on their experience with The Jungle Book, the backbone of that earlier film was Autodesk’s MotionBuilder. That technical approach had its roots in virtual production work that Rob Legato had pioneered for Avatar and The Aviator. “When we finished The Jungle Book we said ‘we now have all this new technology that we could take advantage of’ (for The Lion King). Actually the technology had been there but it was never at a level that we could actually use”, Rob Legato commented at SIGGRAPH recently. “I had actually looked at the idea of using a VR headset on Avatar, but it was so crude. At that stage, it was not really ready for prime time”.

All the footage in the D23 reel was eventually replaced or updated, but even for those first few test shots, the team was operating virtual cranes on the herds of animals in real-time. Some of the animals in the D23 reel were assets that were reused from The Jungle Book before the full Lion King assets were ready. This is why some keen-eyed viewers could spot Asian Elephants in the background, not the correct African Elephants that would be used later. Given how early this reel was animated and rendered by MPC, the D23 reel is jaw-droppingly good and it fed into huge anticipation for the film’s release.

Interestingly, there was a technical bump or glitch in the Unity engine, a feature that no user would normally have any control over, that made it all the way from the D23 footage to the final film. Every once in a while the Unity engine clears out any unneeded data or assets. Unfortunately, when this function, deep in the code and called ‘garbage collection’, ran, it could cause a tiny pause in the smooth movement of the master Unity camera. One such ‘bump’ happens in a shot in the D23 trailer. After the trailer was complete, the team discovered what the issue was and fixed it. But even when the shot was redone much later, this ‘bump’ is still in the final camera move, just because Rob Legato liked the feel of the recorded natural move, even with the bump. For the creative team, ironically given its cause, it just felt natural.

The humanity of imperfect moves and imprecise framing provides the film with a live-action aesthetic. “The reason to do it the way we did,” comments Legato, “is that that is how it has been done for a hundred years. You can’t improve on the filmmaking process”. Legato himself is almost as much a cinematographer as he is a VFX supervisor or second unit director. “As much as you’d like to, and as archaic and bizarre as it seems: actors on sets rehearsing with the cameraman, plus the crew and all this stuff, works. Film making is a collaborative art form. You need all the collaborators to help you make a movie…and without this sort of methodology of working, it all becomes a little more stifling and more sterile. It just doesn’t have this extra real life that comes from you continually changing and altering to what you see as you film it,” he clarifies. Legato is passionate about both collaboration and respecting the roles that have been honed over decades of filmmaking by professional storytellers.

Rob Legato

The new Lion King

From the outset, Jon Favreau had stated that he did not just want to remake a computer-animated version of the original animated classic. He believed one of the reasons why the Broadway musical version of The Lion King worked so well was that it was the same story but in a different context. It was important that it was a different presentation.

From the outset, the plan was to be faithful to the original story and not re-write the narrative, and yet do something to make the new film feel different. The team had been very pleased with the visual realism that MPC had delivered on The Jungle Book and so the team started work on envisioning a way to make a live-action production model for a completely computer-generated film. Unlike The Jungle Book, there would be no live actors or animals filmed. “It wouldn’t feel like an animated film. It would feel like something else,” Grossmann recalls. “In the very first meeting that we had with Jon, we all discussed that we couldn’t just do a knock off. It had to feel like something else. We needed to reach further into our toolkit to bring every technique to bear to make this film feel like a live-action movie”.

The second major issue to address was the balance of realism when the animals were going to be talking and singing. This had already been addressed in The Jungle Book, but for The Lion King, the team decided to make the animals even more realistic than in the previous film. King Louie and a few of the other animals in The Jungle Book were quite stylized and ‘humanized’. Naturally, it is easier to anthropomorphize a primate or orangutan than a lion. For the new film, the team decided to tweak the artistic vision or look of the animals. Oscar-winning animation director Andy Jones at MPC was once again in charge of the team that would deliver these even more nuanced animation performances. Jones and his team at MPC refined their animation approaches and character rigging to deliver an even more subtle set of speaking animals.

Ben Grossmann, Magnopus.

Unity

The difficulty of the task of building the elaborate system that would allow the filmmakers to film a major Disney film in VR is not to be underestimated. The process relied on game technology to provide real-time performance. But when the Magnopus team started work on building their system in Unity, the high-performance engine did not even have a timeline. “At the time that we started, there was no Unity timeline and so we had to go in and build a set of time-based functions,” recalls Grossmann. To do this the Magnopus team had to heavily modify the code and write what would effectively become the beginnings of a timeline in Unity.

The team did not run Unity compiled, which was an unexpected decision. Magnopus didn’t write any executables and compile them. “We modified Unity’s editor mode to have the functionality that we needed so that we were essentially always shooting the movie in editor mode,” explained Grossmann. “It was a weird thing and not many people could figure out why we would do it this way, …but it was awesome”. This decision was related to the problem of changes.

If the team had compiled an application to make the movie, it would have worked, but the problem was that once it was launched, whatever assets were in the ‘game’ were the only assets the team could access. “If you compiled it, and then you were standing on Pride Rock with 10 people about to film in VR, and someone wants a new tree…, you’d have to say ‘all right, I’m going to kick everybody out. I’m going to shut the program down and load new assets and then bring everybody back in because everybody’s on distributed clients’,” he explains. By running Unity in editor mode, new assets could be added at any time, without restarting. This was a major difference in the practicalities of filming.

Latency

The system had to work with very low latency; it is nearly impossible to operate a virtual camera if it does not feel immediate. If an operator pans a camera and stops on exactly what they like in the viewfinder, but there is a latency lag, the system keeps tracking on for a beat longer and overshoots. Magnopus managed to achieve a latency of less than 4 milliseconds, while still ‘filming’ at a high enough frame rate to capture the detail of all the movements (keyframes).

The latency is directly related to scene complexity. The team therefore had to be very skilled at translating high-density assets into low poly count assets that would allow good performance on the soundstage. A series of tools were developed both to decimate assets and to convert any visually distant assets into a temporary cyclorama. It was important for the filmmakers to see off into the distance in the wide shots, but with a 200 square kilometer virtual set, only the assets within a few hundred meters needed to be fully 3D at any time.

The needs of high-performance game engine playback and rich, film-quality assets are competing goals. The solution ended up being an implementation of multiple levels of rendering. “We needed the camera station as much as possible to get 120 to 240 hertz or frames per second so that we could have plenty of keyframe data to draw upon,” explains Grossmann. “Since all of our computers were running in sync and networked together, we took another computer and said, ‘let’s put some really high-quality imagery in here, turn on ray tracing and add the very high-quality assets to only this one machine’”. The result was that almost all the Unity machines were using high-performance assets and running at 120 fps, but there was one machine that wasn’t able to keep up. That one machine could barely render 20 frames per second, but looked a lot better, as it was using the high-quality assets. “We set up two monitors next to Caleb (DoP). One was the monitor he used to operate (running at 120+ fps), and the other was the monitor that he used to judge lighting”. This second lighting machine had soft shadows and better key lights.

Both of these computers were designed to aid in live production. Additionally, whenever a clip was cleared to go to editorial, a separate machine would render the best version possible of that shot. Sometimes this ‘best version’ was as slow as 1 frame a second, due to all the ‘bells and whistles’ being turned on. This computer was not used for live production filming but was placed in an automated sequence pipeline to produce the best imagery for editorial. A 10-second shot could be automatically re-rendered in roughly 4 minutes and loaded in the background, ready for the editing team whenever they needed it. A second per frame is very slow for a game engine but lightning fast compared to final VFX render speeds, which can take hours a frame to produce.
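To make the competing budgets above concrete, here is a hypothetical Python sketch of the kind of distance-based asset tiering the team describes: assets within a few hundred metres stay fully 3D, mid-distance assets are decimated, and everything further away collapses onto a cyclorama. The thresholds, names and structure are illustrative assumptions, not Magnopus’s actual tools.

```python
# Hypothetical sketch of distance-based asset binning as described above:
# only assets near the virtual camera stay fully 3D, mid-distance assets
# are decimated, and far assets collapse onto a cyclorama backdrop.
# Thresholds and structures are illustrative, not production values.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    distance_m: float   # distance from the virtual camera, in metres

def lod_tier(asset: Asset,
             full_3d_radius: float = 300.0,      # "a few hundred meters"
             cyclorama_radius: float = 2000.0) -> str:
    if asset.distance_m <= full_3d_radius:
        return "full_3d"       # high-poly asset, simulated as needed
    if asset.distance_m <= cyclorama_radius:
        return "decimated"     # low poly-count stand-in for real-time playback
    return "cyclorama"         # baked onto a distant backdrop card

# Rough turnaround arithmetic for the automated 'best version' renders:
# a 10-second shot at 24 fps is 240 frames; at roughly 1 second per frame
# that is about 4 minutes, matching the figure quoted above.
```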

The Virtual Process.

Andy Jones felt strongly that the animation should have no motion capture component, so the film’s animals are all keyframe animated. This worked extremely well, but in the early stages there was provision to puppet a digital animal on set to explore blocking or trial a new idea. In the end, all that was required was the ability to occasionally slide a character so that from the camera view the action was clearer or the blocking was slightly adjusted.

The primary approach was:

  1. Keegan-Michael Key (Kamari) and Eric André (Azizi) in BlackBox

    The voice actors recorded dialogue individually, with the exception of some scenes, for example those between Billy Eichner as Timon and Seth Rogen as Pumbaa. These actors not only recorded some of their dialogue together, but the director filmed them in a ‘BlackBox theatre’ set environment where they could act and walk around in an empty space (with no computer use at all). As with all the animation, this was done to get good voice performances and was not recorded for motion capture. It was filmed for reference and later editorial discussion. “We’d basically take a Seth and Billy or some of the other actors and throw them in the middle of a giant rectangle surrounded by cameras and then they would act out a scene. They would have room to move around like actors and do their lines,” recalled Grossmann. “Jon would then direct those performances and say, ‘Okay, that’s great, we love that’. These clips would be the reference clips that we would send to the animation team and that audio could be cut into editorial.”

  2. James Chinlund and his team, led by Vlad Bina and Tad Davis, designed and built the scenes and reviewed them in VR in Unity using the VPS. The crew on the sound stage in LA would often use these scenes to scout in the VPS. When ‘locations’ were approved, they would be handed off to the MPC Virtual Art Department (VAD).
  3. The Crew did not need to wear VR headgear, although they often did

    The Director, DoP, VFX Supervisor and a special team worked on the custom sound studio in LA. This sound stage was designed to allow them to ‘film’ in VR. The ‘stage’ had traditional film gear, including tripods, dollies, geared heads, focus pulling remotes, cranes, and even drones, but the actual ‘cameras’ were all inside Unity. The actual stage was not that large, but even with the modest stage size of only about 70ft by 40ft, the team filmed most of The Lion King in just about 1/3 of the total space. The process proved so effective that the team did not need vast spaces. The final system was primarily the Magnopus VPS developed further from the D23 test and constantly refined and added to as the production needed. Ben Grossmann designed and oversaw the software and hardware development for the VPS under Rob Legato’s supervision, as well as overall operations.


  4. Animation was in Autodesk Maya

    The animation team at MPC would animate a scene and provide that animation to the LA sound stage. These assets, both environmental and character, were logged into the virtual stage management system. MPC maintained a database so all the assets were version numbered and every piece of data related back exactly to the correct version of the asset, take, scene and edit. This automated process was vital as assets would round trip to and from the stage, and any changes needed to be automatically logged and recorded. The MPC team needed to have 100% confidence that any on-set timings or changes were seamlessly recorded and fed back into the next animation iteration. Headed by sets supervisor Audrey Ferrera, the MPC team in LA would import animation from Andy Jones’ animation team and adjust and optimize layouts for the game engine. MPC Virtual Production Supervisor Girish Balakrishnan would oversee the workflow, convert the scenes into a format ready to move to the stage, and confirm that asset tracking for MPC’s asset system was functioning.

  5. When assets or animation were ready and approved by the MPC team, they would move to the MPC ‘Dungeon Master’ in the machine room of the LA sound stage. The MPC bridge between their work and the VPS on the sound stage was nicknamed the ‘Dungeon Master’ by Jon Favreau: the Director left two six-sided D&D dice on the table one day, next to the computer, and called the station the Dungeon Master, as this computer decided the ‘map’ that all the team would ‘play on today’. Virtual Production Lead John Brennan and Virtual Production Producer AJ Sciutto would get a scene hand-off from Balakrishnan. Once they confirmed it was ready to shoot, the team would push the data to the computers on the stage and confirm that everything was ready to shoot. From that point forward the shoot day could begin.

  6. The creative team would then film that animation sequence with VR cameras in the VPS. Key to this process was that multiple people could join the same VR session and see each other. The various pieces of traditional camera equipment were geared and wired into this master setup. For example, a dolly pushed in the real world would move the virtual camera, matching it exactly (see the sketch after this list). The VR world facilitated a skydome and cinema-style lights. For example, if the team was filming up on Pride Rock and someone ‘added’ a Redhead (500W Tungsten light), then a virtual light (looking like a redhead) would appear and a virtual C-stand would extend down to the ground (no matter how far that was). The director could walk over (in VR) to the master camera and ‘tear’ off a copy of the VR video monitor from the top of the virtual camera. He could then walk to any spot he liked and build his own video village. This could be ‘miles’ away from the action, but of course, in the sound stage, he was physically just a few feet away from the main crew. Faris Hermiz from the actual Art Department (not the virtual art department) would often be in VR during the shots as a “set decorator”, preserving continuity and making sure that the set was used as James Chinlund had designed it.

  7. The view from the Editorial section back to the main stage

    The First AD, Dave Venghaus, would orchestrate the shooting day’s work. Caleb Deschanel and Rob Legato were assisted by Michael Legato, Key Grip Kim Heath, and various grips and crane operators. In VR they were helped by the Magnopus VPS Operations team of John Brennan, Fernando Rabelo, Mark Allen, and often engineers Guillermo Quesada and Vivek Reddy. Once a scene was shot, it could be immediately reviewed on the editorial machines at the back of the stage. Every aspect of each take was recorded as individual channels, referenced with the day and time of the take, the take number and the relevant asset register of character and environment version numbers.


  8. All the recording was done in the VPS on stage. When the first AD would say “cut”, the VPS machines would collect their local recordings of all the changes and animations, and send them back to the MPC database PC so they could confirm they had captured everything. If MPC wasn’t on the stage (e.g. during some reshoots or pickups, or re-camera-ing after principal photography), the VPS had all the same functions built in and could send the same packages back to MPC. Either way, complete shots were always returned to MPC.
  9. For the animators at MPC, the first thing they could choose to do was ‘enter’ their copy of the virtual set in VR and have a look at what was shot and how the team approached the scene. This step is theoretically unnecessary, but as most animators would agree, it is really advantageous to just have a look around on set as an omnipresent observer and get a feel for how the creative team was approaching each scene. It was also free to do, both in terms of setup and data wrangling.
  10. When the animation, lighting and fur sims were finalized, the LA creative team got one last chance to check the lensing of any shot. Again, this step may seem redundant, but it allowed for the occurrence that, with the various animals’ fur or secondary motion such as a tail swipe, a slightly different blocking or framing might improve the shot.

  11. Once whole scenes were done, the team could also preview in VR what any scene might look like in stereo. Normally it is impossible to visually replicate an IMAX experience, as any monitor will always be closer and far smaller than an IMAX screen, in relation to the fixed distance between someone’s actual eyes. But with the VR system, the team could simulate watching the material in a virtual VR IMAX theatre and satisfy themselves that the stereo convergence was correct. (Many on this team had previously worked on the Oscar-winning natively stereo film Hugo by director Martin Scorsese.)
  12. MPC rendered the final imagery in RenderMan. For non-stereo reviews, there were two review theatres built at the LA sound stage so the shots could be reviewed in a standard dailies environment.
THE LION KING – Featuring the voices of Beyoncé Knowles-Carter as Nala and Donald Glover as Simba
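As referenced in step 6 above, physical camera gear drove its virtual counterpart. The sketch below is a purely illustrative guess at the arithmetic involved in wiring a dolly’s wheel encoder to a virtual camera; the encoder resolution, wheel size and function names are hypothetical, not the actual VPS wiring.

```python
# Illustrative sketch (not the actual VPS code) of how a physical dolly's
# wheel encoder could drive a virtual camera so that a push in the real
# world moves the Unity camera by the same physical amount.
import math

TICKS_PER_REV = 4096          # hypothetical encoder resolution
WHEEL_DIAMETER_M = 0.10       # hypothetical dolly wheel diameter

def dolly_travel_m(encoder_ticks: int) -> float:
    """Convert raw encoder ticks into metres of track travel."""
    revolutions = encoder_ticks / TICKS_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER_M

def update_virtual_camera(camera_position_m: float, new_ticks: int) -> float:
    """Advance the virtual camera along its track axis by the physical move."""
    return camera_position_m + dolly_travel_m(new_ticks)

# e.g. 4096 ticks on a 10 cm wheel is one full revolution, about 0.314 m of push.
```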

Hiding Computers

Magnopus engineered The Lion King sound stage to also look and feel very different from the earlier Jungle Book stage. Grossmann recalls that walking onto The Jungle Book set “felt like you were walking into a computer factory with a stage shoved in the back corner,” he jokes. “There were tents full of people on computers and rows of desks with computers all over them. All of those people were doing really important stuff and essential to the process, but there’s something about walking onto a movie stage. There is a reason why art galleries have art on the walls and very little else in the gallery: it is because you don’t want to distract your attention from the thing that you’re supposed to be focusing on”. On The Lion King, there was no imposing presence of technology on the stage.

While many aspects of the new film were vastly more complex than The Jungle Book, when the creatives walked on the set, everything was simple and easy to set up, with most of the 20 VPS computers not even on display. “We certainly had tons of computers on The Lion King, but they were workstations that we pushed up against the walls on the outside of the stage. There weren’t people sitting at them doing stuff”. The stage was carefully staffed so there were not loads of people on the stage just sitting in front of computers. In addition to the core creative team (Director, DoP, VFX supervisor, first AD), the camera department (focus puller, etc.) and grips, The Lion King stage had just an editor, a representative from MPC who also handled asset control, and very few other people.

There are lessons (and even surprises) from this approach

Quality playback matters

Ben Grossmann noticed that the better the quality of the animation and the playback that the crew was seeing on set, the more engaged the crew were in the detail of their own work. It mattered that the animation from MPC was not rough, blocky, old-style previz, but rather refined and articulated animation. Some of The Jungle Book’s virtual production attempts had been with expressionless ‘stand-ins’. Grossmann, as an observer on set, commented that “the better things looked, the more seriously the crew considered their work when filming them.” He explained that “if we were shooting a scene file that had characters that looked good, something that someone had obviously had time to put a lot of care into, then there was a heightened sense of tension on the set. The level of engagement of the film crew always went up and people took it more seriously when the imagery was more refined. I feel like we did our best work when the scenes looked good”. This is why the production bothered to produce the material to the level they did. In theory, lighting or other things could have been much simpler, but the lower the quality of the footage that the team was looking at in VR, the “lower the quality of the engagement and connection that the film crew had to it,” he adds.

Need for Second Unit

One of the interesting aspects is that this project reduced the need for second unit. The virtual production stage team in LA did not have great time pressures or schedule issues. Moving locations took only seconds. If a location needed to be revisited for a pickup shot this could be done again in minutes. “Shooting second unit at the same time as the main unit was a rare thing. We rarely had to, but if we wanted to, you could easily because you could technically run three or four shoots that are totally independent, running completely different scenes, in that same stage at once. But this was never really needed as we typically shot very quickly” Grossmann recalls, adding extraordinarily, “Caleb (the DoP) could shoot 110 setups in a day without going into overtime”.

Multi-track

This style of filmmaking, while designed for traditional filmmaking collaboration, also allows for a new form of multi-track filmmaking. For example, if there was a complex shot and just one person, perhaps the focus puller, was slightly off in their timing, it was possible to lock in everyone else, camera operation, dolly action, etc., and just play the whole shot back while redoing just that one ‘track’ of focus pulling. It was possible to play everything at half speed to aid in pulling off an otherwise nearly impossible refocus, or at any other manner of playback speeds. While this was rarely done, when it was chosen as an option it was never decided upon lightly. “What’s funny is that we would take those times very seriously,” says Grossmann. “…And then we would comfort ourselves by acknowledging that all cinema is based on deception and we’re not documentary filmmakers!” he knowingly recounts. A production in the future could go even further (not that the Lion King team did this): the DoP could choose to do every role himself and just lay down the shot in multiple passes, first recording the camera move, then laying down the framing/camera operation and then the focus pull, and so on, one ‘track’ at a time. This is analogous to modern multitrack audio recording.
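A minimal sketch of the ‘multi-track’ retake idea described above: locked channels are replayed while a single channel is re-captured, optionally at a slower playback speed. The data layout and function names are assumptions for illustration, not the production VPS code.

```python
# Illustrative sketch (not the production system) of 'multi-track' retakes:
# previously recorded channels are locked and replayed while one channel,
# here perhaps the focus pull, is re-recorded, optionally at half speed.
def retake_channel(recorded_take: dict, channel: str, capture_fn,
                   playback_speed: float = 1.0) -> dict:
    """Replay locked channels and replace `channel` with new samples.

    recorded_take maps channel name -> list of per-frame samples.
    capture_fn(frame_index, playback_speed) returns the new sample per frame.
    """
    frames = len(next(iter(recorded_take.values())))
    new_take = {name: list(samples) for name, samples in recorded_take.items()}
    new_take[channel] = [capture_fn(f, playback_speed) for f in range(frames)]
    return new_take

# e.g. redo only the focus track at half speed, keeping camera and dolly moves:
# fixed = retake_channel(take_042, "focus", read_focus_wheel, playback_speed=0.5)
```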

Reactive Virtual Cinematography

The opening shot is a very good example of how the filmmaking process lent itself to a form of natural, reactive digital cinematography. The film opens with a small mouse scampering through the long grass and along rocks and logs. The virtual camera operators were ‘filming’ the mouse running through the grass; they did not model the camera movement in a way that feels preplanned. Thus this opening sequence has a very live-action feeling, as the camera is a fraction behind in its framing in a way that feels completely natural. This respect for the traditional craft of filmmaking, dating back to earlier analog times, was key to making the audience see the highly realistic animation as live-action footage.

Rob Legato (left) discussing a shot on the LA sound stage with DoP Caleb Deschanel (centre)

Losing the Director

Crane shot in the virtual world

While the digital tools were modeled to capture the limitations of their real-world equivalents, they were also highly flexible. For example, a techno crane could be mounted on top of a dolly, on top of a cliff, in just minutes.

It was also possible to travel vast distances in the virtual world, and so the team quickly added a special feature in the VR menus that allowed the crew to immediately teleport to wherever the director was. The crew was always able to hear Jon Favreau talking to them, as they were in reality only a few feet away from him, but especially during virtual scouting of locations in VR he might end up virtual miles away.

While shots were being set up, sometimes Jon Favreau would “be sitting there in VR and there’d be these little rocks and he would pick up the rocks off the ground and he would make like one of those little sculptures of balanced rocks. He’d just start stacking things up and playing with bushes and trees and because they were out of the shot it did not matter,” explained Grossmann. “But then at some point in the movie, we might be filming that area. At which point the art department was suddenly confused: ‘what the heck is going on with those rocks over there? Somebody made a snowman out of rocks!’ Only to then find out, ‘oh no, Jon was sitting up there and he made a snowman out of rocks’.”

Puppets on set

While not used on The Lion King, the Magnopus team also had an option in the VPS where anyone could ‘be’ any character. Someone could ride along ‘on them’ or ‘be them’, in which case the creature would walk with its own walk cycle, but move wherever the individual moved in VR; additionally, wherever the real person looked, the animal would also look in that direction. The idea was to allow anyone to mime out an action they might want from a lion without having to ‘control’ the rigged character with traditional tools. “We made it so that you could just walk around on the stage and your center of mass would drive the animal center of mass. And if the animal was walking over terrain, the system would automatically conform to the terrain even though you were walking on a flat stage out in the real world,” explained Grossmann.

DoP Caleb Deschanel (left)

Not all the crew operated in VR all the time. While some of the crew could be working in VR, others would be seeing the ‘video’ split on monitors around the stage, able to do their jobs without the need to wear VR headgear and with both hands free to manipulate the film gear.

Stage Design and adding a TV Station

The basic stage was just an empty box when the team first moved in. First, the team soundproofed the facility ready for the BlackBox theatre. “We really trusted Kim Heath, the key grip; he just did a mind-blowingly phenomenal job of rigging that space with truss and old fashioned ropes and pulleys. In the stage, you could swing down from the ceiling arms that had Valve lighthouse trackers on them for each of the different volumes,” explains Grossmann. The space needed to be able to be divided into different volumes for the VR gear to work. Each space had to have isolated infrared, with the walls painted matte black so they would not reflect any of the infrared beams. “Sometimes we’d have to use night vision goggles and fog up the stage, just to see where the infrared bleed was coming from,” Grossmann recalls. Infrared leakage would screw up the tracking if too much infrared light overlapped between different volumes, and the team could see that bleed with the smoke and the special goggles; that is basically what night vision goggles allow. “So you can imagine there were days when we had effectively a bunch of ‘black ops’ VFX people in there trying to make sure the virtual production tracking was on point!”

The stage was then fitted out with one of the first OptiTrack active tracking systems. The OptiTrack Active Tracking solution allows for synchronized tracking of active LED markers and consists mainly of a Base Station and a set of active markers. The active markers can either be driven via a Tag and/or mounted on an active puck, which can act as a single rigid body. A Tag gets RF signals from the Base Station and correspondingly synchronizes the illumination of the connected active LED markers. The active markers can never be mislabeled; each prop, controller or piece of camera gear gets a unique ID, so once set up, on any given day the team could “just turn things on and it would just work and we’d know what each thing on the stage was,” explains Grossmann, referring to all the cranes, tracks and items such as the other camera department gear.

All the computers were provided by HP with NVIDIA graphics cards. There were some custom-built machines the team made for unique or special uses. “We’d make them double water cooled and all that stuff because we were constantly trying to improve image quality. We were overcranking our computers, and in some cases we overclocked them by an additional two gigahertz! Every once in a while you’d melt a computer or you’d melt a processor because you have a heavy scene and it just couldn’t take it anymore and it would smoke,” he humorously recalls.

Blackmagic Design provided a large amount of the video gear the team needed for reference cameras and switchers for the editorial team. Grossmann points out that in addition to the data paths on set, the team had video playback and video assist with monitors everywhere on the stage. “Aside from having this digital network, we basically needed to build an entire video network as though this was a live broadcast television studio”. This was because every computer on the stage was a view into the world that the team wanted to record. “We ended up building a television-style control room. I used to work in broadcast television. So when we were designing the stage, I designed a broadcast control room on one end so that we could put all of the video equipment and all of the control panels in the broadcast control room”. This meant installing switchers, routers, color correctors and video recording decks. In order to do that, “we really needed Blackmagic Design’s equipment. We were really stressed out about that when we were trying to budget and plan the whole thing out, and Blackmagic simply said ‘you go worry about doing the hard stuff and we’ll worry about all the equipment’. They did just that, and we were able to build some pretty crazy stuff that worked brilliantly,” Grossmann comments. “Generally speaking, the big people who really kind of helped us out were NVIDIA, Hewlett Packard, OptiTrack and Blackmagic Design.”

The Virtual Production team on the Animation team

Ben Grossmann, an Oscar winner himself, was incredibly impressed with the work from MPC. “I think that the work that MPC has done on this film has just moved the bar forward for the whole industry. Not just in terms of the aesthetic quality – I’ve never seen anything rendered to this quality before,” he explains. “But also the nuances of the animation. They really dug deep on every single performance in this movie and pushed it further than I think anyone before. I was humbled because initially I only saw the virtual production most of the time. I didn’t sit in on the reviews of the visual effects. So when we first saw the final material come out it was shocking, … it was amazing.” Magnopus actually had professional researchers who saw some of the final imagery out of context and assumed that it was new reference footage that had somehow been found.

“MPC blew my mind with the quality of the work that they did. And I would like to think that one of our ambitions from the virtual production team wasn’t just for the filmmakers, it was for the visual effects artists,” adds Grossmann. “I think it was great that we gave freedom back to the animators, to avoid confusion, remove a lack of clarity and avoid endless re-work, because some of the time you can wander the desert in search of what a shot is ‘supposed to be’… and you want instead to have the time to produce the highest quality possible… and I feel like we contributed to that”.

MPC

The normal turnover on a film would involve a file from editorial to the animation team with just the plate photography for the selected shot and its metadata. On The Lion King, MPC would instead get a turnover package. This would contain the previous reference material that MPC themselves would have provided, including any previz passes. It would also have the references for the various sets and the reference cameras, and it would contain a lighting package and all the on-set data, including a VR Unity setup.
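A hypothetical sketch of what such a versioned turnover record might look like, tying a take back to exact asset versions, cameras, lighting and the Unity setup. The field names and values are illustrative; MPC’s actual schema is not public.

```python
# Hypothetical sketch of a versioned turnover record of the kind described
# above, where every take ties back to exact asset, scene and edit versions.
# Field names and values are illustrative; the real schema is not public.
from dataclasses import dataclass, field

@dataclass
class TurnoverPackage:
    shot: str                                          # e.g. "PR_0150"
    take: int
    recorded_at: str                                   # day and time of the take
    camera_refs: list = field(default_factory=list)    # reference cameras
    asset_versions: dict = field(default_factory=dict) # asset name -> version
    lighting_package: str = ""                         # skydome / light rig notes
    unity_setup: str = ""                              # path to the VR Unity scene state

pkg = TurnoverPackage(
    shot="PR_0150", take=7, recorded_at="2018-03-14 10:32",
    asset_versions={"prideRock_env": "v023", "simba_young": "v112"},
)
```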

Pipeline

“Stage one for us at MPC,” explains Oliver Winwood, CG Supervisor, “was to load the assets, and ingest those back into our pipeline. We’d ingest the lights and cameras and then do a basic render through RenderMan.” This basic render would be delivered back to the stage. “This was like a ‘version 0.0’ if you like, and this would verify everything was correctly loaded. This gave us a very good idea of lighting, direction, framing and mood,” Winwood adds. Winwood himself would also always fire up the VR version of the assets and just ‘visit’ the set himself to get a feeling for the whole virtual production. The reference turnover material would have all the output reference cameras and the edit itself, but he still liked to “just get a sense of the layout and how all the elements in the location worked with each other”. MPC had a smaller but mirrored version of the LA sound stage permanently set up in their offices. “The first scene we did was the Elephant graveyard sequence.”

The MPC team could then visit the set of the elephant graveyard in their own offices “to get a really good sense of kinds of distance, and what we were putting the characters into, and the kind of scale of everything. It was generally a really good tool for us”. He goes on to add that in his opinion virtual production isn’t just useful for the actual shoot, it is also something that “we can revisit at any point and walk around it. It was really quite cool and very useful”.

A lot of the camera work was used exactly as shot, but a significant amount also had to be changed for animation adjustments, which required minor layout tweaks. If the required camera work was more than just a modest adjustment, then the material would be immediately exported back to the main stage in LA for new camera work.

Once the material was ingested at MPC, the team would very much stay in Autodesk Maya for all their animation work. Some members of the Layout team did use Unity to do an additional check on the environment work, but only when assets were sent back to the LA sound stage would Unity factor in again for MPC.

Florence Kasumba, Eric André and Keegan-Michael Key voice the hyenas, and Chiwetel Ejiofor plays Scar.

Adam Valdez, in his role as the Visual Effects Supervisor, and some other key MPC staff, such as members of the Environment team, were in LA, but most of the MPC team was in London. “Even the previz team was based at MPC in London, yet the system of asset management worked extremely well,” Winwood recalls.

One of the most complex animation scenes was the stampede. The shot had a huge number of characters, but the actual set was also very complex: all the cliffs and rocks were modeled, and even with instancing, the 3D scene was extremely demanding. A close contender for the most complex shots to render was the Cloud Forest scenes, due to the vast amount of organic detail. “The more we were trying to fit in a space at any one time would always push the render limits. Actually, the Cloud Forest was probably our heaviest sequence, now I think about it. You’ve got an environment plate that covers hundreds of plants, trees, grass, and vegetation. Definitely those would have to be our most complex shots to render!” Winwood estimates. Many of the scenes had their own complex challenges. The bug sequence that our heroes dine on was complex: “You’re looking at really highly detailed assets which you are actually trying to match all these little feet interactions on. There’s a lot of passes and you are always trying to make them collide with each other,” he explains. Perhaps the most difficult creative shot was the dung ball sequence with the tuft of hair.

The hair sequence is elaborate and covers vast sets that are otherwise not used. The tuft of hair needs to be blown and animated, interact with water simulations and rigid bodies, and always remain readable to the audience. “Even scenes we have seen before, like the desert, we now needed to go down to a macro level and see individual grains of sand. All the sand was individually simulated…and the same with the water,” he adds. “The film has plenty of waterfalls and rivers, but suddenly you are having to look at a tuft of hair that’s no more than maybe a centimeter in size and we had to fill a good chunk of the screen with the closeup water – that was particularly challenging.”

JD McCrary as Young Simba, note the complex hair

Simulation

The complexity of animating the adult lions such as Simba was increased by the adult mane. Its volume and movement directly affect posing, readability, and performance, but fur simulations are traditionally costly. On The Lion King’s animals, all the grooms were split up by hair length, so they could be accessed separately. Winwood explains, “for example, the body fur on the main body was separate from the mane. Generally, the body fur was not simulated; we focused our simulation work on the longer fur. The body fur normally got enough movement from the animation and the underlying muscle simulation and skin simulation on top”. There was some simulation work done around the mouths of some characters, especially on Mufasa (James Earl Jones). The majority of the simulation was on the mane. The process started with the groom, using MPC’s in-house Fertility fur grooming software. “We moved to version 8 of Fertility for this film, and then from there to Houdini. This was because by going to Houdini we were not just restricted to guide curves”. In Fertility, most of the work is done using guide curves, with the fur generated around those curves, but Houdini “allowed us to take as much or as little of that groom as we wanted. So for instance, on some shots we might be taking a smaller percentage,” explained Winwood. “We did some tests on certain hero shots where possibly even 40 or 50% of an input groom was actually being simulated, with the rest of the groom being mapped back on. Whereas on some more difficult shots, we take as little as 1%, just to speed things up”.
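The guide-curve workflow Winwood describes can be illustrated with a small stand-in sketch: simulate only a chosen fraction of a groom’s guide curves and map the remainder back from their nearest simulated neighbour. This is a simplified, hypothetical Python example (1-D guide positions for brevity), not MPC’s Fertility or Houdini setup.

```python
# Stand-in sketch (not MPC's Houdini setup) of simulating only a fraction of
# a groom's guide curves and mapping the rest back from their nearest
# simulated neighbour, as described above (anywhere from ~1% to ~50%).
# Guide positions are 1-D scalars here purely for brevity.
import random

def pick_simulated_guides(guide_ids, fraction):
    """Choose a deterministic subset of guide curves to actually simulate."""
    rng = random.Random(42)                       # fixed seed for repeatability
    count = max(1, int(len(guide_ids) * fraction))
    return set(rng.sample(guide_ids, count))

def map_back(guide_positions, simulated_ids, simulated_result):
    """Non-simulated guides copy the motion of the nearest simulated guide."""
    out = {}
    for gid, pos in guide_positions.items():
        if gid in simulated_ids:
            out[gid] = simulated_result[gid]
        else:
            nearest = min(simulated_ids,
                          key=lambda s: abs(guide_positions[s] - pos))
            out[gid] = simulated_result[nearest]
    return out
```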

The mane simulations went beyond secondary motion. For example, when Mufasa is sitting on Pride Rock in the Patrol sequence, the team worked hard to simulate the wind blowing through his mane. “We did a lot of work on getting wind settings correct, to get the correct amount of occlusion from the hair and reaction. These were very heavy simulations; for some of the long shots, we could have the system simulating for the best part of a day at a time,” Winwood recalls.

Rigging

A good deal of work went into improving the existing rigging systems and making the rigs faster for the animators to use, while also expanding the amount that could be previewed within a reasonable playback speed. A lot of time was spent looking at footage shot in Kenya in order to build rigs that highlighted the animals’ mechanics. For example, the team focused on what happens to the skin and fur when a lion retracts its claws. Another example of these animal nuances involved Zazu the hornbill. MPC used the puffing of his feathers to emphasize certain words and expressions. The rig puppet had to have a representation of that puffing, which as closely as possible flowed through to MPC’s feather system so the same puffing was seen in the render of the final character.

As for muscles and skin, the skin sliding set-up was more sophisticated than MPC had ever previously done and allowed for complex movement across the entire muscular structure. The muscle simulation tools also have even greater connectivity with each character’s skeleton, resulting in a more realistic and anatomical result. “Each muscle would be simulated automatically when the animation was baked out, while also giving the Techanim team the ability to go in and make adjustments to the simulation if needed,” explained Winwood.

Animation

While the animals were all hand-animated, the flocking and herding animation was handled by MPC’s ALICE software. ALICE stands for Artificial Life Crowd Engine. It is MPC’s in-house crowd software, created originally for 10,000 BC in 2008. ALICE has been steadily and continuously updated and allows artists to manage herds or crowds, with customized scripting for large groups of agents. For some time it has been one of MPC’s flagship in-house software tools, and the team has been slowly transitioning ALICE to Houdini.

Final Images

The use of physically plausible lighting via RenderMan, set up in Katana, added to the validity of the imagery at the end of the virtual production approach. Rob Legato commented: “It was ‘real light’ on a properly groomed animal; as soon as you see the real light, via a ray-traced simulation, it just comes to life. It just looks like the real thing,” he explains. “We were still blown away every time we saw it because our choices were correct as cameramen, and with all the various things MPC did to make something look good…when you finally see it with every different hair structure built on the animal, catching light the way a real fur catches light, even for me, it becomes very impressive.”

Genesis

MPC has now developed its own on-set production tool called Genesis, which was first demoed at SIGGRAPH 2018.


Sci-Tech 2019 Areas of Investigation


The Academy of Motion Picture Arts and Sciences announced that nine distinct scientific and technical investigations have been launched for 2019.

These investigations are made public so that individuals and companies with devices or claims of innovation within these areas will have the opportunity to submit achievements for review.

The Academy’s Scientific and Technical Awards Committee has started investigations into the following areas:

  • Professional desktop monitors with self-calibration
  • Head-mounted facial acquisition systems
  • Wireless video transmission systems used in motion picture production
  • Frameworks enabling high-performance ray-geometry intersections
  • Hair simulation toolsets
  • Post-production tracking and scheduling systems
  • Automatic dialog post-synchronization systems
  • Audio repair and restoration software for motion pictures
  • Costume, prop, hair and makeup tracking and inventory communication tools for physical production
Doug Roble

“The science and technology of filmmaking is constantly evolving and advancing. Each year, the Academy researches technology that has had a significant impact on the motion picture arts. This year, we are examining a distinct group of technologies, which includes hair simulation, facial capture and audio repair,” said Doug Roble, chair of the Scientific and Technical Awards Committee.


The current awards cycle will commence with a series of exhaustive investigations, conducted by a committee made up of industry experts with a diversity of expertise, and culminate with the Scientific and Technical Awards ceremony in June.

The deadline to submit additional entries is Tuesday, September 17, at 5 p.m. PT.

While these nine areas are the stated areas of investigation, the list normally changes over the investigation period as interviews are done and the various submissions are reviewed.


VFXShow 243: The Lion King


The Lion King is Disney's computer-animated musical film directed and produced by Jon Favreau and written by Jeff Nathanson. The film is a photorealistic, computer-animated remake of Disney's traditionally animated 1994 film of the same name. The film stars the voices of Donald Glover, Seth Rogen, Chiwetel Ejiofor, Alfre Woodard, Billy Eichner, John Kani, John Oliver, and Beyoncé Knowles-Carter, as well as James Earl Jones reprising his role from the original film.

The plot follows Simba, a young lion who must embrace his role as the rightful king of his native land following the murder of his father, Mufasa, at the hands of his uncle, Scar. Plans for a remake of The Lion King were confirmed in September 2016 following the success of Disney’s The Jungle Book, also directed by Favreau. Principal production began in mid-2017 on a sound stage in Los Angeles. With an estimated budget of around $260 million, it is one of the most expensive films ever made.

The DOP was Caleb Deschanel, and the VFX supervisor was Rob Legato. The virtual production was spearheaded by Magnopus, led by Ben Grossman, with the breathtaking animation produced by MPC. Together the team built on the lessons learnt on The Jungle Book to advance the craft of photorealistic virtual production.

The Lion King features the voices of James Earl Jones as Mufasa and JD McCrary as Young Simba.

The fxguide herd assembled for this episode:

Pride Rock (IMAX aspect ratio)


How Old is Cap at the End of Avengers?


Marvel Studios' Avengers: Endgame is the climactic conclusion to an unprecedented, 11-year cinematic journey in which the Avengers take their final stand against Thanos. It delivered the biggest opening weekend in history, and the film is now the highest-grossing film of all time.

Avengers: Endgame featured a surprise twist at the end of the film, with Captain America taking the long way round to get back to the present day. This meant that Chris Evans appeared as an old man at the end of the film. Lola VFX supervisor Trent Claus was responsible for supervising the old Cap transformation.

How Old is Cap at the End of Avengers?

If one does the maths, as the filmmakers did, allowing for how old Steve Rogers was when he was transformed and assuming he did not age while he was frozen, then by the end of Endgame he should be almost 120 years old. But then Steve Rogers is… Steve Rogers. For design purposes, Lola VFX used a younger age target for Chris Evans. "The working assumption was that he's around 119 in human years. But of course for Cap, that puts him ambiguously around his 80s or 90s," explains Claus.

On set, Chris Evans wore some prosthetic makeup, but this was crafted before Lola VFX supervisor Trent Claus had been able to develop the final look for the old Captain America. So while many people have assumed this makeup was the basis for the visual effects aging, the first stage of the final aging VFX was actually to digitally remove the makeup seen in the original plate, getting back to how Chris Evans actually looks. Only then did the Lola VFX team add digital aging to achieve the final imagery.

Original plate, before the physical makeup was digitally removed!

Lola had done old Peggy in Captain America: The Winter Soldier, and they learned a lot on that film. One of the lessons was the work required to do extensive neck aging. For old Peggy, actress Hayley Atwell was shot without any special effects makeup. "For principal photography, we had decided that she should be shot clean, except for her wig, but we found that so much time was spent on her neck, and that it isn't really worth it, cause nobody cares about the neck, but yet you have to do it," explains Claus. "On this one, we really wanted some help, so we talked to Legacy Effects and they built a great neck prosthetic for us, which was phenomenal and it really helped us a lot and saved us a ton of time". Unfortunately, it was also decided to add some physical makeup to Chris Evans' face, and this all ended up having to be digitally removed.

One of the first problems for this style of work is arriving at what the final character should look like. Unlike Lola's de-aging work, there is no absolute reference for what the older character should look like. The first major task Claus takes on for these types of projects is to work up a visual treatment of what an older version of the actor might look like. "We do a whole lot of look-dev. We did look-dev on old Cap for over six months. At this stage, I was just trying to nail down what he should look like, where wrinkles should be, how many wrinkles there should be, how many imperfections he should have, such as age spots and broken blood vessels and capillaries, etc," he explains. Surprisingly, Lola tries to avoid doing this in Photoshop, as they have found some clients can fall in love with a Photoshop trick that the team then cannot match in a final VFX comp. They prefer to build the target look in the comp, typically in Flame, using only the Flame tools they will have for the actual final shots.

Patrick Gorman (2017)

As with other Marvel films, Lola got to be involved in the hiring process for an actor to provide a face reference for old Steve. Actor Patrick Gorman was selected as Old Cap's reference. At 5′ 10″, the actor is about 2″ shorter than Chris Evans, but Claus felt Gorman had the right bone structure and facial features to be a good reference. The Californian actor typically plays characters in the 65-80 year old range. "We were looking for an actor with weathered skin and character," explained Claus. As part of this process, the Lola team requests acting clips of the actor so they can study not just how the double looks, but how their face moves and the expressions they make when smiling, for example. "We took Mr. Gorman out to Atlanta, where Avengers was shooting," he explains. "Chris Evans, as the primary actor, shoots the scene exactly as he wants. And then for every shot, Patrick is in the wings watching and trying to glean any mannerisms, actions or expressions that he can mimic to create the same performance". For each take, as soon as Chris Evans had finished acting, "he would step out and Patrick will step right in, so that we can keep the exact same lighting, keep exactly the same camera setup, etc, and then Patrick would act out the same scene". While this was happening, Claus would be studying a video split comparing the two performances, and advising how to slightly adjust a head angle or timing to better match the head movements of Chris Evans. While the two performances will never perfectly match, the team can then extract key poses and expressions to take back to Lola for use in aging Steve Rogers.

The final shot

The next stage was for Patrick Gorman to be scanned in Lola's custom lighting rig in LA. Lola does this not to make a 3D digi-double but to gather further reference. The lighting sphere can be driven by a light probe, but oftentimes the lighting in this scanner is carefully set by eye by the experienced Lola team. "We have used on-set HDR data, but more often than not we just end up lighting by eye. Sometimes you have to accentuate certain things more than what they were on set to get the desired result". For example, if the double's nose is smaller than the target actor's, the shadows cast on the double's face won't match what they would have been had they stood on set. In that case, the team would manually reposition the lights slightly to cast a longer shadow across the cheek of the stand-in reference double. As this is extremely precise work, the team always does a pre-programming session to reduce the amount of time any actor needs to sit in the rig. Interestingly, the lensing of the original photography of Chris Evans is not relevant at this stage. "No, we don't have to match to what was shot on set at all, thankfully. Everything we do here at Lola is just treated as an element that gets projected onto what was shot," Claus adds. The team also made a photogrammetry head of both Chris Evans and Patrick Gorman, but this is primarily used for 3D tracking purposes. The 3D head can be used as a target for projecting textures onto, but it is rare that the team will animate the 3D head unless there is some special need.

The actors are shot on RED cameras at Lola at 4K resolution. The on-set photography from Marvel is also 4K, but Lola finishes its final shots to a 2K master. The 4K footage simply allows for greater accuracy in matching the subtle textures of the face.

Once Lola has both the reference material from set and its own additional photography, the team is ready to start "carefully going through shot by shot and matching the double's performance to the hero performance," Claus concludes.

For Old Cap, in addition to skin texture and wrinkles, Chris Evans' ears are nearly entirely replaced, as is his nose. "Old Cap's nose has gotten a little bigger, as is normally the case when you age. It is based on Chris's nose shape and everything, but I'd say it is 90% replaced," outlines Claus. There is a lot of artistry in the process, and creative decisions. For example, Gorman's lips are pencil-thin, and this was not replicated for Old Cap, as the filmmakers did not want Old Cap to look sick or unwell. "The creatives wanted him to look old but never pitiful," says Claus. "They never wanted the audience to feel sorry for him. That was a big element of our discussion during the process: where is the line with each of the features to where he still looks like a superhero, but just very old?".

The VFX work of the Lola team extended beyond the face. "Chris's hands are all replaced or augmented depending on how much they are seen on screen. We definitely had to work to make the knuckles more prominent and make everything finer. We then added age spots and discoloration, and made the tendons more prominent," he explains.

The work is done not only on a per-shot basis but also balanced across scenes so the aged Steve Rogers remains consistent. At some point, technology gives way to experienced digital artistry. "It's playing with color, light, and shadow. For example, to give an impression of slightly translucent skin, we may make the spec highlight land a tiny amount higher than the apparent skin surface… it really is like digital painting in a way."
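
As a toy illustration of that last trick, the sketch below nudges a specular pass a couple of pixels off the apparent skin surface before recombining it with the beauty. It is a generic NumPy example of the idea only; the function, its parameters and the simple additive merge are assumptions for the example, not Lola's Flame setup.

```python
import numpy as np

def nudge_spec(beauty, spec_pass, offset_px=2, gain=0.15):
    """beauty, spec_pass: float32 images of shape (H, W, 3) in linear light."""
    # Shift the specular pass 'up' in screen space by offset_px pixels.
    shifted = np.roll(spec_pass, -offset_px, axis=0)
    shifted[-offset_px:, :, :] = 0.0          # avoid wrap-around at the frame edge
    # Additive (linear) merge of the displaced highlight over the beauty.
    return beauty + gain * shifted

# Toy usage with random data standing in for real plates and render passes.
h, w = 540, 960
beauty = np.random.rand(h, w, 3).astype(np.float32)
spec = np.random.rand(h, w, 3).astype(np.float32)
out = nudge_spec(beauty, spec)
print(out.shape)  # (540, 960, 3)
```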

Digital Release

The in-home release of Avengers: Endgame is out now on Digital in HD, 4K Ultra HD and Movies Anywhere, and the physical release on 4K Ultra HD, Blu-ray, DVD and On-Demand is also available.

The release includes bonus material.

Bonus Digital Exclusive:

  • Steve and Peggy: One Last Dance – Explore Captain America and Peggy Carter’s bond, forged in moments from previous films that lead to a momentous choice in “Avengers: Endgame.”

Also on the Blu-ray & Digital releases are:

  • Remembering Stan Lee – Filmmakers and cast honor the great Stan Lee in a fond look back at his MCU movie cameos.
  • Setting The Tone: Casting Robert Downey Jr. – Hear the tale of how Robert Downey Jr. was cast as Tony Stark in the original “Iron Man” — and launched the MCU.
  • A Man Out of Time: Creating Captain America – Trace the evolution of Captain America with those who helped shape the look, feel and character of this compelling hero.
  • Black Widow: Whatever It Takes – Follow Black Widow’s journey both within and outside the Avengers, including the challenges she faced and overcame along the way.
  • The Russo Brothers: Journey to Endgame – See how Anthony and Joe Russo met the challenge of helming two of the biggest films in cinematic history … back-to-back!
  • The Women of the MCU – MCU women share what it was like to join forces for the first time in an epic battle scene — and be part of such a historic ensemble.
  • Bro Thor – His appearance has changed but his heroism remains! Go behind the scenes to see how Bro Thor was created.
  • Six Deleted Scenes – “Goji Berries,” “Bombs on Board,” “Suckiest Army in the Galaxy,” “You Used to Frickin’ Live Here,” “Tony and Howard” and “Avengers Take a Knee.”
  • Gag Reel – Laugh along with the cast in this epic collection of flubs, goofs, and gaffes from set.
  • Visionary Intro – Intro by directors Joe and Anthony Russo.
  • Audio Commentary – Audio commentary by directors Anthony and Joe Russo, and writers Christopher Markus and Stephen McFeely.


Jake Schreier’s Real-time Music Video at The Mill LA


The Mill LA collaborated with Norwegian musician Cashmere Cat and renowned director Jake Schreier on the groundbreaking music video 'Emotions'. The result is an advanced application of real-time technology that pushes the boundaries of creativity and innovation and allows for a 'one-take, real-time clip' that is the first of its kind.


The Mill was initially tasked with crafting an original CG character to represent Cashmere Cat in future music videos and appearances. The design team crafted a creature inspired by multiple references, including Nordic folklore, Japanese anime, Fortnite, and early-aughts gaming. The music video for the track 'Emotions' introduces the fairytale feline and her fantasy world. In order to facilitate a one-take approach within a fully-CG environment, The Mill's artists and technologists executed an ambitious real-time rendered shooting technique.

Margaret Qualley and Jake Schreier rehearse.

Jake Schreier (Robot & Frank, Paper Towns) directed the clip. It was choreographed and performed by Margaret Qualley, with MoCap & VFX by The Mill LA for production company Park Pictures NY.

Schreier is known for his highly-choreographed storytelling that takes place within a single take, previously seen in music videos such as Chance the Rapper's 'Same Drugs', Haim's 'Want You Back' and numerous Francis and the Lights releases. Schreier also famously directed the incredible one-take for Shaina's sequence in Episode 3 of Michel Gondry's (executive producer) show Kidding. In that show, guest star Riki Lindhome plays Shaina, a woman who is inspired to turn her life around after watching an episode of "Mr. Pickles' Puppet Time," the kids' show hosted by Jim Carrey's character. In one take, viewers see Lindhome's world evolve as she renovates her apartment, starts exercising, invites friends over and celebrates her new life. Behind the scenes, Schreier directed the Kidding crew to physically transform the set multiple times in real-time.


Cashmere Cat was filmed on a motion-capture stage, while a state-of-the-art virtual production pipeline delivered a seamless visualisation between the real world and the simulated one. A display screen rendered the CG character, landscape, props and structures in real-time. This allowed the director, in effect, to direct the CG character rather than the live-action performer as she moved through the stylized world while being shot with a handheld camera. It also allowed the filmmakers to adjust the character's actions, the lighting, and even environmental textures, all while still on set.
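
Conceptually, the handheld 'virtual camera' loop works by copying the tracked pose of the physical camera onto the camera rendering the CG world every frame, so whatever the operator frames in the real world is what gets rendered. The Python sketch below is a deliberately simplified stand-in for that idea; read_tracker_pose(), VirtualCamera and render_frame() are hypothetical placeholders, not The Mill's or Epic's actual API.

```python
import time

class VirtualCamera:
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.rotation = (0.0, 0.0, 0.0)

    def set_transform(self, position, rotation):
        self.position, self.rotation = position, rotation

def read_tracker_pose():
    """Stand-in for a mocap/camera-tracking stream (e.g. received over the network)."""
    t = time.time()
    return (t % 5.0, 1.7, -4.0), (0.0, (t * 10.0) % 360.0, 0.0)

def render_frame(camera):
    """Stand-in for the real-time renderer drawing the CG scene from this camera."""

camera = VirtualCamera()
frame_time = 1.0 / 24.0
for _ in range(240):                               # ten seconds at 24 fps
    position, rotation = read_tracker_pose()
    camera.set_transform(position, rotation)       # virtual camera follows the handheld rig
    render_frame(camera)
    time.sleep(frame_time)
```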


Combining Epic's UE4 game engine with The Mill's creative team, designers and artists gave the filmmakers a fully interactive and immersive filmmaking experience.

Schreier explains, "When Magnus first described what he was looking for to me, he said he 'wanted to disappear.' He also was making a record that was very influenced by video games, so the thought of building him a video game avatar seemed fun. We started with older references, but his music isn't a retro version of video game music at all, so ultimately a more modern approach felt right. Magnus and I had been playing a ton of Fortnite together, so when The Mill proposed working in the Unreal Engine, it was a natural fit. It was interesting for me to try and meld some of the principles from the minimalist live performance videos I usually work on with a more traditionally maximalist, animated world. It's hard to explain how fun it is to have a camera in your hands that can see live into a virtual space. Also, Margaret gave a great performance worth capturing."

Magnus Høiberg aka Cashmere Cat, adds, “We came to The Mill with a strange idea of a beautiful virtual cat, and I had so much fun working with them to bring her to life.”

Aurelien Simon, Executive Producer, Emerging Technology at The Mill in Los Angeles, commented, "This reimagined approach to filmmaking still allows for traditional cinematic methods. Using this method, creative partners are able to see iterations, make changes and experiment all in real-time. In this project, environment, creative, physics, and technology all combine to create a truly immersive experience."

Ben Lumsden, Business Development Manager, Unreal Engine Enterprise at Epic Games, adds, “It’s great to see The Mill pushing virtual production in this way. Princess Catgirl represents a shift in music video production, and both Jake Schreier and Cashmere Cat have shown fearless creativity for this innovative result.”

Images from the clip and Jake Schreier's Instagram.


Conversational AI Advances


The Mill's Mascot program uses iPhone face tracking to power a real-time creature, including fur with secondary animation.

At fxguide we tend to focus mainly on visual effects, but the tools of animation, simulation, and real-time engines are allowing companies and artists to expand into areas not traditionally the domain of an old-school 'post house'. There have been some impressive examples of this, such as the real-time digital puppets from The Mill. Framestore has been doing brilliant work with Magic Leap in AR, and DNEG, ILM and Weta all have teams exploring work away from traditional VFX.

With the growth of mobile, people have been shown to want experiences and real-time interaction. Adobe recently published findings that 40% of consumers want to receive real-time offers and deals from chatbots. Many medium-sized post houses have seen business that was once channelled into high-end TVC production now being channelled into experience-driven e-commerce.

A key aspect of these new forms of entertainment and interaction is natural language interaction. Already many of us use Siri and Alexa on a daily basis, and the quality and error rates have improved greatly over the last few years. Alexa, in particular, is remarkably good at understanding a wide variety of commands and instructions. Good conversational AI uses context and nuance; the responses seem instantaneous, but to achieve this the models need to be very large and run in real-time.

NVIDIA has taken another step forward in this area with some record-breaking real-time conversational AI. It has done this as part of its Project Megatron-LM, an ongoing research effort into Natural Language Processing (NLP). One of the latest advancements in NLP, and a hot area of research, is transformer models. These language models are currently the state of the art for many tasks including article completion, question answering, and dialog systems. The two most famous transformers are Bidirectional Encoder Representations from Transformers (BERT) and GPT-2. NVIDIA's Project Megatron-LM is an 8.3 billion parameter transformer language model with 8-way model parallelism and 64-way data parallelism, trained on 1,472 Tesla V100-SXM3-32GB GPUs across 92 DGX-2H (DGX SuperPOD) servers, making it the largest transformer model ever trained.
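
To unpack what 'model parallelism' means here: a single layer's weights are split across devices, each device computes its slice of the output, and the slices are gathered back together. The NumPy toy below illustrates the column-split idea conceptually; it is not Megatron-LM's implementation, and the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out, num_devices = 4, 1024, 4096, 8

x = rng.normal(size=(batch, d_in))     # the same activations are available on every device
W = rng.normal(size=(d_in, d_out))     # the full layer weight, conceptually too big for one device

# Each "device" holds only its column slice of W.
W_shards = np.split(W, num_devices, axis=1)            # 8 shards of shape (1024, 512)
partial_outputs = [x @ shard for shard in W_shards]    # computed independently, in parallel
y_parallel = np.concatenate(partial_outputs, axis=1)   # gather the slices back together

# Data parallelism would additionally give each group of devices a different
# slice of the batch and average the resulting gradients.
assert np.allclose(y_parallel, x @ W)
```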

Google with Transformer and then BERT, Microsoft with MT-DNN, Alibaba with their Enriched BERT base, and Facebook with their RoBERTa technology have all advanced conversational AI and sped up processing over the last couple of years.

NVIDIA's AI platform is now able to train one of the most advanced AI language models, BERT, in less than an hour (53 minutes) and complete AI inference in just over 2 milliseconds. This is well under the 10-millisecond processing threshold for many real-time applications, and a lot less than the 40-plus milliseconds often seen in some CPU server implementations.
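
For a sense of what that inference workload looks like in code, here is a small sketch that times a BERT question-answering query using the open-source Hugging Face transformers library. It is purely illustrative of the task being measured; it does not use NVIDIA's optimized GPU inference path, so latencies on ordinary hardware will be far higher than the figures quoted above.

```python
import time
from transformers import pipeline

# Load an off-the-shelf BERT model fine-tuned for question answering.
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "NVIDIA trained BERT-Large in 53 minutes and ran inference in just over "
    "two milliseconds on its AI platform."
)
question = "How long did BERT-Large take to train?"

start = time.perf_counter()
answer = qa(question=question, context=context)
elapsed_ms = (time.perf_counter() - start) * 1000.0
print(f"{answer['answer']!r} in {elapsed_ms:.1f} ms")
```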

Some forms of conversational AI services have existed for several years, but until now it has been extremely difficult for chatbots, intelligent personal assistants and search engines to operate with human-level comprehension, due to the inability to deploy extremely large AI models in real-time. The issue is one of both training and latency. NVIDIA has addressed this problem by adding key optimizations to its AI platform, achieving speed records in AI training and inference and building the largest language model of its kind to date.


Early adopters of NVIDIA's performance advances include Microsoft and a set of young, innovative startups, which are harnessing NVIDIA's platform to develop highly intuitive, immediately responsive language-based services for their customers. AI services powered by natural language understanding are expected to grow exponentially in the coming years. Digital voice assistants alone are anticipated to climb from 2.5 billion to 8 billion within the next four years, and we will see more conversational controls in smart TVs, smart speakers, and wearables. Additionally, Gartner predicts that by 2021, 15% of all customer service interactions will be completely handled by AI, an increase of 400% from 2017.

The good news is that NVIDIA's work is now on GitHub and accessible; it will be interesting to see how it is harnessed to produce new and 'sticky' user experiences in the months and years ahead.
