Integrating With Animation and Motion Capture

 

For this final project, we were tasked with working with not only a moving camera, but an animated asset that we did not create. This was a lot more challenging than I expected! I'm confident in my camera tracking abilities, but I struggled to find an asset that fit the plates I shot.

I wanted to take more advantage of our location here in Savannah with my filming, so I went out to one of my favorite places: Bonaventure Cemetery. Don't worry, it's strictly historical now! I'm a lot of things, but disrespectful to the dead is not one of them. I hit it right at golden hour with some beautiful shadows, and I gotta say, I'm pretty happy with how these plates turned out! It bugged me pretty severely that my plates were so bland in the last projects.

Here’s a still image I shot of my location.

[Image: IMG_4193.png]

I also took my own 360 camera HDRs on location! There are a few tourists in it, but that's more funny than anything else. (This is also just one exposure of my HDR.)

[Image: SAM_100_0182.jpg]

The next step was to choose an animation! Well, technically we picked them out before shooting, but I ran into some issues I'll discuss in a second. Initially, I used motion capture data I obtained from my classmate, Austin Wright. His work can be found at https://acdafx.wixsite.com/austinwrightvfx. While there was nothing wrong with the data itself (nice, clean UVs that don't jitter and good mocap data), the geometry it was designed for was just too dissimilar to the objects in my scene. The model needed a flat surface to rest his hand on, since he's meant to be sitting on stairs, but the plate just doesn't have one that won't induce major clipping.

Here are a few screencaps to show you what I mean.

So yeah, that's a problem. Quite a few of my classmates used free animations from mixamo.com for this project, so that was where I figured I would have the most luck. I ran into similar problems with these animations as well: lots and lots of clipping, as well as action that just mismatched the plate.

For instance, I had a draft in which the model crawls through the scene, but the motion was far too quick and far too horizontal for this very diagonally shot plate. The animation I settled on was one of the model standing up, which fit the line of action of the plate perfectly. Granted, I did have to reverse the footage, but that's our secret! He still clips awfully in the hands and arms, but that's a problem for another day, and sometimes as compositors we just get bad assets. C'est la vie!

Another thing worth mentioning for this project, since I spent a day on it, is that I muddled around in Substance Painter for these textures! I am by no means a texture artist or look developer/lighter. I'm familiar with Mari, the process of UVing, and how to paint textures, but there's a reason I chose to focus on compositing: I'm not strong in texturing, and I don't find it very interesting in the first place. Long story short, I cobbled together some aged copper textures, and I'm pretty alright with them for a first-timer in Substance! I did get feedback that the oxidization should have more blue tones, though, and I agree.

Here’s the diffuse map I painted and a little render of my guy.

So, let's dig into the Nuke aspects of this project! This was a pretty simple plate to track, although I ended up deleting way too many markers on the foreground, and that made the end result look a little floaty. That's definitely the thing that bugs me most about this project, and the very first thing I want to fix.

The solve has an error of about 0.5, but the data loss to get there was just a really bad decision on my part. Here's what it looks like.
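As I understand it, CameraTracker can filter tracks by error in the UI, and conceptually that's what I should have done: cull against a threshold rather than hand-delete half my foreground. A tiny hypothetical sketch of the idea (the track names and numbers are made up):

    # Hypothetical (track name, RMS error in pixels) pairs from a solve.
    tracks = [
        ("fg_headstone_01", 0.31),
        ("fg_headstone_02", 1.74),
        ("bg_tree_01", 0.48),
        ("fg_grass_01", 2.10),
    ]

    ERROR_THRESHOLD = 1.0  # tune per shot instead of eyeballing deletions

    kept = [t for t in tracks if t[1] <= ERROR_THRESHOLD]
    rejected = [t for t in tracks if t[1] > ERROR_THRESHOLD]
    print("kept %d tracks, rejected %d" % (len(kept), len(rejected)))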

I also really wish I had more time to refine that shadow; it was challenging and, quite frankly, looks like garbage. There's a big hot spot right in the middle of the space between the long headstone shadow and the shadow of the leaves that made creating a shadow plate incredibly difficult. The saturation would be fine in one area but off in another, or the shadow detail would be perfect in one area and far too heavy in another. A roto would fix this pretty easily, I think.

I did roto out the shadow pass in the already-shadowed areas to avoid the dreaded double shadow, and I think that worked pretty effectively! It's just a garbage matte, so it makes me itch that all the grass detail is lost, but time constraints definitely came into play there.
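For anyone curious about the wiring, here's a minimal Nuke Python sketch of that double-shadow fix. The Read paths are placeholders; a stencil merge knocks the CG shadow out of the roto'd areas before it gets multiplied into the plate.

    import nuke

    plate = nuke.nodes.Read(file="plate.####.exr")       # placeholder paths
    shadow = nuke.nodes.Read(file="shadowPass.####.exr")
    holdout = nuke.nodes.Roto()  # garbage matte over the plate's own shadows

    # B stencil A: remove the CG shadow wherever the plate is already shaded.
    stencil = nuke.nodes.Merge2(operation="stencil")
    stencil.setInput(0, shadow)   # input 0 is B
    stencil.setInput(1, holdout)  # input 1 is A

    # Multiply the cleaned shadow into the plate.
    comp = nuke.nodes.Merge2(operation="multiply")
    comp.setInput(0, plate)
    comp.setInput(1, stencil)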

I also used an edge noise gizmo to try and break up the edges of my perfectly symmetrical, flat shadow, but given time, I think I would have preferred to put a displacement on my ground plane proxy and render the shadow out with irregularities in the first place. Nuke is wonderful, but she just doesn't have the Z data to support something like that unless I rendered deep data, and even then, I'm not sure a gizmo would support it.
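In place of the gizmo, one crude comp-side stand-in for that displaced-ground irregularity is to modulate the shadow matte with a Noise node. A proper version would isolate just the edge first, but this sketch shows the basic idea; the node types are stock Nuke, and the wiring is just one option:

    import nuke

    shadow = nuke.toNode("ShadowPass")   # assumes an existing node
    soft = nuke.nodes.Blur(size=8)       # soften the matte edge a touch
    soft.setInput(0, shadow)

    breakup = nuke.nodes.Noise(size=40)  # procedural breakup pattern
    mult = nuke.nodes.Merge2(operation="multiply")
    mult.setInput(0, soft)
    mult.setInput(1, breakup)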

Here are a couple screencaps of my node tree as well! Same as last project, the AOV splitter is a tool developed by my classmate, Sasha Ouellet, whose work can be found at http://www.sashaouellet.com/. (He has also released a MUCH more refined version since then, but I haven't had the time to download it yet.) I also made the occlusion node setup into a toolset, because I always forget how the setup works. Don't get me wrong, I understand the merge math there, but I'd rather save myself the time and just load it in.
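Loading it back in is a one-liner, since toolsets are just saved .nk snippets (the path below is hypothetical):

    import nuke

    # Paste the saved occlusion setup into the current script.
    nuke.loadToolset("/path/to/toolsets/occlusionSetup.nk")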

Compositing Subsurface and Transmissive Shaders

 

For our second project, we studied the refraction of light through different gemstones. Everyone in the class had a different stone, from amethyst to selenite to tiger's eye, each one presenting a different challenge in replicating its look!

The gemstone I picked was a perfectly polished rose quartz, with some pretty ribbons of different shades of quartz inside of it.

As with the first project and any match-to-live task, we had to take pictures with our integration kit! Two of my friends and I got together to shoot some small still lifes to put our models in. We were allowed to take video for this project if we were familiar with tracking, but I chose not to so that I could focus on the aspects I didn't know how to do. I have plenty of experience with difficult tracks anyway!

Here are the images I used for this project, minus the HDR. Pictured is a single exposure.

Again, I'm not particularly thrilled with the lighting of this setup; it's not very dynamic. I really need to get more exploratory with my photography! I also tend to have a pretty light hand in everything from drawing to setting up lighting, so sometimes images I make come out a bit flat like this. It's something I acknowledge and am trying to work on!

Anyways, here’s my grey ball render in comparison to the real world grey ball.

So, setting up this lighting was a bit of a challenge for me. The HDR handled a decent portion of the color and light, but the windows behind the setup and the string lights had a hard time coming through in the indirect lighting. So I tossed in two blue-toned area lights where the windows would be, plus a yellow-toned light that acted as the key. I also tinted the plane the ball sat on the same yellow as the table runner to get that yellow bounce light on the underside of the ball.
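If you wanted to block that in with script rather than the UI, it would look roughly like this in maya.cmds; the light names, positions, and colors are eyeballed stand-ins, not my actual scene values.

    import maya.cmds as cmds

    # Warm key light standing in for the main practical source.
    key = cmds.directionalLight(name="keyWarm", rgb=(1.0, 0.85, 0.6))

    # Two cool area lights where the windows sit behind the setup.
    for i, pos in enumerate([(-3.0, 2.0, -4.0), (3.0, 2.0, -4.0)]):
        shape = cmds.shadingNode("areaLight", asLight=True,
                                 name="windowBlue%d" % (i + 1))
        cmds.setAttr(shape + ".color", 0.55, 0.65, 1.0, type="double3")
        transform = cmds.listRelatives(shape, parent=True)[0]
        cmds.xform(transform, translation=pos, worldSpace=True)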

The shadows are pretty soft yet defined in this image, and avoiding a long stripe across the ball from the spotlight was difficult. I counterbalanced that by letting the indirect light make up for the softness of the key. I also kept the shadow of the ball relatively defined, like the reference plate. Looking at it now, I see a bit of a double shadow, but the same phenomenon happens where the vases in the left corner cast shadows; the underside of that ruffle just amplifies the shadow stretching over it.

I also figured out the occlusion!!! I was really excited about that! I was essentially missing a step last time: rather than having a color corrected full shadow plate with an additional mask of correction in the occlusion zone, I was only multiplying my shadow by the mask of the occlusion, which just shrunk the shadow. But I get it now!! Two color corrections. Although mine might be a bit dark in the occlusion.

Here’s my node tree for that.
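In script form, the two-correction idea looks roughly like this; the node and knob names are stock Nuke, but the values and node labels are placeholders.

    import nuke

    shadow_plate = nuke.toNode("ShadowPlate")  # assumes existing nodes
    occlusion = nuke.toNode("OcclusionPass")

    # First correction: overall density of the full shadow plate.
    grade_full = nuke.nodes.Grade(multiply=0.6)
    grade_full.setInput(0, shadow_plate)

    # Second correction: extra darkening, limited to the occlusion zone
    # through the Grade's mask input (input 1).
    grade_contact = nuke.nodes.Grade(multiply=0.75)
    grade_contact.setInput(0, grade_full)
    grade_contact.setInput(1, occlusion)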

I also wanted to work on my node tree layout. I try to keep a pretty linear workflow, but I wasn't sure how to properly lay out my CG comps. After studying a few of Nuke's example files, I think this is a little closer. Still not perfect by any means, but much neater than my workflow before. I tend to follow my own internal logic of B PIPE DOWN, NO DIAGONAL LINES EVER a little too strictly sometimes. What can I say, I'm a neurotic organizer.

So, the shaders! Phew, we’re getting there, I promise. Long post, stick with me!

I tried a few different methods of balancing the subsurface and transmissive properties. I started with a mix shader that relied heavily on subsurface, since rose quartz isn't particularly clear. To get the ribbons of white and pink quartz, I used a procedural noise/marble texture in Maya. I stretched and edited it to look similar to the lines in the stone, and connected a ramp to the color to get all the gradations. I'm pretty alright with the result!
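Rebuilt from memory, the ribbon network is basically a marble texture stretched through its placement node and remapped through a ramp; the attribute names are standard Maya, but the values are guesses.

    import maya.cmds as cmds

    marble = cmds.shadingNode("marble", asTexture=True, name="quartzRibbons")
    place = cmds.shadingNode("place3dTexture", asUtility=True)
    cmds.connectAttr(place + ".worldInverseMatrix[0]",
                     marble + ".placementMatrix")
    cmds.setAttr(place + ".scaleY", 4.0)  # stretch the veins vertically

    # Remap the marble's alpha through a ramp holding the pink/white tones.
    ramp = cmds.shadingNode("ramp", asTexture=True, name="quartzColors")
    cmds.connectAttr(marble + ".outAlpha", ramp + ".vCoord")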

To get the clear shininess of the outer layer, I rendered out a separate transmission pass that looked like hazy pink glass. I then rendered out a Fresnel mask by making a facing-ratio-based shader that faded to red as it got closer to the edge of the object. Here are some renders of the subsurface, transmission, and Fresnel shaders.
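That facing-ratio mask is the classic samplerInfo trick: facingRatio reads 1 head-on and 0 at grazing angles, and a ramp remaps that into the red edge fade. A minimal sketch, with my own node names:

    import maya.cmds as cmds

    info = cmds.shadingNode("samplerInfo", asUtility=True)
    ramp = cmds.shadingNode("ramp", asTexture=True, name="fresnelRamp")
    mask = cmds.shadingNode("surfaceShader", asShader=True, name="fresnelMask")

    # Red at the edges (facingRatio near 0), black facing the camera.
    cmds.setAttr(ramp + ".colorEntryList[0].position", 0.0)
    cmds.setAttr(ramp + ".colorEntryList[0].color", 1, 0, 0, type="double3")
    cmds.setAttr(ramp + ".colorEntryList[1].position", 1.0)
    cmds.setAttr(ramp + ".colorEntryList[1].color", 0, 0, 0, type="double3")

    cmds.connectAttr(info + ".facingRatio", ramp + ".vCoord")
    cmds.connectAttr(ramp + ".outColor", mask + ".outColor")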

I also rendered out the standard shadow and occlusion passes, but those aren’t too exciting.

Also, this model can be found at threedscans.com. Lots of people used the Stanford models, but I like being able to use photogrammetry, and I fell in love with these funky little lions! Aren't they cute?

For this project, I also decided to use AOVs rather than manually splitting render layers, and I gotta say, I enjoy that a lot more. I prefer having all my layers in one file to render all at once, and I didn't find it limiting as far as control over what goes into the layers. But I know every artist prefers a different setup, and I'm happy to know both.

I rendered out direct, indirect, specular, indirect specular, subsurface, subsurface albedo, and point passes. I of course didn't render the subsurface passes on my transmission shader; that would be pointless.
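The nice sanity check with a split like this is that plussing the lighting AOVs back together should land you at the beauty. Here's a rough Nuke Python version of that rebuild; the layer names mirror the passes above, but the exact names depend on the renderer's AOV setup, and the Read node is hypothetical.

    import nuke

    read = nuke.toNode("LionSubsurface")  # hypothetical Read node
    layers = ["direct", "indirect", "specular",
              "specular_indirect", "subsurface"]

    merged = None
    for layer in layers:
        shuffle = nuke.nodes.Shuffle(label=layer)
        shuffle["in"].setValue(layer)  # pull one AOV layer out of the EXR
        shuffle.setInput(0, read)
        if merged is None:
            merged = shuffle
        else:
            plus = nuke.nodes.Merge2(operation="plus")
            plus.setInput(0, merged)
            plus.setInput(1, shuffle)
            merged = plus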

Well, actually, I didn't end up rendering any AOVs on my transmission pass, purely by accident. I had removed the AOVs to render the shadow and Fresnel masks on my home machine while the heavier subsurface pass rendered on the render farm, but forgot to turn them back on before I tossed the transmission shader on the farm. Luckily, it didn't make a difference, since all the specular data was in the subsurface pass and the transmission was such a small part of my final comp. Just thought you should know that this is why it's not in my final Nuke script.

[Image: lionTree.JPG]

All that was left was a little bit of color correcting to accommodate the heavy yellow and blue tones in the scene, and a buildup! All done. I'm significantly happier with this composite than with my very first one, and that's really exciting! That means I'm learning. It was just so much fun to try and capture all the nuance of this setup, from lighting to shader to comp.

I’m excited to see what the critique is for this project so I can push it over into the realm of photoreal!

Integrating Light and Shadow Process

 

For our first project in technical direction for compositing, we lit, rendered, and composited some objects into a scene using a match-to-live setup we created. Now, I'll be the first to say my knowledge of lighting is a bit sparse. I know some basics from film classes and our very generalist CG education here, but a match to live seemed a bit daunting! I also had no idea how to set up render layers prior to this project, so that was another hurdle in my way at the start.

Here’s a short breakdown of my process, warts and all, while learning all of this! To keep this blog less cluttered, I’ll keep it all in this one post.

The first step here was the photography. My camera’s auto exposure bracketing wasn’t as advanced as I would have liked, but c’est la vie! I still got multiple exposures, just not as many as a typical HDR contains.

Here are the images I took.

I also took an HDR using a chrome ball at multiple exposures, but the final file is too big for Squarespace. Above is just one exposure of the seven I took. Don't worry, I at least know that much!
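For anyone curious how those brackets become one HDR, OpenCV's Debevec merge is a compact way to do it offline; the filenames and shutter times below are placeholders.

    import cv2
    import numpy as np

    files = ["brkt_01.jpg", "brkt_02.jpg", "brkt_03.jpg"]
    times = np.array([1 / 125.0, 1 / 30.0, 1 / 8.0], dtype=np.float32)

    images = [cv2.imread(f) for f in files]

    # Recover the camera response curve, then merge to linear radiance.
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    cv2.imwrite("merged.hdr", hdr)  # Radiance .hdr keeps the float range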

The next step was to match the perspective and lighting. This took me a while, since I was pretty picky about making sure the grid matched. Unfortunately, I think a little bit of my sensor crop, and therefore my lens data, was off, but I noted it and am implementing that change in the next project.

Here’s a little slideshow of the basic steps I took to set up my scene.

I also projected my HDR as a surface shader to get the diffuse light of the room! After this, I had to tackle render layers.

For me, render layers were the eighth wonder of the world: some kind of wizardry that lighters get by casting spells on their shaders, which I then get to composite into a nice final beauty pass without ever questioning anything. Fortunately, the mysticism is broken for me now! I understand it! Although it took a lot of studying example files, taking notes, and sitting hunched over Lynda videos like an ancient monk over his manuscripts.

Our first step in checking our render layers and lighting was to render a grey ball to match the one we photographed. Here is a comparison of my render to the real grey ball, along with my render layer setup!

For this project we weren't supposed to use any material that was too complex, so I chose to put some pumpkins in my scene! For my big gourd babies, I ended up rendering about seven layers, give or take. Some I didn't end up using in the final comp.
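For reference, the scripted version of a legacy render layer setup like this is pretty small; the object and layer names here are mine, not from my actual scene.

    import maya.cmds as cmds

    # Grab the pumpkin transforms and give each pass its own layer.
    pumpkins = cmds.ls("pumpkin*", transforms=True)

    for pass_name in ["diffuse", "specular", "shadow", "occlusion"]:
        cmds.createRenderLayer(pumpkins, name=pass_name + "_layer",
                               makeCurrent=False)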

And here I'll preface that I needed one more layer that I didn't actually render: a layer for ambient color! I rendered color out of the diffuse pass, which should have been reserved for diffuse lighting, not diffuse color. At least I think so. I'll experiment and get back to you.

Here are all my labeled render passes of my pumpkins!

I'll admit, by the time I had rendered a whole animation of these pumpkins with all my render layers, it got pretty dang close to the deadline. As a result, I didn't get to spend much time in Nuke at all; I even pulled a dumb move and rendered my final as a .mov instead of EXRs and got some terrible scan lines! Yikes!!
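For next time, here's the Write node setup I should have used; the path is a placeholder, but a padded EXR sequence avoids both the scan lines and the baked-in compression.

    import nuke

    write = nuke.nodes.Write(file="renders/final_v001.####.exr",
                             file_type="exr")
    write["channels"].setValue("rgba")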

Overall, I know I can do a lot better, but for my first time working with this process, it’s not terrible! The groundwork is there, it just needs that final polish to bring out all the work I put into it.

I’ll make a second post as I work on my resubmit for this project, but in the meantime, stay tuned for another breakdown of subsurface and reflectivity compositing!