Integrating Light and Shadow Process

For our first project in technical direction for compositing, we lit, rendered, and composited some objects into a scene using a match-to-live setup we created. Now, I’ll be the first to say my knowledge of lighting is a bit sparse; I know some basics from film classes and our very generalist CG education here, but a match-to-live setup seemed a bit daunting! I also had no idea how to set up render layers before this project, so that was another hurdle in my way going in.

Here’s a short breakdown of my process, warts and all, as I learned all of this! To keep this blog less cluttered, I’ll keep it all in this one post.

The first step was the photography. My camera’s auto-exposure bracketing wasn’t as flexible as I would have liked, but c’est la vie! I still got multiple exposures, just not as many as a typical HDR is built from.

Here are the images I took.

I also shot an HDR using a chrome ball at multiple exposures, but the final file is too big for Squarespace, so above is just one of the seven exposures I took. Don’t worry, I at least know a single exposure isn’t really an HDR!
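For anyone curious what the merge step actually does, here’s a minimal sketch of combining bracketed exposures into one HDR using OpenCV in Python. The filenames and shutter speeds are made up, not my actual shots:

```python
import cv2
import numpy as np

# Hypothetical bracketed shots and shutter speeds; substitute your own.
files = ['ball_ev-2.jpg', 'ball_ev0.jpg', 'ball_ev+2.jpg']
exposure_times = np.array([1/250, 1/60, 1/15], dtype=np.float32)

images = [cv2.imread(f) for f in files]

# Recover the camera's response curve from the brackets,
# then merge everything into one floating-point radiance map.
response = cv2.createCalibrateDebevec().process(images, exposure_times)
hdr = cv2.createMergeDebevec().process(images, exposure_times, response)

# Radiance .hdr keeps the full float range that a display JPEG can't.
cv2.imwrite('chrome_ball.hdr', hdr)
```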

The next step was to match the perspective and lighting. This took me a while, since I was pretty picky about making sure the grid matched. Unfortunately, I think my sensor crop, and therefore my lens data, was a little off, but I’ve noted it and I’m correcting it in the next project.
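In hindsight, the mismatch makes sense: if the CG camera assumes a full-frame film back but the photos came off a crop sensor, the field of view won’t line up. Here’s a quick sketch of how much sensor width matters for the same lens (the numbers are placeholders, not my actual camera specs):

```python
import math

# Placeholder numbers for illustration only.
focal_length_mm = 18.0   # lens focal length
sensor_width_mm = 23.5   # APS-C sensor width; full frame would be 36.0

# Horizontal field of view from the pinhole camera model
h_fov = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))
print(f'Horizontal FOV: {h_fov:.1f} degrees')

# Crop factor relative to full frame, and the full-frame equivalent focal length
crop_factor = 36.0 / sensor_width_mm
print(f'Crop factor: {crop_factor:.2f}, '
      f'equivalent focal length: {focal_length_mm * crop_factor:.1f} mm')
```

With those numbers, the same 18mm lens behaves like roughly a 28mm on full frame, which is exactly the kind of offset that makes a perspective grid refuse to line up.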

Here’s a little slideshow of the basic steps I took to set up my scene.

I also projected my HDR through a surface shader to capture the diffuse light of the room! After this, I had to tackle render layers.
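If you want to see what that setup looks like in script form, here’s a rough maya.cmds sketch of the idea: the HDR goes into a file texture wired to a surfaceShader, which ignores scene lighting, so the image’s values come through as-is. All of the names and the path are placeholders:

```python
import maya.cmds as cmds

hdr_path = 'sourceimages/room.hdr'  # placeholder path

# A large sphere surrounding the set acts as the environment geometry.
env_sphere = cmds.polySphere(name='envSphere', radius=1000)[0]

# surfaceShader doesn't respond to lights, so the HDR passes through unshaded.
shader = cmds.shadingNode('surfaceShader', asShader=True, name='hdrSurface')
sg = cmds.sets(renderable=True, noSurfaceShading=True, empty=True,
               name='hdrSurfaceSG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader', force=True)

# Wire the HDR file texture into the shader's color.
tex = cmds.shadingNode('file', asTexture=True, name='hdrFile')
cmds.setAttr(tex + '.fileTextureName', hdr_path, type='string')
cmds.connectAttr(tex + '.outColor', shader + '.outColor', force=True)

cmds.sets(env_sphere, edit=True, forceElement=sg)
```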

For me, render layers were the eighth wonder of the world: some kind of wizardry lighters perform by casting spells on their shaders, which I then get to composite into a nice final beauty pass without ever questioning anything. Fortunately, the mysticism is broken for me now! I understand it! Although it took a lot of studying example files, taking notes, and sitting hunched over Lynda videos like an ancient monk over his manuscripts.
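For the record, the spell turns out to be pretty mundane: a render layer is just a named collection of objects that can carry its own material and render-setting overrides. Here’s a tiny maya.cmds sketch of the legacy render layer version, with made-up object and pass names:

```python
import maya.cmds as cmds

pumpkins = ['pumpkin1', 'pumpkin2']  # placeholder object names

# One legacy render layer per pass; overrides (shaders, visibility,
# render settings) get applied per layer after creation.
for pass_name in ['diffuse', 'specular', 'shadow', 'occlusion']:
    cmds.createRenderLayer(pumpkins, name=pass_name + '_layer',
                           makeCurrent=True)

# Switch back to the master layer when you're done.
cmds.editRenderLayerGlobals(currentRenderLayer='defaultRenderLayer')
```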

Our first step in checking our render layers and lighting was to render a grey ball to match the one we photographed. Here’s a comparison of my render to the real grey ball, along with my render layer setup.

For this project we weren’t supposed to use any overly complex materials, so I chose to put some pumpkins in my scene! For my big gourd babies, I ended up rendering seven layers, give or take; some I didn’t end up using in the final comp.

And here I’ll admit that I needed one more layer I didn’t actually render: a layer for ambient color! I baked the color into my diffuse pass, which should have carried only the diffuse lighting, not the diffuse color. At least, I think so. I’ll experiment and get back to you.
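To make that separation concrete, here’s the recombination model I was aiming for, sketched in NumPy with dummy arrays standing in for real passes: the color (albedo) multiplies the lighting passes, and the view-dependent passes add on top. As I understand it, that’s why color baked into the diffuse pass causes trouble; multiply it by a color pass and the albedo gets applied twice.

```python
import numpy as np

h, w = 4, 4  # tiny dummy resolution, just for illustration
rng = np.random.default_rng(0)

diffuse_color = rng.random((h, w, 3))        # albedo / ambient color pass
diffuse_light = rng.random((h, w, 3))        # diffuse lighting only
ambient       = rng.random((h, w, 3)) * 0.1  # flat ambient contribution
specular      = rng.random((h, w, 3)) * 0.2
reflection    = rng.random((h, w, 3)) * 0.2

# Color multiplies the lighting; view-dependent passes add on top.
beauty = diffuse_color * (diffuse_light + ambient) + specular + reflection
```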

Here are all my labeled render passes of my pumpkins!

I’ll admit, by the time I had rendered a whole animation of these pumpkins with all my render layers, it was getting pretty dang close to the deadline. So I didn’t get to spend much time in Nuke at all; I even pulled a dumb move and rendered my final as a .mov instead of EXRs, and got some terrible scanlines! Yikes!!
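For the resubmit, the fix is simply writing an EXR sequence straight out of Nuke instead of a .mov, so every frame keeps its float data without video compression. Here’s a minimal sketch using Nuke’s Python API, with a placeholder path and frame range:

```python
import nuke

# Placeholder path and frame range; adjust to the actual comp.
# The Write node still needs to be connected to the end of the node tree.
write = nuke.nodes.Write(name='Write_exr')
write['file'].setValue('renders/final_comp.####.exr')
write['file_type'].setValue('exr')
write['channels'].setValue('rgba')

# Render frames 1-120 through this Write node.
nuke.execute(write, 1, 120)
```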

Overall, I know I can do a lot better, but for my first time working with this process, it’s not terrible! The groundwork is there; it just needs that final polish to bring out all the work I put into it.

I’ll make a second post as I work on my resubmit for this project, but in the meantime, stay tuned for another breakdown of subsurface and reflectivity compositing!