Integrating With Animation and Motion Capture


For this final project, we were challenged to work with not only a moving camera, but an animated asset that we did not create. It turned out to be a lot harder than I expected! I'm confident in my camera tracking abilities, but I struggled a lot with finding an asset that fit the plates I shot.

I wanted to take more advantage of our location here in Savannah with my filming, so I went out to one of my favorite places: Bonaventure Cemetery. Don't worry, strictly historical now! I'm a lot of things, but disrespectful to the dead is not one of them. I hit it right at golden hour with some beautiful shadows, and I gotta say, I'm pretty happy with how these plates turned out! It bugged me pretty severely that my plates were so bland in the last few projects.

Here’s a still image I shot of my location.

IMG_4193.png

I also took my own 360 camera HDRs on location! There are a few tourists in them, but that's more funny than anything else. (This is also just one exposure of my HDR.)

SAM_100_0182.jpg

The next step was to choose an animation! Well, technically we picked them out before shooting, but I ran into some issues I'll discuss in a second. Initially, I used motion capture data I obtained from my classmate, Austin Wright. His work can be found at https://acdafx.wixsite.com/austinwrightvfx. While there was nothing wrong with the data itself (nice, clean UVs that don't jitter, and good mocap data), the geometry it was designed for was just too dissimilar to the object in my scene. The model needed a flat surface to lay his hand on, since he's meant to be sitting on stairs, but the plate just doesn't have one that won't induce major clipping.

Here’s a few screencaps to show you what I mean.

So yeah, that's a problem. Quite a few of my classmates used free animations from mixamo.com for this project, so that was where I figured I would have the most luck. I ran into similar problems with these animations, too: lots and lots of clipping, and action that just didn't match the plate.

For instance, I had a draft in which the model crawls through the scene, but the motion was far too quick and far too horizontal for this very diagonally shot plate. The animation I settled on was one of the model standing up, which fit the line of action of the plate perfectly. Granted, I did have to reverse the footage, but that's our secret! He still clips awfully in the hands and arms, but that's a problem for another day, and sometimes as compositors we just get bad assets. C'est la vie!

Another thing worth mentioning for this project, since I spent a day on it, is that I muddled around in Substance Painter for these textures! I am by no means a texture artist or look developer/lighter: I'm familiar with Mari, the process of UVing, and how to paint textures, but there's a reason I chose to focus on compositing. I'm not strong in texturing, and I don't find it all that interesting in the first place. Long story short, I cobbled together some aged copper textures, and I'm pretty alright with them for a first-timer in Substance! I did get feedback that the oxidization should have more blue tones, though, and I agree.

Here’s the diffuse map I painted and a little render of my guy.

So, let's dig into the Nuke aspects of this project! This was a pretty simple plate to track, although I ended up deleting way too many markers on the foreground, and that made the end result look a little floaty. That's definitely something that's bugging me a lot about this project, and the very first thing I want to fix about it.

It's got an error level of about 0.5, but the data loss to get there was just a really bad decision on my part. Here's what it looks like.

I also really wish I had more time to refine that shadow; it was challenging and quite frankly looks like garbage. There's a big hot spot right in the middle of the space between the long headstone shadow and the shadow of the leaves, which made creating a shadow plate incredibly difficult. The saturation would be fine in one area but off in another, or the shadow detail would be perfect in one area and far too heavy in another. A roto would fix this pretty easily, I think.

I did roto out the shadow pass in the shadowed areas to avoid the dreaded double shadow, and I think that worked pretty effectively! It’s just a garbage matte, so it’s making me itch that all the grass detail is lost, but time constraints definitely came into play with that.

I also used an edge noise gizmo to try and break up the edges of my perfectly symmetrical and flat shadow, but given time, I think I would have preferred to put a displacement on my ground plane proxy and render the shadow out with irregularities in the first place. Nuke is wonderful, but she just doesn't have the Z data to support something like that unless I rendered deep data. Even then, I'm not sure a gizmo would support it.

Here are a couple screencaps of my node tree as well! Same as last project, the AOV splitter is a tool developed by my classmate, Sasha Ouellet, whose work can be found at http://www.sashaouellet.com/. (He has also released a MUCH more refined version since then, but I haven't had the time to download it yet.) I also made the occlusion node setup into a toolset, 'cause I always forget how the setup works. Don't get me wrong, I understand the merge math there, but I'd rather save myself the time and just load it in.
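(If you've never used toolsets from Python, by the way, loading one back in is a one-liner. The path below is hypothetical; it's just wherever your toolset got saved.)

```python
import os
import nuke

# Toolsets saved through the UI land in ~/.nuke/ToolSets by default.
# The file name here is hypothetical -- whatever you saved the setup as.
nuke.loadToolset(os.path.expanduser('~/.nuke/ToolSets/occlusionSetup.nk'))
```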

Delirium: First Steps


Since I’m in my final year here at SCAD, I’ve been working my way through my senior studio class. We have two quarters to basically work on films and create works that show the culmination of all our studies here.

Quite honestly, I started off a little shakily. I love doing research and development, learning new techniques, and expanding my tool belt of random skills, but that came back to bite me: rather than showing off what I can do, I attempted to tackle everything I don’t know how to do.

I’ll make a post at a later date explaining some of the photogrammetry and compositing exercises I worked on, but for now, let me focus on my new baby project!

I am a big horror fan: the way we scare each other is such a complex and nuanced science, one I'm always trying to understand and utilize. I've looked to modern masters like Junji Ito, Hideo Kojima, and Guillermo del Toro, as well as a host of others, to study the fine line between horrifying and hokey.

As a compositor with a love for film, naturally I wanted to explore my own narrative capabilities using all my passions. So, long story short, I began work on Delirium. I chose to make it a 360 film because I felt it was a medium underutilized for horror, and I wanted to change that. And naturally, the position of being stuck as a watcher led me to theme the film around sleep paralysis.

Here is my pitch bible for Delirium, which includes a “script” detailing the action, initial storyboards, and lots of inspiration.

Currently I am in the process of gathering all the assets (including some practical effects!) that will be used in the film.

Compositing Subsurface and Transmissive Shaders


For our second project, we studied the refraction of light through different gemstones. Everyone in the class had a different stone, from amethyst to selenite to tiger's eye, each one presenting a different challenge in replicating its look!

The gemstone I picked was a perfectly polished rose quartz, with some pretty ribbons of different shades of quartz inside of it.

As with the first project and any match-to-live task, we had to take pictures with our integration kit! Two of my friends and I got together to shoot some small still lifes to put our models in. We were allowed to take video for this project if we were familiar with tracking, but I chose not to so that I could focus on the aspects I didn't know how to do. I have plenty of experience with difficult tracks anyway!

Here are the images I used for this project, minus the HDR. Pictured is a single exposure.

Again, I'm not particularly thrilled with the lighting of this setup; it's not very dynamic. I really need to get more exploratory with my photography! I also tend to have a pretty light hand in everything from drawing to setting up lighting, so sometimes images I make come out a bit flat like this. It's something I acknowledge and am trying to work on!

Anyways, here’s my grey ball render in comparison to the real world grey ball.

So, setting up this lighting was a bit of a challenge for me. The HDR handled a decent portion of the color and light, but the windows behind and the string lights had a hard time coming through in the indirect lighting. So I tossed in two blue-toned area lights where the windows would be, plus a yellow-toned light that acted as the key. I also tinted the plane the ball sat on the same yellow as the table runner to get that yellow bounce light on the underside of the ball.
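If you're curious what those supplemental lights look like scripted, here's a minimal Maya sketch. The names, colors, and intensities are all illustrative; the real values were eyeballed against the plate.

```python
import maya.cmds as cmds

# Two cool-toned area lights standing in for the windows, plus a warm key.
# All names and values here are illustrative, not my actual scene settings.
for name, color, intensity in [
        ('windowFill1', (0.55, 0.65, 1.0), 1.5),
        ('windowFill2', (0.55, 0.65, 1.0), 1.5),
        ('warmKey',     (1.0, 0.85, 0.55), 3.0)]:
    light = cmds.shadingNode('areaLight', asLight=True, name=name)
    cmds.setAttr(light + '.color', *color, type='double3')
    cmds.setAttr(light + '.intensity', intensity)
# (Positioning is done by hand in the viewport, matched to the HDR.)
```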

The shadows in this image are pretty soft yet defined, and avoiding a long stripe across the ball from the spot light was difficult. I counterbalanced that by letting the indirect light make up for the softness of the light. I also kept the shadow of the ball relatively defined, like in the reference plate. Looking at it now, I see a bit of a double shadow, but the same phenomenon happens where the vases in the left corner cast shadows; the underside of that ruffle just amplifies the shadow stretching over it.

I also figured out the occlusion!!! I was really excited about that! I was essentially missing a step last time: rather than having a color corrected full shadow plate with an additional mask of correction in the occlusion zone, I was only multiplying my shadow by the mask of the occlusion, which just shrunk the shadow. But I get it now!! Two color corrections. Although this one might be a bit dark in the occlusion.
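In node terms, the two-correction idea sketches out something like this. It's schematic (paths and grade values are made up), but it's the shape of the setup:

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')            # paths hypothetical
shadow = nuke.nodes.Read(file='shadow_pass.####.exr')
occlusion = nuke.nodes.Read(file='occlusion_pass.####.exr')

# Correction one: grade the entire shadow plate to sit in the scene.
shadow_cc = nuke.nodes.Grade(inputs=[shadow])
shadow_cc['white'].setValue(0.8)

# Correction two: a second grade, masked by the occlusion pass, that only
# darkens the contact area -- instead of multiplying the whole shadow by
# the occlusion matte and shrinking it.
contact_cc = nuke.nodes.Grade(inputs=[shadow_cc, occlusion])
contact_cc['white'].setValue(0.6)

# Multiply the finished shadow over the plate.
comp = nuke.nodes.Merge2(operation='multiply', inputs=[plate, contact_cc])
```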

Here’s my node tree for that.

I also wanted to work on my node tree layout. I try to keep a pretty linear workflow, but I wasn't sure how to properly lay out my CG comps. After studying a few of Nuke's example files, I think this is a little closer. Still not perfect by any means, but much neater than my workflow before. I tend to follow my own internal logic of B PIPE DOWN, NO DIAGONAL LINES EVER a little too strictly sometimes. What can I say, I'm a neurotic organizer.

So, the shaders! Phew, we’re getting there, I promise. Long post, stick with me!

I tried a few different methods of balancing the subsurface and transmissive properties. I started with a mix shader that leaned heavily on subsurface, since rose quartz isn't particularly clear. To get the ribbons of white and pink quartz, I used a procedural noise/marble texture in Maya. I stretched and edited it to look similar to the lines in the stone, and connected a ramp to the color to get all the gradations. I'm pretty alright with the result!
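If you want the gist of that wiring, here's a rough Maya sketch of the idea: a procedural texture's alpha drives where you land on a color ramp. I'm using a generic noise node and made-up colors here rather than my actual marble setup.

```python
import maya.cmds as cmds

# A procedural noise whose alpha picks a position along a color ramp.
noise = cmds.shadingNode('noise', asTexture=True)
ramp = cmds.shadingNode('ramp', asTexture=True)
cmds.connectAttr(noise + '.outAlpha', ramp + '.vCoord')

# Rough rose-quartz gradations (colors are illustrative, not my real ones).
for i, (pos, rgb) in enumerate([(0.0, (0.93, 0.72, 0.76)),
                                (0.5, (0.98, 0.82, 0.85)),
                                (1.0, (1.00, 0.96, 0.96))]):
    cmds.setAttr('{}.colorEntryList[{}].position'.format(ramp, i), pos)
    cmds.setAttr('{}.colorEntryList[{}].color'.format(ramp, i),
                 *rgb, type='double3')

# The ramp feeds the shader's color.
shader = cmds.shadingNode('blinn', asShader=True)
cmds.connectAttr(ramp + '.outColor', shader + '.color')
```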

To get the clear shininess of the outer layer, I ended up rendering a separate transmission pass that looked like hazy pink glass. I then rendered a Fresnel mask by making a facing-ratio-based shader that faded to red as it approached the edges of the object. Here are some renders of the subsurface, transmission, and Fresnel shaders.
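The facing-ratio mask is a classic samplerInfo trick, and it's simple enough to sketch. Assuming a Maya setup like mine, the wiring is roughly:

```python
import maya.cmds as cmds

# facingRatio goes to 1 where the surface faces the camera and falls
# toward 0 at grazing angles -- exactly what a Fresnel-style mask needs.
info = cmds.shadingNode('samplerInfo', asUtility=True)
ramp = cmds.shadingNode('ramp', asTexture=True)
cmds.connectAttr(info + '.facingRatio', ramp + '.vCoord')

# Red at the edges (facingRatio near 0), black facing the camera.
cmds.setAttr(ramp + '.colorEntryList[0].position', 0.0)
cmds.setAttr(ramp + '.colorEntryList[0].color', 1, 0, 0, type='double3')
cmds.setAttr(ramp + '.colorEntryList[1].position', 1.0)
cmds.setAttr(ramp + '.colorEntryList[1].color', 0, 0, 0, type='double3')

# A flat surfaceShader renders the mask without any lighting influence.
mask = cmds.shadingNode('surfaceShader', asShader=True)
cmds.connectAttr(ramp + '.outColor', mask + '.outColor')
```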

I also rendered out the standard shadow and occlusion passes, but those aren’t too exciting.

Also, this model can be found at threedscans.com. Lots of people used the Stanford models, but I like being able to use photogrammetry, and I fell in love with these funky little lions! Aren't they cute?

For this project, I also decided to use AOVs rather than manually splitting render layers, and I gotta say, I enjoy that a lot better. I prefer having all my layers in one file to render all at once, and I didn't find it limiting as far as my control over what goes into the layers. But I know every artist prefers a different setup, and I'm happy to know both.

I rendered out direct, indirect, specular, specular indirect, subsurface, subsurface albedo, and point passes. I of course didn't render the subsurface passes on my transmission shader; that would be pointless.
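The nice part of the AOV approach is that the beauty rebuild in Nuke is just shuffles and plus merges. A rough sketch (the EXR path is hypothetical, and the layer names will match whatever your renderer writes):

```python
import nuke

aovs = nuke.nodes.Read(file='lion_aovs.####.exr')   # hypothetical path

# Shuffle each AOV layer out of the multichannel EXR...
layers = ['direct', 'indirect', 'specular', 'specular_indirect', 'subsurface']
passes = [nuke.nodes.Shuffle(inputs=[aovs], **{'in': layer})
          for layer in layers]

# ...then sum them back together with plus merges to rebuild the beauty.
beauty = passes[0]
for aov in passes[1:]:
    beauty = nuke.nodes.Merge2(operation='plus', inputs=[beauty, aov])
```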

Well, actually, I didn't end up rendering any AOVs on my transmission pass, purely by accident. I had removed the AOVs to render the shadow and Fresnel masks on my home machine while the heavier subsurface pass rendered on the render farm, but I forgot to turn them back on before I tossed the transmission shader on the farm. Luckily, it didn't make a difference, since all the specular data was in the subsurface pass and the transmission was such a small part of my final comp. Just so you're aware, that's why it's not in my final Nuke script.

lionTree.JPG

All that was left was a little bit of color correcting to accommodate the heavy yellow and blue tones in the scene, and a buildup! All done. I'm significantly happier with this composite than with my very first one, and that's really exciting! That means I'm learning! It was just so much fun to try and capture all the nuance of this setup, from lighting to shader to comp.

I’m excited to see what the critique is for this project so I can push it over into the realm of photoreal!

Integrating Light and Shadow Process


For our first project in technical direction for compositing, we lit, rendered, and composited some objects into a scene using a match-to-live setup we created. Now, I'll be the first to say my knowledge of lighting is a bit sparse: I know some basics from film classes and our very generalist CG education here, but a match to live seemed a bit daunting! I also had no idea how to set up render layers prior to this project, so that was another hurdle in my way at the start.

Here’s a short breakdown of my process, warts and all, while learning all of this! To keep this blog less cluttered, I’ll keep it all in this one post.

The first step here was the photography. My camera’s auto exposure bracketing wasn’t as advanced as I would have liked, but c’est la vie! I still got multiple exposures, just not as many as a typical HDR contains.

Here are the images I took.

I also took an HDR using a chrome ball at multiple exposures, but the final file is too big for Squarespace. Above is just one exposure of the 7 I took. Don't worry, I at least know that much!
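For anyone wanting to script the merge itself, OpenCV can assemble bracketed exposures into a radiance map. This is a hedged sketch; the file names and shutter times are made up to match my 7 brackets:

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures of the chrome ball, darkest to brightest.
files = ['ball_exp{}.jpg'.format(i) for i in range(7)]
images = [cv2.imread(f) for f in files]

# Shutter times in seconds for each bracket (illustrative values).
times = np.array([1.0 / 1000, 1.0 / 500, 1.0 / 250, 1.0 / 125,
                  1.0 / 60, 1.0 / 30, 1.0 / 15], dtype=np.float32)

# Recover the camera response curve, then merge into an HDR radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)
cv2.imwrite('ball_merged.hdr', hdr)
```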

The next step was to match the perspective and lighting. This took me a while, since I was pretty picky about making sure the grid matched. Unfortunately, I think my sensor crop (and therefore my lens data) was a little off, but I've noted it and I'm implementing that change in the next project.

Here’s a little slideshow of the basic steps I took to set up my scene.

I also projected my HDR as a surface shader to get the diffuse light of the room! After this, I had to tackle render layers.

For me, render layers were the 8th wonder of the world: some kind of wizardry that lighters get by casting spells on their shaders, which I then get to composite into a nice final beauty pass, never questioning anything. Fortunately, the mysticism is broken for me now! I understand it! Although it took a lot of studying example files, taking notes, and sitting hunched over Lynda videos like an ancient monk over his manuscripts.

Our first step in checking our render layers and lighting was to render out a grey ball to match the one we took pictures of. Here is a comparison of my render to the real grey ball, and also my render layer setup.

For this project, we weren't supposed to use any materials that were too complex, so I chose to put some pumpkins in my scene! For my big gourd babies, I ended up rendering about 7 layers, give or take. Some I didn't end up using in the final comp.

And here I'll note that I needed one more layer that I didn't actually render: a layer for ambient color! I rendered color out of the diffuse pass, which should have been reserved for diffuse lighting, not diffuse color. At least I think so. I'll experiment and get back to you.

Here are all my labeled render passes of my pumpkins!

I'll admit, by the time I had rendered a whole animation of these pumpkins with all my render layers, it got pretty dang close to the deadline. Therefore, I didn't get to spend much time in Nuke at all; I even pulled a dumb move and rendered my final as a .mov instead of EXRs, and got some terrible scan lines! Yikes!!
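(For the resubmit, the fix is just pointing the Write node at an EXR sequence instead of a movie. A minimal sketch, with a hypothetical path:)

```python
import nuke

# Write an EXR sequence instead of a .mov -- no more awful scan lines.
write = nuke.nodes.Write(file='renders/pumpkins_comp.####.exr')
write['file_type'].setValue('exr')
write['channels'].setValue('rgba')
```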

Overall, I know I can do a lot better, but for my first time working with this process, it's not terrible! The groundwork is there; it just needs that final polish to bring out all the work I put into it.

I’ll make a second post as I work on my resubmit for this project, but in the meantime, stay tuned for another breakdown of subsurface and reflectivity compositing!

The Journey, an Animated Film

Capture.JPG

Another project I spent a long time on this quarter was called The Journey, for which I did the matte paintings. Admittedly, it was not my strongest work, but I did learn a lot about the compositing-for-animation workflow and about integrating matte paintings with other footage.

For this, I worked in Vue to maintain the CG look of the film, but even that was almost too realistic; it took a lot of Photoshop processing to cut down on detail. I also worked with one other person on this, who ended up switching majors mid-project and leaving a lot of shots to me.

Here are a few stills from the project. Personally, I don't feel they require as much explanation, as they are far less technical and are more simple composites.

Also, here's the final film if you'd like to watch it. The password is "finallydone".

The Yellow Wallpaper Post-Production

preview.jpeg

This has, by far, been the most challenging task as a compositor that I have ever faced. Granted, I have only been a compositor for a year or so, but I have learned more in the past two and a half weeks than I have in months about Nuke, 3D tracking, rotoscoping, proper collaborative script organization, and cleanup work in general.

Here are a few of the highlights and challenging shots I personally worked on. One other junior compositor and I were able to complete 23+ shots in under 3 weeks. Some of these shots were ten frames, but each one brought its own unique challenges and problems to the table, and it was very rarely as simple as popping down a few trackers and quickly placing the wallpaper on the walls.

If you'll recall my last post on this film, we were unable to use a greenscreen to cover the bare walls. Bad, bad call. Dear god, bad call. The roto and luma keying were miserable and added days and days to the process. This particular shot was not awful to do, just time-consuming due to the 1,000+ frames it boasts. It took me two days of rotoscoping and a day of color matching and refining to truly finish.

Here is the basic setup. This shot has a relatively simple layout: a quick garbage matte to limit the edge-detail key, a core matte featuring articulated roto shapes for the man moving in and out of frame, and some color grading of the wallpaper itself to match the lighting. There is no tracking in this scene.
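Sketched in Python, the bones of that setup look roughly like this. Everything here is schematic (paths, node choices), not my actual script:

```python
import nuke

plate = nuke.nodes.Read(file='shot_plate.####.exr')   # paths hypothetical
wallpaper = nuke.nodes.Read(file='wallpaper_tile.exr')

# The luma key on the plate provides the edge-detail matte for the wall.
key = nuke.nodes.Keyer(inputs=[plate], operation='luminance key')

# Grade the wallpaper toward the plate's lighting, give it the keyed
# alpha, and premultiply it.
graded = nuke.nodes.Grade(inputs=[wallpaper])
matted = nuke.nodes.Copy(inputs=[graded, key],
                         from0='rgba.alpha', to0='rgba.alpha')
premult = nuke.nodes.Premult(inputs=[matted])

# Comp the wallpaper over the plate, limited by the quick garbage roto.
garbage = nuke.nodes.Roto()
walled = nuke.nodes.Merge2(inputs=[plate, premult, garbage], operation='over')

# The articulated core roto brings the man back over the wallpaper.
core = nuke.nodes.Roto()
final = nuke.nodes.Keymix(inputs=[walled, plate, core])
```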

Here is another scene, actually the first I tackled. Since I was able to get a 3D track, I set up a projection to place the wallpaper and have it follow the highly visible trackers with ease. This scene also features articulated roto shapes and intense color matching; the wallpaper is hardly visible by the end, but this is what it would look like in this lighting.

This scene in particular highlights the tracking problems we faced in many of these shots, particularly the very short ones. Because of the soft focus and Steadicam jerkiness, tracking markers became impossible to use. So, in this setup, I used two CameraTracker nodes: one that solves for the quick, blurry movement and one for the slow, languid movement shortly after. By switching between those solves and two versions of the projection, I was able to MacGyver together a wallpaper that stuck to the walls.
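A rough sketch of that MacGyvering, assuming the two solved cameras already exist in the script (the names and the cut frame are made up):

```python
import nuke

def projection_render(image, camera):
    """Project an image through a camera onto a card and rescan it.
    Schematic -- the real setup projects onto tracked geometry."""
    proj = nuke.nodes.Project3D(inputs=[image, camera])
    card = nuke.nodes.Card2(inputs=[proj])
    scene = nuke.nodes.Scene(inputs=[card])
    return nuke.nodes.ScanlineRender(inputs=[None, scene, camera])

# Two cameras solved from two separate CameraTracker nodes (assumed to
# already exist in the script under these hypothetical names).
cam_fast = nuke.toNode('Camera_fastSolve')
cam_slow = nuke.toNode('Camera_slowSolve')

wallpaper = nuke.nodes.Read(file='wallpaper_tile.exr')   # hypothetical path

# One projection per solve, and a Switch that cuts between them where the
# camera motion changes character (the frame number is illustrative).
fast = projection_render(wallpaper, cam_fast)
slow = projection_render(wallpaper, cam_slow)
switch = nuke.nodes.Switch(inputs=[fast, slow])
switch['which'].setExpression('frame < 1045 ? 0 : 1')
```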

I also made a concerted effort to keep my scripts clean and legible on this project, following a specific, personalized visual language in which A pipes are always easy to pick out and rotos are simple to find. Nodes are also labelled, particularly those used to create alphas for the actors.

Overall, I know there's more that can be done to further refine these shots, but I'm satisfied with how much I've learned and how hard I have worked. Yay!

Practicing Cleanup

One of the fundamentals of compositing that I've been missing, especially as a junior compositor, is the workflow for cleanup and rotopaint. We have a limited number of compositing classes here at SCAD, so I am currently working on two films (apart from The Yellow Wallpaper) that require cleanup work. Here I'll show a few shots and break down how I accomplished them.

This film, a short fan film, was shot in front of a library. Most of the cleanup work was making the library read as a museum instead: removing signs, etc. I also did a sky replacement. Here are a few comparisons and simple node trees.

This shot was simple: a short pan with plenty of good tracking points, and a simple paint-out.

For this shot, I pulled a luma key to maintain the fine details of the Spanish moss and telephone wires against the thankfully very light and monotone sky. For the building, due to its boxiness, I just did a quick roto. There's a slight camera move, so I tracked and matchmoved the roto so it would stick.

The Yellow Wallpaper Preproduction and Production

bumper.jpg

More adventures in VFX supervision! Currently, I am the supervisor for a senior thesis film titled The Yellow Wallpaper, directed by Amberly McMahon. When I was put onto this project, the directors, producers, and DP had never worked with VSFX, so they weren't entirely sure what needed to be done. If you're not familiar with the story of The Yellow Wallpaper: a woman forcibly placed in isolation begins to hallucinate a woman in the wallpaper. It's a wonderful short story with deep feminist tones, so I was excited to work on it! Here are the script and shot list I was given.

Click through to read them.

So, right off the bat, we have a number of challenges: adding a 2D element into a live action scene, and getting it to match the camera. As production began we had another large issue that needed to be addressed, but I'll touch on that later.

Firstly, I roughed out a shoot document before meeting with the director, DP, and production designer for the first time. Here are my initial rough thumbnails, as well as some meeting notes I took on process.

Pardon my chicken scratch; let me translate. Essentially, I jotted down a workflow in Nuke that involves tracking the wallpaper, making that plane the front of a 3D scene, and making each layer of the wallpaper its own card, much like in matte painting, to set up faux distance.

I talked this over in the first meeting with the crew, and they agreed. However, due to some budget restrictions, we had some problems: we did not have enough funds to add a roof or to print a whole room's worth of wallpaper. Seeing as set extension isn't terrible to do, I offered to spend some more time in post to add it in. It's not an ideal situation, but it's one we can definitely work with!

So, here are a few behind-the-scenes photos of my tracking setup, which I'll explain further in a moment.

So the lack of a roof worked out: although I haven't received a rough cut yet, I don't think the ceiling is in any of the shots. I chose simple circle trackers, just to add some high-contrast points and have a trackable shape. I decided to forgo typical complex trackers because I believed some of the more intense handheld camera movements would be better tracked with a simple circle outline to emphasize motion distortion.

Now you may be wondering: wow, Kat, what the hell, you absolutely should have made those walls blue, that's going to make for some long weeks of roto! And yeah, you're right, I probably should have. But in speaking with our grips and DP, blue spill would have made the delicate lighting setup impossible. So my plan is to do some rough cleanup of the tracking markers and pull luma keys for the shadows and fine detail. It's not the ideal situation, but it's definitely a workflow I believe will work.

And hey, it's baby's first big film! It's a learning experience, and now I know to consider this the next time I run VFX on a set.

Lastly, I needed a shadow morph at the end, so for that I did use a bluescreen (blue, not green, because this film is so yellow; look at it and tell me green was the right choice) to get a key of her shadow to comp back in later. With the original master lighting we couldn't get a shadow on the wall, so we set up a new placement of lights to get one. It's not intended to be a photoreal moment anyway, just an intense, dramatic one.

We also shot some lens distortion reference, and were lucky enough to shoot on one of the school's RED cameras, not the FS7 as originally planned. I don't have any images of that, but it was something I planned from the beginning.

So there! Currently, my team and I are waiting on the rough cut to be delivered so we can start working. Definitely a fun experience, and one that has taught me so much!

VFX Supervision

rcam2GS.PNG

Last quarter I took a class on the process of preproduction for postproduction. Our goal was to previsualize and document all the information needed to shoot a miniature set in addition to live action actors. I was fortunate enough to learn this from the supervisor for What Dreams May Come, which utilized miniatures in a stunning way. 

Here I'll upload my documentation of the process: everything from camera triangulation to a compositing guide. Click through for full resolution.

More Photobashing and Projection

In between learning Vue, I worked on this little photobash in my spare time! I love the legend of Baba Yaga, the Russian witch with a house on chicken legs who flies around in a cooking pot. So, after some value-mapped thumbnails, I just hammered this out! I looked a lot at Ansel Adams for inspiration on my value mapping, because his works are artfully burned and dodged into a gorgeous array of values. Here are a select few I looked at for inspiration.

I wanted to practice more with color grading in Nuke, because its workflow is far more intuitive and simple for me than Photoshop's, so once the basic shaping and some work on the house were done, I took it into Nuke. Here are some images from my photobashing process, including my ungraded image.

Once my typical projection was set up, I added some grain and chromatic aberration on top of it just for fun. I think the only thing this is really missing for me is some movement. I want to add some ambient breeze to the trees and movement to the water, but for time's sake I had to forgo this.

Here's my final node setup.

Vue to Nuke Workflow

crashTest02.jpg

One of the challenges of working in Vue is that the geo gets very, very heavy very quickly. My floating islands clocked in at about an hour for that single frame, not including post-processing, which poses a problem for any animation I'd want to render. Of course, in a production setting this wouldn't be an issue, but for the sake of time and playing with multiple programs, I came up with this.

By baking objects to polygons and assigning them to object render layers, Vue rendered out alpha channels for all of my in-camera geometry.

By selecting these alpha channels in Photoshop and placing each selection on its own layer, I got a basic Nuke card projection setup. Setting all my camera and card settings to the same aperture size and focal length definitely sped up this process.
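In Nuke terms, "same settings" just means copying the camera's focal length and aperture onto each card's lens-in knobs, so a card exactly fills frame and stays registered as you push it back in depth. A minimal sketch, with made-up numbers:

```python
import nuke

# Camera matched to the Vue render (focal length/aperture are illustrative).
cam = nuke.nodes.Camera2(focal=35, haperture=36)

# Each Photoshop layer becomes a card sharing the camera's lens settings.
layer = nuke.nodes.Read(file='island_midground.png')   # hypothetical layer
card = nuke.nodes.Card2(inputs=[layer])
card['lens_in_focal'].setValue(cam['focal'].value())
card['lens_in_haperture'].setValue(cam['haperture'].value())
card['z'].setValue(5)   # push this layer back in depth for parallax
```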

Once this was done, all I had to do was deform the cards to get a more realistic parallax, add some additional elements like smoke and scorch marks, and the work was ready to go! 

Overall, I definitely feel like I have a solid grasp on the ins and outs of Vue now, as well as a deep understanding of Nuke's card projection setup, which I have been using longer. There are many, many workflows, processes, and looks that matte painting can accomplish, and I feel like I have a good understanding of most of them for a beginner.

I'm definitely looking forward to continuing my studies and practice in matte painting!

Learning Vue!


Floating islands created in Vue. Click to enlarge.

Essentially, the last half of our matte painting course was left to us to research, learn, and refine as many methods of digital matte painting as we liked, so that we could have time to ask professors and peers for help and suggestions in class. Personally, I wanted to learn more about environment generation through software such as Terragen, Worldbuilder, and Vue.

This particular matte painting was done in Vue! Of the three, it felt the closest to the 3D packages I'm used to working in, and it had a lot of interesting functions to play with. It felt like playing Zoo Tycoon as a kid, placing random plants everywhere and making giant holes for no reason! Too fun; it hardly felt like a project at all!

But, regardless- this was my process. 

I had a bit of a learning curve with Vue, especially within the rendering system. It's not a renderer I'm familiar with, and it applies several odd exposure filters. I'm still not entirely happy with the overall exposure of this, but I did edit it post-render.

My first attempts, with a different concept entirely (an alien spaceship crash on a farm), turned out dark and one-dimensional. Over half this image is unreadable and the sky is overexposed, which is no good. Wasted pixels!!

farmCrash_2.jpg

I will be continuing this project in Vue, so expect a second post on this. I intend to develop a Vue to Nuke workflow for myself.

So, feeling frustrated and unsatisfied with this result, I did some digging, watched some tutorials, and made a new concept that did not rely on obj imports. Here is some of my scene setup.

So, essentially, the floating islands are two small terrains flipped and stacked on top of one another. I used the terrain editor to roughly get the shapes I wanted, then moved each around in the world. On top of each, I spawned pre-collected ecosystems of trees and placed native procedural textures on the rest of the surfaces. Here is the result before my post-processing.

Vue's Final Render of the Scene

However, I did render this image out as an HDR to keep the detail under the overblown lighting. Bit by bit, I adjusted the levels in Photoshop, cleaned up a few floating trees I'd missed, and overall darkened the image. Here's a comparison of the two images.

Overall, after a few weeks of frustrating experimentation, I think I definitely have enough of a handle on Vue to use it professionally should I be asked to. Despite its highs and lows, this particular matte painting was a ton of fun to do, and I look forward to using Vue again on other projects! If I could just get that renderer to produce images that don't need editing...

Photogrammetry

One component of my matte painting class included the implementation of photogrammetry, so we are learning its workflow and process! I'm really excited to develop this, because it's a fascinating field to me, and I adore learning about new processes in VFX. That's my favorite part of what I study, actually: there's always something new to learn. But here's a bit on my learning process. Thanks to my good bud Dierdre for the use of croconana.

IMG_3653.JPG

My first time around, I took some pretty bad, incorrectly focused photos, which resulted in the world's holiest mesh, and not in a good way. Half the poor thing was missing! I didn't even bother taking it from point cloud to mesh, because half my photos were rejected by the program. Unfortunately, I didn't save it, so let's move on. Just know I did not succeed at first.

Here's the mesh from the retaken photos with the right depth of field (at least f/8, people), which got a much better point cloud! And none of the 52 images were rejected by Photoscan.

Capture.PNG

After some cleanup, here's the final mesh and a textured preview before exporting, retopologizing, UV mapping, and texturing. Pretty nifty!!

After all that beautiful program roundtripping, here's a little turntable of croconana! What a handsome young man. Obviously he has some MAJOR holes underneath him, and some holes around the tag, but overall it's not a terrible first try, especially for such a light-colored object! I could have fixed those in Maya, ZBrush, or Topogun, but this is only a side project between classes and films. My next attempts will have a bit more time put into them.

croconana_2-iloveimg-compressed.gif

Render Reworking

A while back, I had a project that I just couldn't quite seem to get right. I had painted and edited a dozen or so textures that I was excited and proud to show off, being more of a 2D than a 3D effects artist. But I just couldn't get the lighting right, or the camera angle.

Part of this, I expect, was my inexperience with lighting and rendering, but part of my lack of inspiration definitely stemmed from a tumultuous era of mental health. (Side note: seek help. If you think you need it, if you're overworked, sad, feeling empty, and disinterested, please seek help. I did, and my work and work ethic have improved dramatically. You cannot make good art when you are sad, contrary to popular belief.)

So, I decided to revisit this project, which was based on some personal belongings of the main character in my favorite book series! (The Kingkiller Series. Fantastic novels.) 

In re-approaching this, I knew the lighting was the first thing that had to go. I had this beautiful red lantern just sitting there, and it could give some amazing atmospheric light! So I amped up that point light and tossed in a blue-toned fill light to counterbalance it and show off some more detail on the lute. I also threw in a volumetric and added depth of field for more visual interest. Here's my lighting setup.

Capture.JPG

Here are a few of my painted textures as well. Pardon the tearing on them; they are quite old at this point.

Here are also a few of my node breakdowns, showing all my maps. I hand-generated everything from specular to roughness to bump. Of course this isn't nearly all of them, but it's a general look at my setup.

So, you can see why I wanted to try and save this. I'll post an update when the render is done! Currently, it's clocking in at about a half hour per frame, but that includes some DoF, volumetrics, and high diffuse sampling.

Projected Matte Painting!

For this assignment, we moved our photobashes into Nuke! This was something I have wanted to do for a while, because we touched on it in my compositing classes but never went deeply into Nuke's projection system. We were meant to play around with 2.5D and parallax!

My first photobash for this project.

For my first attempt, I built a small mountain village based on a town from a Dungeons and Dragons campaign I run. Terribly geeky, I know. It was a decent photobash, but I could have done better. It was a good example to start with, though, in my opinion. Once I had that done and my adjustment layers were collapsed, I re-expanded the file in Nuke! Here's what it looked like.

nodes.PNG

Kind of hectic, I know. It scared me at first, too, but it's really not bad! Each of these was a layer in Photoshop, which is why I compressed all my adjustment layers. The next step was to take them into 3D space on their own individual cards. With some organization, TLC, and some guidance from my compositing professor, here is my final node tree. I'll explain what each of these means soon.

yeahboy.PNG

For example, my sky layer! Once it was premultiplied down to exactly what was in the layer, I placed it on a card, which in turn was placed a certain distance away from the camera in a scene. There are a few more steps here and there, like displacing certain cards with roto'd alphas to match the shape of the objects on their layer, or adding an axis, but that is the general setup for every card (I've sketched one layer's worth of it in code after the screenshot below). Here is the 3D scene!

projection.PNG
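Scripted out, one layer's worth of that setup is roughly the following. This is a schematic sketch (the path and distance are made up), not my actual script:

```python
import nuke

# One layer's card, using the sky as the example.
sky = nuke.nodes.Read(file='village_sky.png')   # hypothetical layer file
premult = nuke.nodes.Premult(inputs=[sky])

# The card carries the layer and gets pushed away from the camera; the
# farther back the card, the less parallax it shows.
card = nuke.nodes.Card2(inputs=[premult])
card['z'].setValue(50)   # illustrative distance

# Every card feeds one Scene, rendered through the projection camera.
cam = nuke.nodes.Camera2()
scene = nuke.nodes.Scene(inputs=[card])
render = nuke.nodes.ScanlineRender(inputs=[None, scene, cam])
```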

It's not the most accurate representation of scale, but for a first try at projection, it's not awful. You can also pretty noticeably see a misaligned card in the render, because my distortions put it off-center. Whoops. Well, again, a first try. I did another one of these.

After setting up all my cards, I used an iDistort to make the grass move a little in the wind! (That's a whole other process, but it essentially involves animating noise in the red and green channels.) But again, the misaligned cards make it slide like a bad track. If I redid this, that's the biggest thing I would fix.
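If you've never tried that noise trick, the gist sketches out like this (sizes and strengths are to taste, and the layer path is hypothetical):

```python
import nuke

grass = nuke.nodes.Read(file='village_grass.png')   # hypothetical layer

# Animated noise: its red and green channels become per-pixel x/y offsets.
noise = nuke.nodes.Noise()
noise['size'].setValue(60)
noise['zoffset'].setExpression('frame / 25')   # drift the noise over time

# Copy the noise into the grass stream as UV channels for the IDistort.
uv = nuke.nodes.Copy(inputs=[grass, noise],
                     from0='rgba.red', to0='forward.u',
                     from1='rgba.green', to1='forward.v')

distort = nuke.nodes.IDistort(inputs=[uv])
distort['uv'].setValue('forward')
distort['uv_scale'].setValue(3)   # strength of the wobble
```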

But hey, what's just one try?? I felt like I could do a lot better, so I did it all again. This time, my subject was an underwater city! And my adjustment layers were all done in Nuke, which I feel gave a much better result.

Unfortunately, some of my Photoshop filters didn't transfer quite right. I had a much more refined caustics layer with the correct perspective, but static caustics look weird, and I wanted the realism. So I downloaded a gizmo and tweaked it to the best of my ability. I followed the same process as before, and here are my final node tree and 3D scene!

nodes2.PNG
swimmin.PNG

A little more streamlined! And a little better lit, composed, etc. I'm a lot more proud of this matte painting than the other. Maybe I should add some fish!!

Summer to Winter


The point of this assignment was to study the changes in color, light, and natural elements between seasonal extremes. My original image is a composite of about ten different images, meant to complicate and add texture to a simplistic mountain scene. Here, you can see the differences between the original and my photobash.

Personally, I had never attempted a photobash at all before this, so it was a new adventure for me! I don't think it's bad for a first time! With some more foreground and background elements, a simple, elegant photo gained some leading lines and intrigue to guide the eye through the image. Just for fun, here are a few of the images I used!

The next step was to turn the scene into winter. After some desaturation and the addition of fog, I selected the red channel of a few foreground elements and overlaid a solid white texture to emulate a light layer of snow. This process, however, did not look quite right on the trees, as it took out all the beautiful green that makes them evergreens! Instead, I used blending modes, like in my day/night conversion, and only added white to the highlights of the trees. Here's a comparison of the different effects!

I personally think the right side gives a much frostier effect and preserves the deep olive tones of the tree better. Otherwise, the tree blends into the background and loses a lot of contrast.
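For the curious, the math behind the highlights-only version is simple enough to sketch outside Photoshop. Here's a hedged numpy version of the idea (not my actual layer stack):

```python
import numpy as np

def frost_highlights(image, strength=0.7):
    """Add 'snow' only where the image is already bright.

    image is a float RGB array in 0-1. This is a sketch of the
    blending-mode idea, not the actual Photoshop document.
    """
    # Stand-in highlights mask: the red channel, biased toward the
    # brightest areas (like selecting the red channel in Photoshop).
    mask = np.clip(image[..., 0], 0.0, 1.0) ** 2

    # Screen white over the image, but only through the highlights mask,
    # so the deep evergreen tones survive.
    screened = 1.0 - (1.0 - image) * (1.0 - strength)
    return image + (screened - image) * mask[..., None]
```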

So, overall, I'm pretty happy with the results of this project! I think I will be taking this project into Nuke and adding some parallax to the background and all the layers in this. 

winter.jpg
summer.jpg

Day to Night Edit

(click the image to swap between day and night)

With this project, I mainly wanted to work on completely erasing all sources of light and rebuilding them. After a simple sky replacement, where I added a few clouds, stars, light pollution, and some color/value adjustment, I started by tackling the neon signs, following this tutorial. It uses a method of color overlays and glow through blending options, and explains it far better than I could, but here is a sample of how I did it.

For instance, take this area of neon signs.

Capture.PNG

Here is what it looks like color corrected, with no overlays, glow, or ambient light.

Capture2.PNG

Now, here's how I achieved the effect. First, I selected the signs in the ungraded image, with their true, brightest colors. This already makes them begin to glow and stand out, as if lit from within, though at this point they're still only lit by the sun.

Capture3.PNG

Next, I added a glow to each sign, since light does not exist in a vacuum, and an additional color overlay to some signs whose colors were not as bright as I had hoped.

Capture4.PNG
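(As an aside, if you ever want that glow effect outside Photoshop, the core of it is just blur-and-screen. A rough numpy/scipy sketch of the idea:)

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_glow(image, sign_mask, tint, size=8.0, strength=0.6):
    """Blur a sign's matte and screen its color back over the image.

    A sketch of the blur-and-screen idea, not the Photoshop layer styles.
    image: float RGB in 0-1; sign_mask: float matte; tint: an RGB triple.
    """
    # Soften the matte so the light spills past the sign's edges.
    halo = gaussian_filter(sign_mask, sigma=size)

    # Screen the tinted halo over the base: light adds, never darkens.
    glow = np.clip(halo[..., None] * np.asarray(tint) * strength, 0.0, 1.0)
    return 1.0 - (1.0 - image) * (1.0 - glow)
```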

I also realized at this point that I had missed a sign. Whoops! But this still doesn't quite reflect the way light actually behaves in a night environment: it bounces, refracts, and reflects in a much less gradated way. So, following the aforementioned tutorial, I used the brush tool to mark out where light would fall, then simply used blending modes to extract it from the shadows of the underlying image. With some refining, here is what I ended up with.

Capture5.PNG
(An example of the blending options. This in particular was used on the middle red sign.)

And voila! From day to night in no time. Here's a comparison between the simple grading and the version with lights in it.

For the remainder of the image, I used this process on other signs, street lamps, cars, and a few phones. I believe I was most successful with the signs; the trails from the headlights don't seem quite accurate, but I'm unsure how else to get the desired effect. With more research, I may return to this. But overall, it was a good exercise, and I feel like I'm improving and learning! With some more practice and effort, my skills should be refined.

Sky Replacement


For this assignment, most of the challenges I faced were in matching the colors and tones of the skies to the images, helping them seem less retouched. I feel that I did this more successfully on the second image (the temple) than the first (the skyline). The strong tone of the city skyline made it difficult to convincingly alter to a sunset color palette. If I redid this, I would pick a more contrasted image in the first place to match the lighting conditions of sunset.

In the second image, I faced the challenge of reflections. After flipping, multiplying, and blurring the sky, I found a simple black and white ripple pattern that I overlaid onto the un-waved parts of the water, and multiplied it over the reflective layer. If i spent a bit more time refining this, I feel that it would have had a nicer effect, but upon a first viewing it helps maintain the illusion I desired.