The results of my practice with quadrupeds so far.
I had originally intended to make Noe ride the panther all the way through, but I realized it drew my eyes away from the quadruped, which kind of defeats the purpose behind the demo.
My 2017 demo, A Storm in a Teacup, is out!
Last year I put the focus on technical stuff: Bifrost water, nHair, rigging. Although it was a great learning experience, it fell short from a visual standpoint. This year I made it my mission to make it entertaining for someone who isn't in my head with these characters (aka everyone else). I decided to put the focus on animation!
I’ll be reviewing how I got to complete this demo.
When I think of something easily eye-catching, I immediately think of action sequences. However, if I'm going to spend months working on something, it has to be something I love. I want to convey a story. Ideally I want to build on the world I've been working on for a while now. In the early stages I titled that story "Procella", which is Latin for "stormy sea", because saying things in Latin makes you sound smarter, right? Yeah… I changed that to Fishpants. Short, sweet, rolls off the tongue, ladies love it… They will.
Which leads me to the tone I want to give my action sequence. There are two tones you can give a good action sequence: dramatic and comedic. As drama without build-up and care for the characters always falls flat (I don't have a counterexample, so I'll use the word always for now), I decided to go for the comedic approach.
Now. How do I make a comedic action sequence fit in my universe? For those who don't know much about Fishpants: what is left of humanity has been living in underwater cities for the last 20 years. You don't get to "play outside" much. So what is bound to get pretty universally popular? Video games. Except when you're making do to survive, I bet making video games isn't exactly a booming industry. Noe is… well, without going too much into it, let's say she's a pariah in need of a popularity boost. She's also one of those fantasy tech characters who can build anything given the means. So why not make one?
And there we go. Fighting game parody. This also has the added perk of letting me do over the top fighting, which let’s be honest: Is pretty damn entertaining.
Storyboarding is something I had foregone last year. Mistakes were made. It is such a crucial step. At first glance you might think it's a waste of time and/or restrictive. What it really does is save you a lot of time and let you focus on the important stuff. Never going to see that side of the room? Don't model it. The character is not in the shot? Don't animate it. You end up thinking something would be better done a different way? Change it. The storyboard isn't set in stone.
It also lets you control the length and pacing. You know where a shot ends up on the scale of the entire animation.
I hand-drew my storyboard. Paper might not be ecologically friendly, but it doesn’t run out of battery when you carry it along to work on it whenever you have a chance.
I’ll freely admit I spent waaaay more time on the characters than the environment. You have to choose your battles. My characters were essential, my environment wasn’t. Also having good geometry on my characters makes it so much easier to rig. It’s just a good time investment.
I did most of the modeling in Maya, fine-tuned some in Zbrush and redid the characters and clothes’ topology in Topogun.
I had intended to use lots of royalty-free assets from the web to clutter the rooms up without spending time I'd rather put elsewhere, but they cranked my polycounts too high and lengthened my renders too much for what I was aiming for. Sorry if that makes the room they're fighting in feel like an empty can instead of the junk dumpster I had first envisioned.
A notable difference from last year: I modeled the hair as geometry. I ended up not using the nHair animation I spent a lot of time on last time, so I decided not to go that route this year. Saves a lot of render time too.
Oh. And if anyone can tell what I used as reference for Noe's work-out outfit, color me impressed. Tell you what: I'll invite you to dinner. Just send me an email (you can find it in the About tab).
I used Mari again this year for the characters. It is an amazing piece of software that lets you paint seamless textures directly on your geometry once you get the hang of it. Although it is way too expensive if you're on your own, there is a free version you can use as long as you don't need too high a resolution.
It is worth noting that for best results you should do your UVs differently than you would for a game asset. Basically, it is less about optimizing your space and more about making the UVs as flat as possible while keeping the proportions they have on the actual geo.
If you do it right, you can apply a tileable skin texture as a base and then fix the seams on your model. I also use skin scans for painting pores and wrinkles. I didn't go as far as painting blood vessels and the nuances between the different subsurface scattering maps as I did last year, for several reasons. First and most important: I had to pump my renders to ridiculous levels to see the difference last year, which was okay for stills but utterly useless for the actual animation.
Actually, the best textures in the demo are Noe's eyes, as far as I'm concerned. Which is fine, as a stylized look is pretty much the best I can achieve with realistic render times.
Time to take the bull by the horns! How can I optimize the rig quality/time invested ratio? Let's remember that my focus this time is animation. I checked a couple of rigging scripts and came to the conclusion that using one to get the skeleton rig out of the way sped things up and, depending on the script, even raised the quality, as long as you don't depend on it for everything (a rig script is a tool, not a magic wand).
I ended up using the Advanced Skeleton script to build the character skeletons, and even tried the facial one for John (that was a mistake, but eh… worth trying I guess). I redid the weight painting manually (automated processes rarely get things right). Then I did Noe's facial rig and connected it to the skeleton. I used the Shapes plugin for sculpting the body deformations and Zbrush for the facial blend shapes.
I encountered a bug with the way I rigged my eyelids. I used Marco Giordano's technique (you can find his stuff here: https://vimeo.com/66583205). It's my favorite eyelid setup: it gives you incredible control while making nice blinks and follow-along eye direction. I'm saddened that I'll probably have to find something else next time. Basically, when you move the character around too much, it breaks the blink function. You can emulate it by grabbing the controllers and doing it manually, but that kind of defeats the purpose. I had found a fix for it last year, but it only lessened the issue. It's fine if you move the character a little bit, not if you have her do acrobatics. If anyone knows any good alternatives, I'm all ears. Preferably one that is not dependent on a plugin, as that can cause problems when working with a render farm.
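For anyone curious what these curve-based eyelid setups actually do under the hood, the blink is essentially a blend between an "open" and a "closed" lid curve, driven by a blink attribute. Here's a minimal pure-Python sketch of that blend; the function name and sample values are mine, not from the actual rig:

```python
def blink_blend(open_pts, closed_pts, blink):
    """Blend eyelid curve heights from open (blink=0) to closed (blink=1).

    This is the core idea behind curve-driven blink setups: every control
    point on the lid curve is linearly interpolated toward its closed pose.
    """
    return [o + (c - o) * blink for o, c in zip(open_pts, closed_pts)]

# Hypothetical heights for three points along the upper eyelid
upper_open = [1.0, 1.2, 1.0]
upper_closed = [0.0, 0.0, 0.0]
half_blink = blink_blend(upper_open, upper_closed, 0.5)  # lid halfway down
```

The bug described above presumably comes in when the closed pose is baked in a space that doesn't follow the character, so moving the whole rig makes the blend target drift.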
Besides John's face being too stiff for my taste and Noe's eyelid bug, the rigs came out pretty well. I'd probably add an FK/IK switch positioning script next time though. It didn't seem like a big deal at first, but it would definitely have saved time in the long run.
I took references from Kung Fu movies, video games (dah… or should I say DOA?), acrobats, a rifle shooting range, a bo staff instructor and of course yours truly. Taking video reference is the best excuse to fight invisible enemies as an adult.
As an experiment I started out trying to keep the animation rhythm on a 120 tempo (I’m working on 30 fps which makes it 15 frames per beat) to make it easier for music to go along with it (I already knew I’d have Martin work on that.), but ended up dismissing that rule once it got in the way of good pacing. As an example, in retrospect Noe’s first dash would have looked better if I hadn’t bothered with that. Again, it’s a learning experience. It’s always worth giving it a shot.
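The beat-to-frames conversion behind that rule is simple arithmetic; here's a quick sketch using the numbers from the post (120 BPM at 30 fps):

```python
def frames_per_beat(fps, bpm):
    # seconds per beat = 60 / bpm, so frames per beat = fps * 60 / bpm
    return fps * 60 / bpm

print(frames_per_beat(30, 120))  # 15.0 -> key poses land every 15 frames
```

At 24 fps the same tempo would give 12 frames per beat, which is part of why the frame rate choice matters when syncing to music.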
One of those experiments that turned out well is the slow-motion shots. If you're wondering: I slowed down my own movements to set the timing when shooting my video references; there was no post-processing involved.
Animation is just so much fun. I really had a blast.
In the fighting stage, most of the lighting came from area lights on the ceiling. You can see them in that one shot where Noe lands in the middle of them in the finale.
The other lights are small spotlights linked only to the characters' eyes. At some points it makes them too bright, but without them they were way too dark most of the time. I believe it has something to do with how I set up my geo and materials for the cornea and the eye: the shadows become overwhelming wherever no light hits them directly. It was cheaper render-wise to do it that way than to add bounces for the entire room.
In Noe's room, unlike in the teaser, I had a single light source, which I gave a cold tint to give the feeling they were lit by the screen they're playing on. It came out okay.
Martin was in charge of the music this year too. You can check out his stuff over here (https://www.youtube.com/channel/UC4pwSInr5q9frXk0jiAqFvw). He went for a style reminiscent of the 80s to mesh with the light-hearted tone of the animation. It's a fight, but no one is actually getting hurt, no matter what the voice acting might lead you to think.
Speaking of voice acting: Christine Slagman took on the role of Noe in this demo. Her voice acting is top notch. You can find her stuff here (https://www.youtube.com/channel/UCc_R2OPK8CUHOeiqm8I0x4g). On top of the dialogue, she sent me a plethora of combat grunts to work with.
I voiced John and the Belefrog. Because I can. And it makes me giggle inside.
Martin also took care of editing the voices in the dialogue so they mesh with one another and with the music.
All that was left was creating and using a library of sound effects for the hits, firearms and mechanical sounds. I recorded most of them myself by hitting a taekwondo training shield and a frying pan, dropping metallic objects, or falling to the ground. The only sounds I took from royalty-free sources on the web were the laser shot, the weapon transformation and the sick gooey stab sound.
Hello Darkness my old friend…
No, seriously. It wasn't too much of a hassle this time around. I had planned from the very beginning to make it work, and it showed. It also helps that Google Cloud Platform offers a $400 trial which applies to their Zync render farm. As you might have seen in my earlier posts, I compared Arnold CPU rendering with Redshift GPU rendering in the context of my PC setup. Redshift was definitely faster, but I ended up using Arnold, which I already have a license for. It's also integrated into Maya as of the 2017 version, which made it even easier.
I outdid myself again. And while that is a reward in and of itself, I'd like to also say I'm happy with how it turned out. I couldn't say the same last year. There's definitely room for improvement (isn't there always?), but I think it reaches the target I had set for myself for this demo, and I hope it will grab the attention of my peers and the public as I look for my place in the animation industry.
Since I found the facial expressions to be stiff this time, I’ll probably focus on that on my next project.
Here is a quick teaser for my 2017 demo.
It allowed me to check for any bugs that could occur from having both characters in the same scene, adjust material shaders, and gauge the rendering load on my PC.
I'm pretty happy with how it came out, so I'll be able to move on to the bigger scene I have in mind.
Having two characters interacting also makes it a lot more fun than when I had to pretend John was holding the camera…
Here is my animation test for my new Noe Rig. That makes it the third iteration for that character. I’m going for a more stylised approach for my next demo and I’m pretty proud of the results so far. I decided to go with the Arnold Renderer. Even though RedShift is quicker I already have a license for Arnold and I’m not ready to drop money on that shift (eh ;p).
The Belefrog enters the fray! I took the opportunity to compare two renderers: Arnold and Redshift. I'm using two of their respective custom area lights in combination with their own skin shaders. The textures are the same for both. I set their renders to an equivalent number of samples and the same motion blur.
Redshift is significantly faster (43%), but the geometry angles are much more obvious and I'd probably have to crank the motion blur to higher settings to get something comparable.
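For reference, a "43% faster" figure like this comes from comparing per-frame render times. Here's a quick sketch of the calculation; the example times are hypothetical, only the ratio matters:

```python
def percent_faster(t_slow, t_fast):
    # relative speed-up of the faster renderer over the slower one,
    # expressed as a percentage of the slower renderer's time
    return (t_slow - t_fast) / t_slow * 100

# hypothetical per-frame times in minutes (e.g. Arnold vs Redshift)
print(round(percent_faster(10.0, 5.7), 1))  # 43.0
```

Put another way: at that ratio the faster renderer finishes a frame in 57% of the slower one's time, which adds up fast over a thousand frames.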
Here is an animation test for my first character that will be featured in my 2017 demo. Johnathan Donothan, nicknamed John Doe. He finally got a promotion from holding the camera.
Let’s start by saying this demo was a step in the right direction. It’s different from my first demo because of the skills I had to learn to make it, but also because it is plausible within the story I’d like to bring to life with my 3D skills.
I’ll be going over the steps I took to make that demo. If it gets too technical for you or you are allergic to walls of text but still want to know where I’m going with all this, just jump to the conclusion. I’ll put it in bold for you.
I modeled my character's basic shape and proportions in Autodesk Maya, then transferred it to Zbrush to refine it. After taking my Facial Rigging course on CG Workshop with Wade Ryer (http://www.waderyer.com/), I used Topogun, a tool he was fond of, to redo the topology to be more animation-friendly. I first made her wetsuit using Marvelous Designer, thinking I would use the program to get cloth simulations in there, but I decided to keep that for another demo as it was complex, expensive and not really necessary for something that mostly sticks to the skin.
The Fishpants… There is a reason mark 5 is mentioned by her buddy John. I went through a lot of iterations of those. The very first model was organic-looking, much like the concept art that can be seen on my website. The thing is… looking good in a drawing doesn't mean it will work in 3D. I decided to take the advice of my teacher at the time and make it more mechanical. The second model made her look like a shrimp. I honestly don't remember much about the third and fourth attempts, something about adding propellers and waterjets, making the tail completely obsolete… The fifth one I liked. Making, placing and rigging all those tiny scale-like hexagons was driving me mad, but it would definitely have looked good… a year from now. I ended up keeping the hexagon motif for her wetsuit; at that point it was imprinted in my mind. I had to put it somewhere. Mark 6 was straightforward and simple. I was running late on my schedule (yes, I gave myself deadlines and took my candy away until I reached them. Sweets deprivation is a powerful motivator.) and I wanted something that would work. I am a little sad it didn't transform though. Mark 7 confirmed.
On to rigging. In combination with Wade's course, I also took a Body Rigging course with Nico Sanghrajka (http://www.n-imagine.com/). Both of these classes made my rigging skills take leaps ahead. Blindly following a tutorial to get what you need is one thing. Realizing you actually understand what you're doing and can work around issues is something else entirely. It felt good, filled me with pride. Mostly everything was done in Maya, except for the facial blend shapes, which were done in Zbrush. I have to say, however, that the sculpting tools in Maya are getting better and better. I do my blend shapes in Maya at work and it works fine as long as the poly resolution stays reasonable. I will have to be a better critic of what I actually need for my character next time… dem boob physics… Let's just say I had to drop some features from my final rig.
Texturing. To my slight shame, although I learned a lot from my Realistic Character Texture class with Justin Holt (http://justinmichaelholt.com/), not the least of which was MARI, it's the one that translated the least into actual results in my demo. It suffered from different factors. A good skin texture relies heavily on a good head sculpt, meaning good anatomical structures and displacement maps. From the get-go, my character was never meant to have that level of detail. I do like the skin color variation I got using the skills I learned. I had a temporary MARI license; as an actual license was out of my budget, I had to do all the textures within the time I had with it. Which was fine, as I hadn't allocated much time to the textures anyway. I decided at that point to go with the Arnold renderer for Maya for that demo. That was a bad decision. I'll get back to that.
Lighting. The action takes place in an interior pool, which I made the size of a swimming and diving competition pool. Yeah. If you're grinning right now, you're right to do so. I used Arnold area lights with quadratic decay. When you take the poison, you might as well finish the plate.
Animation. I work using videos I shoot of myself for the acting parts and references from the web for the rest. I definitely need more practice with the parts I have to make up. I don't exactly have fishpants at the ready, and I'll admit her swim isn't the most glorious. I could have gotten it right with enough time, even more so since I got Bianca Basso to give me some tips (she's Noe's voice, an animator and an all-around artist; you can check out her stuff here: http://www.biancabasso.com/). If you're reading this, that's me apologizing. Sadly, when I got to that part I was already past the deadline by which I was hoping to be finished with my demo. I say sadly because animation is to me the most entertaining and rewarding part of 3D. Not only is it the part most easily understood by those not in the industry, but it is simply fun to do. I felt like I had kept the dessert for last and ended up not being hungry when I got to it.
Hair. I like hair. It's probably the next online course I'll take, since I really want it to look good. This time I used Maya's XGen hair with Arnold's hair shader. It gets pretty good results if you crank the renders up high enough. As you can see in an earlier post, I even had the dynamics down. I had to let that go though. It was not working well in my scene with the actual animation: it mostly looked better not moving than jumping around wildly. Since I used dynamics in my first demo, I was aware of the time it would cost me to fix it so that it would work properly.
Water. Also known as Bifrost in this case. Bifrost is amazing. There are… just… a few things I should've known before getting to that point. First of all, Bifrost does not give a flower about your scene's scale. It assumes you changed the Maya scale from the default cm to m. Deal with it. I did. At that point I couldn't just scale down everything, so I scaled down the pool, made a rough proxy of my character and animated it approximately the same way as the full-size character. I made the simulation using that proxy as a collider. It was going to be glorious. Or so I thought. From my earlier comments, you might think I dislike the Arnold renderer. You would be wrong. It is an amazing renderer. You get feedback on your changes very quickly, even on complex renders. The problem is: Bifrost particles and the Arnold renderer are not compatible yet. I say yet because Arnold was bought by Autodesk recently and I'm hoping it will be better integrated in the future. Now, the solution was relatively simple. I converted the particles to a Bifrost mesh using the built-in feature and applied a shader to make it look like water. It ended up looking more like lake water than pool water, and it had black pockets where the mesh didn't quite fit the walls of the pool, but at that point I wasn't complaining. The real disappointment was the lackluster splash, or absence of splash, caused by the data lost in the conversion. I was looking forward to that. I basically made that entire scene to get some cool water effects going, and it didn't deliver. Nobody said CG water was easy. For a first try it wasn't so bad.
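To spell out the scale issue above: Maya's default working unit is centimeters, while Bifrost reads one scene unit as one meter, so a set modeled at real size in cm looks 100x too big to the solver. That's why the pool and a proxy character had to be scaled down. A tiny sketch of that factor (the helper name is mine):

```python
CM_PER_M = 100

def bifrost_scale(size_cm):
    # Shrink cm-unit Maya geometry by 100x so Bifrost (1 unit = 1 m)
    # sees real-world dimensions and simulates plausible water.
    return size_cm / CM_PER_M

# a 50 m competition pool modeled at 5000 cm in Maya
print(bifrost_scale(5000))  # 50.0 units, i.e. 50 m to the solver
```

Getting this wrong doesn't error out; the sim just behaves like water in a 5 km pool, with droplets and splashes far too small relative to the set.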
Rendering. We're there. The final boss. I'm still not sure I won that fight. I'm still alive and the renders are completed; that's all I can say for sure. Simply put, even with the extra PC I built specifically for rendering, it was too much. The room was too big, the Bifrost mesh was killing it, the hair and skin were a lost cause, and don't get me started on the eyebrows. I switched off as many lights as I could and lowered the resolution to what you've witnessed. The renders were still 30 minutes a frame, for around 1500 frames. I even tried a render farm with a free trial for a while, until I realized it would bleed me dry. If you're wondering: 15 minutes a frame, with a lot of frames rendering at the same time. Definitely an interesting option if you can afford it. Quantity wipes the floor with quality when it comes to rendering power. The farm's renders didn't look the same as on my PC, though, which led to unwanted variation in the middle of a shot. The worst thing about those lengthy renders isn't the low resolution. I even made a joke about it. It's that I couldn't afford to correct mistakes I only noticed in the renders. Like a foot not actually stepping all the way down. A couple of limb jerks that looked fine in playblasts. You know. The usual.
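To put those numbers in perspective, here's the back-of-the-envelope math on that render (figures from the post; the farm estimate assumes frames render fully in parallel):

```python
MIN_PER_FRAME_LOCAL = 30   # per-frame render time on my own machine
MIN_PER_FRAME_FARM = 15    # per-frame render time on the farm
FRAMES = 1500

local_hours = MIN_PER_FRAME_LOCAL * FRAMES / 60
print(local_hours)       # 750.0 hours of machine time
print(local_hours / 24)  # 31.25 days on a single machine

# On a farm rendering many frames at once, wall-clock time approaches
# the slowest single frame (~15 min), at a correspondingly steep price.
```

That month of machine time is why a noticed-too-late animation mistake, like the foot that never plants, becomes effectively uncorrectable.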
Sound. I recorded my voice for John and Bianca graciously sent me hers for Noe. I edited the conversation in Adobe Audition. I had originally intended to do the music myself, but as I was running out of time and Martin Saint was interested, I offered him the job. You can check him out here (https://www.youtube.com/channel/UC4pwSInr5q9frXk0jiAqFvw). If you're looking for guitar lessons in Montreal, his door is open. That took a weight off my shoulders.
Compositing. Adobe After Effects. Tweaked levels and curves. I made a pretty sweet intro title. Took me 20 minutes. Check out this tutorial (https://www.youtube.com/watch?v=Om00P4hlnhU). I think that expression I keep hearing applies now. I'm salty.
I'm writing this after finally sleeping without a couple of PCs running at full capacity 24/7 for weeks next to my bed. Puts things into perspective. When I looked at the result of my renders after months of work, I was frankly disappointed and frustrated. I was painfully aware of its flaws and couldn't see it for what it was: progress. Instead of comparing it to what's out there, I should have compared it to what I was doing only a year ago. Boy, do I trash my past self.
I skimmed over the technical stuff I learned doing this demo, but I also learned other valuable lessons. The bedroom is an awful place to put a rendering PC. Tea can only get you so far. If I ever get serious about bringing my story to life in 3D, I can't do it on my own. For the love of god, my next demo will be in a room so small, my characters will be cramped in it! Oh. This is going to be gold. Here's my inspiration back.
I’ll end by saying that this demo was a step in the right direction. It’s just a long way ahead.
Here’s my latest run on Maya Xgen Hair Dynamics. I solved a lot of the issues I had with the last iteration. Next time around I hope to succeed at removing even more of the clipping.
Here's a test of the facial rig I've been working on for the last few weeks. I'll be moving on to the full body rig shortly.
It's been ages, but I still work on stuff in my spare time. I'm at something like my fifth iteration of the fishpants. To change things up, I decided to remodel Noe so my new fishpants idea would be built on something I'm prouder of. Here's a test render. I need to make a version that's easier to work with, then start building on it. This one is pretty heavy.