Stumbling Toward 'Awesomeness'

A Technical Art Blog

Monday, March 5, 2018

uExport: A Simple Maya Tool for Exporting to UE4


I posted the skeleton of this a while ago in the post “Why does everyone write their own FBX exporter?”, and many people have asked me for the exporter over the years, so I sent it to them. Since that post, uExport has become the way we get all characters into UE4 at Epic. It’s been crazy to see this little fledgling tool, developed largely in spare time to allow us to export non-A.R.T.v1 rigs (like the kite and deer in the first demo I worked on at Epic), become something we rely on so much.

It’s here on GitHub, there’s a lot more info in the readme.

posted by Chris at 9:05 PM  

Tuesday, June 6, 2017

Crysis Technologies

In 2006, the team at Crytek was hard at work trying to come up with ways to ship Crysis; we had definitely bitten off more than we could chew. While cleaning out my HD I found some videos that are now a decade old, and they are interesting to see, that's for sure!

Facial Editor

We didn’t have a clue how we would animate all the lines of dialog that were required for the game. 3D Studio Max, which we used to animate with at the time, had nothing for animating faces; now, a decade later, Max and Maya still have zero offering to help with facial rigging or animation. So we decided to write our own. Stephen Bender (Animation Lead) and I worked with Timur Davidenko and Michael Smith (Programmers) on this tool. Marco Corbetta wrote the 2.5D head/facial tracker. Here’s a video:

 

The user would feed into the system a text file, an audio file, and a webcam video of themselves. It would generate the mouth phonemes from the text/audio, and upper 2/3rds of the face from the video. The system would generate this animation on the same interface the animators used to animate so it was easily editable. It shipped with the MS speech DLL, but you could swap that for Annosoft if you licensed it. Crysis shipped with all characters having 98 blendshapes, driven by Facial Editor curves/animation using non-linear expressions. Imagine shipping a game today without having animators touch a face in a DCC app!

SequencePane


PhotoBump

Many people know that Crytek released the first commercially available normal map generator, PolyBump, but rarely has anyone heard of its companion: PhotoBump. This was created by Marco Corbetta around the same time, but released only to CryEngine licensees in 2005. It was probably one of the first commercial photogrammetry apps, and definitely one of the first uses of photogrammetry in games. Much of the rocky terrain in Crysis was created with the help of PhotoBump! Marco also stamped/derived high-frequency details from the diffuse, which I hadn't seen others do until sometime after.

SIGGRAPH Best Realtime Graphics 2007

Here’s the SIGGRAPH ET reel from the year we released Crysis. I still can’t believe some of this stuff, like the guy pathfinding across the bridge of constrained boards and pieces of rope! I actually cut and edited this video myself back then, rendering it all out from the engine as well!

posted by Chris at 10:25 PM  

Monday, November 21, 2016

The Eyes

ct_eyes

Windows to the Soul

In CG, the eyes are unforgiving. For thousands of generations we have had to decipher the true intent of other human beings from their eyes; because of this, we have evolved to notice the most minute, 1mm shifts in eye shape. If there is one part of a digital character that is among the most difficult to simulate and render, it is the eyes. Let's talk about it.

Talking About the Eyes

anat

First, I am going to go over some eye terminology. There’s a lot out there, but I am only going to go over what is required to give meaningful feedback in dailies. :D  You don’t need to be an ophthalmologist to discuss why a character’s eyes don’t look right.

The Iris – This is the round, colored circle in the eye, it encompasses the pupil, but it is not the pupil.
The Pupil – The black dot, or the hole in the iris.
The Sclera – The ‘whites of your eyes’
The Cornea – This is the bulge over the iris

Vergence – The eyes converge, or turn inward, to aim at the object a person is gazing at. They diverge, or turn outward, when tracking something that is receding; if they diverge past parallel, you can say the person is 'walleyed' (see: strabismus).

KEY TAKEAWAY – When viewing and reviewing eyes, it’s important to notice how the lids break across the iris and the shape of the negative space the lids and the iris create in the sclera. When we look at someone we are mainly seeing this shape. For more advanced readers, I picked the image above because you can really see the characteristics of the wetness meniscus, and eyelash refraction, but we’ll talk about that later.

Eye Placement

Initial eye placement is very important. Eyes that are too large or placed improperly will not rotate accurately when set in the face. There's a lot you can learn just by looking at yourself in a mirror, looking at a friend, or scouring YouTube. There's even more you can learn from reading forensic facial reconstruction textbooks!

There are a few books for forensic artists, and most have information about placing eyes for facial re-construction. In my previous post about the jaw, I talked about forensic facial reconstruction a bit, these are my go-to books when it comes to eye (and teeth) placement:

books
Face It: A Visual Reference for Multi-ethnic Facial Modeling
Forensic Art and Illustration
Forensic Analysis of the Skull
Facial Geometry

I use a Mary Kay Travel Mirror and have bought them for all riggers/animators on my teams, at six bucks you can’t go wrong.

If you were to draw a line from the upper and lower orbits of the eye socket, it would be in line with the back of the cornea. I don’t usually like to point to skeletal reference, but in this case the orbits are relatively bony parts of the face that can be seen in surface anatomy.

placement01

Here are some good anatomical images for centering the eye in the socket (click to enlarge):

(anatomical reference images)

And in practice, here's eye placement on Marius, the hero character in the video game Ryse. Notice that the eyeball doesn't even cover the entire ocular opening when viewed from the front in wireframe, just as in the anatomical images above.

Marius, Ryse 2011, Crytek

Abdenour Bachir, Ryse 2011, Crytek (click to enlarge)

eye_blog

Lastly, here is a GIF I made from a video tutorial called ‘How to Paint the Human Eye‘ by Cat Reyto. It shows eye placement rather well.

So this has all been discussing eye placement in the socket, but what about eye socket placement in the face?  This is where proportions come into play. The books above, “A Visual Reference to Multi-Ethnic Facial Modeling” and “Facial Geometry” are very good at discussing facial proportions and showing you how those proportions can change based on ethnicity. Here is a page from “Facial Geometry” that shows the basic facial proportions, most importantly pupil and eye placement relative to the rest of the face:

face_prop

Here’s a page from the other book, which actually discusses 3D modeling. This is the chapter on eye placement, anyone interested in facial modeling should pick this up, it’s a great full color reference:

book_multiejpg

In my post about the jaw, I discussed jaw placement relative to the eyes and pupils, you can do the inverse: check the eye placement relative to teeth that you feel happy with. Notice that there’s a correlation between the molars or width of the upper teeth and the pupils.

face05face01

Sometimes art direction would like 'larger' eyes. This is sometimes attempted by making the eyeball larger, but then it can feel weird when the eye is too large for the socket, and it causes issues with the rotations of the eye. For reference, take a look at people who have been handed down a specific Neanderthal gene for large eyes (or who have Axenfeld-Rieger syndrome), like the Ukrainian model Masha Tyelna. She has large eyeballs, but also the facial physiology to accept them:

(photos of Masha Tyelna)

Eye Movement

eye_rot

Range of Movement

Making digital humans, often from fiction, I find that these numbers are all relative, but the book Three Dimensional Rotations of the Eye has some good information, as does this chapter of another book: Physiology of the Ocular Movements. From the default pose I usually find that each eye can rotate about 40 degrees in each direction (right and left) on the horizontal plane, and about 30 degrees in each direction (up and down) on the vertical plane.
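
Since those numbers come up in almost every eye review, here is a minimal Maya Python sketch of enforcing them as transform limits on the eye joints. The joint names, and the assumption that X is the vertical axis and Y the horizontal one, are placeholders; adjust to your rig's conventions.

from maya import cmds

# Assumed limits from the paragraph above: 40 degrees left/right, 30 degrees up/down.
H_LIMIT = 40.0  # horizontal (left/right), rotateY in this sketch
V_LIMIT = 30.0  # vertical (up/down), rotateX in this sketch

def limit_eye_rotation(eye_joint):
    """Clamp an eye joint's local rotation to a plausible physiological range."""
    cmds.transformLimits(eye_joint,
                         rx=(-V_LIMIT, V_LIMIT), erx=(True, True),
                         ry=(-H_LIMIT, H_LIMIT), ery=(True, True))

for jnt in ("eye_L_jnt", "eye_R_jnt"):  # hypothetical joint names
    if cmds.objExists(jnt):
        limit_eye_rotation(jnt)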

Eye ‘Accommodation’ and Convergence
For the purpose of rigging, the eyes are at maximum divergence (or parallel) at about 1.5m or 6 feet. I have not come across the eye gaze distance that results in maximum divergence, if you happen to have that information; let me know!

The eye also adjusts to focus on near objects; this is known as accommodation (the pupils actually constrict as part of this near response). A standard young person's eye can gaze/focus on objects from infinity down to about 6.5cm from the eyes. In action, eyes cannot converge/diverge faster than about 25 degrees per second.
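
To make the convergence numbers concrete, here is a small back-of-the-envelope Python helper that computes the per-eye convergence angle toward a fixation point straight ahead; the 64mm interpupillary distance is an assumed average, not a value from the text.

import math

def convergence_angle_deg(gaze_distance_m, ipd_m=0.064):
    """Per-eye toe-in (degrees) toward a point straight ahead at gaze_distance_m."""
    return math.degrees(math.atan2(ipd_m * 0.5, gaze_distance_m))

# at ~1.5m the eyes are only about a degree off parallel,
# while near the ~6.5cm near point they converge hard
for d in (0.065, 0.3, 1.5, 6.0):
    print("%.3fm -> %.2f degrees per eye" % (d, convergence_angle_deg(d)))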

“Why Does My Character Look Cross-Eyed”?

So, take a look at the MRI at the top of this page. Do you notice that the eyes are not looking forward, or parallel? That's because the eyes aren't parallel even when gazing at an infinite distance. Human eyes bow out a few degrees; the amount varies between 4 and 6 degrees per eye. The amount an eyeball is bowed outward is called the 'kappa angle'; check the diagram below:

getimage

Let’s go over some more terminology:

Pupillary Axis – A line drawn straight out the pupil.
Visual Axis – A line from the fovea to the fixation target or the item being gazed upon.
Angle Kappa – the angle between the pupillary and visual axis.

The Hirschberg Test

The way ophthalmologists determine whether eyes are converging properly is with the Hirschberg test. It's a simple test: they ask a child to look at a teddy bear while they shine a penlight into their eyes, then they look at how this light source reflects off each cornea, and from that they can tell if the child has a lazy eye or an issue converging their eyes on a point.

hberg

Doing a 'CG Hirschberg' test requires a renderer with accurate reflections. You place a point light directly behind the camera and have the rig fixate/gaze directly onto the camera.
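
Here is a rough Maya Python sketch of that setup, assuming the eye rig exposes a gaze/fixation locator (the 'gaze_target_loc' name is mine); a renderer with accurate reflections is still required to read the result.

from maya import cmds

# create a review camera with a point light sitting directly behind it, on its axis
cam = cmds.camera(name="hirschberg_cam")[0]
light_shape = cmds.pointLight(name="hirschberg_light")
light_xform = cmds.listRelatives(light_shape, parent=True)[0]
cmds.parent(light_xform, cam)
cmds.setAttr(light_xform + ".translateZ", 1.0)  # cameras look down -Z, so +Z is behind the lens

# have the rig fixate directly on the camera
cmds.pointConstraint(cam, "gaze_target_loc", maintainOffset=False)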

For more information on Angle Kappa, as well as data on the average angle across groups of people, check out Pablo Artal’s blog posts.

KEY TAKEAWAY

Even though the eyes bow out a little bit, for all intents and purposes, as you see in the Hirschberg test, the pupil still seems like it is looking at the person. So yes, the pupillary axis is off by a few degrees, but with the refraction it's not too noticeable (<1mm reflection offset from the center of the cornea). An Angle Kappa offset should be built into your rigs: you should not have the pupillary axis converging on the fixation target, as this is what causes CG characters to look cross-eyed when viewed with scrutiny.
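
Here is a minimal sketch (not the production setup from any of the projects above) of one way to build that offset into a simple look-at rig in Maya: the aim group's axis is the visual axis that converges on the target, and the eye underneath it is bowed outward by a constant kappa angle. The names and the 5-degree value are placeholder assumptions.

from maya import cmds

KAPPA_DEG = 5.0  # assumed; typical quoted values are roughly 4-6 degrees per eye

def setup_eye_aim(eye_joint, target, outward_sign):
    """Aim a parent group at the fixation target, then rotate the eye joint
    itself outward so the pupillary axis diverges from the visual axis."""
    pos = cmds.xform(eye_joint, query=True, worldSpace=True, translation=True)
    aim_grp = cmds.group(empty=True, name=eye_joint + "_aimGrp")
    cmds.xform(aim_grp, worldSpace=True, translation=pos)
    cmds.parent(eye_joint, aim_grp)
    # visual axis: the group looks straight at the target
    cmds.aimConstraint(target, aim_grp, aimVector=(0, 0, 1),
                       upVector=(0, 1, 0), worldUpType="scene")
    # pupillary axis: constant outward offset on the eye itself
    cmds.setAttr(eye_joint + ".rotateY", outward_sign * KAPPA_DEG)

setup_eye_aim("eye_L_jnt", "gaze_target_loc", outward_sign=1)
setup_eye_aim("eye_R_jnt", "gaze_target_loc", outward_sign=-1)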

“Why Don’t My Eyes Feel Real?”

untitled-8

Hanno Hagedorn, Crysis 2005, Crytek

A difficult issue with eyes in computer graphics is making them feel like they are really set in the face.

The eye has a very interesting soft transition where the sclera meets the lids.  Some of this is ambient occlusion and shadows from the brow and lids, and some of it is from reflection of the upper eyelashes.

In 2005, Hanno Hagedorn and I were working on Crysis, and we were having an issue with eyes that I called 'game eye': the sclera had a sharp contrast with the skin. At the time, there was no obvious way to solve this in realtime.

We (I still credit Hanno) solved this in a very pragmatic way; the following is from one of our slides at GDC 2007:

overlay1

We created an ‘eye overlay’, a thin film that sat on the eye and deformed with the fleshy eye deformation. A lot of games today, including Paragon, my current project at Epic, use this technique. Many years later, a famous VFX company actually tried to patent the technique.

Here are those same meshes, six years later on Ryse: Son of Rome:

eye_meshes

Wetness Meniscus

One thing added since Crysis is the tearline, or wetness meniscus. This is very important: its purpose is to kick up small spec highlights and fake the area where the lid meets the sclera. You can see this line in the photo of the female eye where I outlined the initial anatomical terms.

overlay_wetness_lashes

Eye Rendering

All of these eye parts are not easy to shade correctly. Nicolas Schulz describes how Crytek implemented a forward pass in their deferred renderer to deal with eyes in his 2014 paper The Rendering Technology of Ryse. Nicolas has two slides dedicated to eye shading:

eye_rendering

The eye shader used on Ryse is documented in depth here [docs.cryengine.com], the modified HLSL CFX shader code is here [github].   Some important features of the shader include:

  • Cornea refraction and scattering – this is important, it simulates the refraction of the liquid in the cornea
  • Iris color, depth, self shadowing, and SSS – Since there is no physical iris, we need to fake that there is a physical form there using a displacement map (POM)
  • Eye occlusion overlay depth bias – this is very important, it allows you to push/pull the depth of the overlay film, so that the eye doesn’t penetrate through it
  • Sclera SSS – not to be overlooked, or else the eye looks like a golfball.

eye_ball_textures

These were the images that fed into the shader features above, and below are the overlay spec and AO masks, and the spec texture for the wetness/tearline.

eye_occlusioneye_water_spec

This is an example of the iris displacement map ingame seen with a debug render view (Xbox One, 2011)

Abdenour Bachir, Ryse, Crytek 2011

Does the eye really need a corneal bulge?

On Ryse and Crysis, we had spherical eyes, and faked the corneal bulge with a shader and fleshy eye deformation. A corneal bulge means that you really have to have your shit together when it comes to the eye deformation. All those layers mentioned above have to deform with the bulge and it can be a lot to manage across all the deformation contexts of the eye (blink directionals, squint directionals, etc) On Paragon, we have non-spherical eyes, and it’s very challenging to deform the eyes properly in the context of a 60hz e-sports title with only one joint per lid. It’s actually pretty much impossible.

Altogether

Below, you see the eyes in Ryse: Son of Rome. They feel set in the face, and consistent with the world and hyper-real style of the game. (in-game mesh and rig)

Abdenour Bachir, Ryse 2012, Crytek

Abdenour Bachir, Ryse 2012, Crytek

Scanning/Acquiring Eyes

capture

When scanning a character’s head it’s important to get their eyes fixed at a gaze distance you have recorded. I also use the scan to see how the iris breaks across the lid and infer information about how the eye will be set in the face and skull.

  • If you want to take your own eye texture reference, shoot through a ring or attach a light co-axial with the camera lens, and try to get the highlight in the pupil, as you will discard this part of the image anyway.
  • Because the eye is shiny and its characteristics change as you move around it, photogrammetry often falls flat.
  • Because the eye is a complex translucent lens, cutting spec with cross-polarization often also falls flat: light loses its polarization when bouncing around in there.

When it comes to scanning the eyes themselves, Disney Research has published an interesting paper on the High Quality Capture of Eyes.

How You Review Eyes

First off you should check eyeball depth and placement, as shown above.

  • You should be reviewing eyes with at least the FOV of a portrait lens: 80mm, or about a 25 degree FOV. When getting 'all up in there' I often use a 10 degree FOV.
  • Try to review them at the distance you will see them in your shipping product.
  • If you are embodying the person that is the fixation point of the digital human, you really need runtime look IK (for the eyes, but preferably feathered torso>head>neck>eyes). From some distance, you can tell if someone is looking at your eyes or your ear. Think about that. The slightest anim compression or issue in any joint from root to eyes can cause the gaze to be off by a few degrees, and that's all it takes.
  • If you're doing a lot of this work, build a debug view into your software that draws the pupillary axis and visual axis all the way to the fixation/gaze point (a minimal sketch follows below).
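
For that last point, here is a quick-and-dirty Maya Python sketch that draws both axes as linear curves so you can eyeball the divergence in the viewport. The joint and target names are placeholder assumptions.

from maya import cmds

def draw_eye_axes(eye_joint, fixation_target, length=100.0):
    eye_pos = cmds.xform(eye_joint, query=True, worldSpace=True, translation=True)
    tgt_pos = cmds.xform(fixation_target, query=True, worldSpace=True, translation=True)

    # pupillary axis: push a probe down the eye joint's local +Z and read it in world space
    probe = cmds.spaceLocator(name=eye_joint + "_axisProbe")[0]
    cmds.parent(probe, eye_joint, relative=True)
    cmds.setAttr(probe + ".translateZ", length)
    pupil_end = cmds.xform(probe, query=True, worldSpace=True, translation=True)
    cmds.delete(probe)

    cmds.curve(degree=1, point=[eye_pos, pupil_end], name=eye_joint + "_pupillaryAxis")
    # visual axis: straight line from the eye pivot to the fixation point
    cmds.curve(degree=1, point=[eye_pos, tgt_pos], name=eye_joint + "_visualAxis")

for jnt in ("eye_L_jnt", "eye_R_jnt"):  # hypothetical names
    if cmds.objExists(jnt):
        draw_eye_axes(jnt, "gaze_target_loc")
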
posted by Chris at 2:01 AM  

Friday, September 2, 2016

SIGGRAPH Realtime Live Demo Stream

The stream of our SIGGRAPH Realtime Live demo is up on teh internets. If you haven’t seen the actual live demo, check it out!

It feels amazing to win the award for best Realtime Graphics amongst such industry giants. There are so many companies from so many industries participating now, and the event has grown so much. It feels really humbling to be honored with this for a third year; no pressure!

posted by Chris at 9:26 AM  

Thursday, October 23, 2014

Destiny Rigging/Animation Slides and Videos Up

destiny

The Destiny talks at SIGGRAPH were really interesting, the material just went up and you should definitely check it out:

http://advances.realtimerendering.com/destiny/siggraph2014/animation/index.html

posted by Chris at 11:37 AM  

Tuesday, August 26, 2014

Multi-Resolution Facial Rigging

At SIGGRAPH we discussed a bit about our facial pipeline that we haven’t talked about before. Namely, facial LODs and multi-platform facial rigging.

I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.

run_on_brian

Lowest Common Denominator

As the 'next generation' seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next-generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline that does this. Most next-gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.

When you wonder why your awesome next gen game doesn’t seem to have character models and animation like next-gen only titles, this is why.

It’s very easy to increase texture resolution by having a pipeline where you author high and bake to lower targets.  It’s more complicated to author meshes high and publish to lower targets, we did this on Crysis 1 and 2, High end PC saw higher mesh resolution characters than Xbox 360. I would say it’s the hardest to make rigs, deformers, and animations for a high spec hardware target and create a process to publish lower fidelity versions. No one wants to have different character skeletons on each hardware platform.

facial_complexity

You Deserve an Explanation

When we released the specs of our faces, people understandably were a bit taken aback. Why on earth would you need 250 blendshapes if you have 260 joints? The image above is actually a slide from our asset creation SIGGRAPH course that resonated very well with the audience.

Let’s take a look at some goals:

  1. Cut-scene fidelity in gameplay at any time- no cut-scene rigs
  2. Up to 70 characters on screen
  3. Able to run on multiple hardware specs

The only way to achieve the first two is through a very aggressive and granular level of detail (LOD) scheme. Once having that LOD system in place, the third item will come for free, as it did on our previous titles. However, as we have LODed meshes and materials, we had never LODed rigs.

On a feature film, we wouldn’t use joints, we would have a largely blendshape-only face.

But this doesn’t LOD well, we need to be able to strip out facial complexity in the distance and on other platforms.

Facial Level of Detail

So to achieve these goals, we must aggressively LOD our character faces.

Let’s generate some new goals:

  • Improve LOD system to allow the swapping or culling of skinned meshes per-mesh, each at hand-tailored distances per-character instance
  • Not only swap meshes, but skinning algorithms, materials, cull blendshapes, etc..
  • One skeleton – all levels of detail stored in one nested hierarchy, disable/reveal joints at different LOD levels, as I mention above, no one wants multiple skeletons
  • One animation set – drives all layers of detail because it’s one hierarchy, only the enabled joints receive animation
  • All facial animations shareable between characters
  • Faces snapped onto bodies at runtime – “Cry parent constraint” of sorts snaps head, neck, spine4, clavs, and upper arms of facial rig to body, allowing dynamic LODing of face irrespective of body.
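
A minimal Maya Python sketch of the last bullet's snapping idea (names and prefixes here are mine, not the Ryse convention): the shared joints of the facial rig are simply constrained to their counterparts on the body skeleton.

from maya import cmds

SHARED_JOINTS = ["head", "neck", "spine4", "clav_L", "clav_R", "upperarm_L", "upperarm_R"]

def snap_face_to_body(face_prefix="face_", body_prefix="body_"):
    """Constrain the facial rig's shared joints to the matching body joints."""
    for name in SHARED_JOINTS:
        driver, driven = body_prefix + name, face_prefix + name
        if cmds.objExists(driver) and cmds.objExists(driven):
            cmds.parentConstraint(driver, driven, maintainOffset=False)

snap_face_to_body()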

LOD_hierarchy

One Hierarchy to Rule them All

Before going into the meshes, skinning algorithms, culling, etc.. it’s important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints, 9 LOD0, 3 LOD1, and 1 LOD2.

To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following:

  • Make sure you have three layers that can drive different facial LODs, we had something like 260/70/15 on Ryse.
  • Each layer must be driven, and able to deform that LOD alone. This means that when creating rig logic, you must start from the highest LOD and move down the chain. The LOD0 joints above would only be responsible for skinning the details of the face at LOD0; their gross movement comes from their parent layer, LOD1. (A small sketch of tagging the hierarchy this way follows below.)
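
Here is a small sketch of how that single nested hierarchy could be tagged and queried in Maya Python. The 'facialLOD' attribute name and the idea of storing the layer index on each joint are assumptions for illustration, not the Ryse implementation.

from maya import cmds

def tag_facial_lod(joints, layer):
    """Stamp joints with the LOD layer they belong to (0 = finest detail layer)."""
    for jnt in joints:
        if not cmds.attributeQuery("facialLOD", node=jnt, exists=True):
            cmds.addAttr(jnt, longName="facialLOD", attributeType="short", defaultValue=0)
        cmds.setAttr(jnt + ".facialLOD", layer)

def joints_for_lod(face_root, current_lod):
    """Return the joints that stay enabled at the given LOD: coarser layers always survive."""
    joints = cmds.listRelatives(face_root, allDescendents=True, type="joint") or []
    return [j for j in joints
            if cmds.attributeQuery("facialLOD", node=j, exists=True)
            and cmds.getAttr(j + ".facialLOD") >= current_lod]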

Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.

jaw_drop_skel

 

Why blendshapes? Isn’t 260 joints enough?

The facial hierarchy and rig is consistent between all characters. The rig logic that drives those joints is changed and tweaked, the skinning is tweaked, but no two faces are identical. the blendshapes serve two main purposes:

1) Get the joint rig back on scan. Whatever the delta from the joint rig to the scan data that is associated with that solved pose from the headcam data, bridge that. This means fat around Nero’s neck, bags under his eyes, his eyebrow region, etc.

2) Add volume where it’s lost due to joint skinning. Areas like the lips, the cheeks, rig functions like lips together, sticky lips, etc, require blendshapes.

nero_corectives

Look at the image above: there just aren't enough joints in the brow to catch that micro-expression on Nero's face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.

A Look Under the Hood: Ryse Facial LODing

Thanks to the hard work of graphics engineer Jerome Charles we were able to granularly LOD our faces. The values below are from your buddy Vitallion, who, being a hero character, could be a bit less aggressive; many of the barbarians you fight en masse blew through all their blendshapes at 2m, not 4m.

Assets / Technologies (LOD) and distance:

  • 0-4m: CPU skinning, 8 influences, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes
  • 4-7m: CPU skinning, 8 influences, 260 joints, 3-5k tris across multiple meshes with small face parts culled
  • 7-10m: GPU skinning, 4 influences, 70 joints, 2k mesh with integrated eyes
  • 10m+: GPU skinning, 4 influences, <10 joints, <1k mesh

 

Here’s a different table showing the face mesh parts that we culled and when:

Distance and face parts culled or swapped:

  • 4m: Eyebrow meshes replaced, baked into facial texture
  • 3m: Eyelash geometry culled
  • 3m: Eye AO 'overlay' layer culled
  • 4m: Eyeballs removed, replaced with baked-in eyes in head mesh
  • 2m: Eye 'water' meniscus culled
  • 3m: Eye tearduct culled
  • 3m: Teeth swapped for built-in mesh
  • 3m: Tongue swapped for built-in mesh

Why isn’t this standard?

Because it's very difficult and very complicated, there aren't many people out there who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after 4 months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!

But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon..  ¬.¬

I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here’s some good moments where the performances really come through, these are all the in-game meshes and rigs:

DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.

posted by Chris at 4:40 AM  

Tuesday, October 4, 2011

Question: Rigging with MetaData?

As many of you know, I feel the whole ‘autorigging’ schtick is a bit overrated. Though Bungie gave a great talk at GDC09 (Modular Procedural Rigging), Dice was to give one this year at SIGGRAPH (Modular Rigging in Battlefield 3), but never showed up for the talk.

At Crytek we are switching our animation department from 3dsMax to Maya. This forces us to build a pipeline there from scratch; in 3dsMax we had 7 years of script development focused on animation and rigging tools. So I am looking at quite a bit of Maya work. The past two weeks I have been focusing on a 'rigging system' that I guess could be thought of as 'procedural', but is not really an 'autorigger'. My past experience was always regenerating rigs with MEL commands.

Things I would like to solve:

  • Use one set of animator tools for many rigs – common interfaces, rig block encapsulation (oh god i said ‘block’)
  • Abstract things away, thinking of rigging ‘units’ and character ‘parts’ instead of individual rig elements, break reliance on naming, version out different parts
  • Be fluid enough to regenerate the ‘rigging’ at any time

First Weekend: Skeleton ‘Tagging’

I created a wrapper around the common rigging tools that I used, this way, when I rigged, it would automagically markup the skeleton/elements as I went. This looked like so:

The foundation of this was marking up the skeleton or cons with message nodes that pointed to things or held metadata. This was cool, and I still like how simple it was; however, it didn't really create the layer of abstraction I was after. There wasn't the idea of a limb that I could tell to switch from FK to IK.
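
For anyone curious, here is a stripped-down sketch of that kind of message-node markup in Maya Python. The node and attribute names ('rigPart', 'members') are mine, and this is only the tagging/lookup half, not the rig building.

from maya import cmds

def create_rig_part(part_name, member_nodes):
    """Create a network node representing a rig 'part' and wire its members in via message attrs."""
    part = cmds.createNode("network", name=part_name + "_rigPart")
    cmds.addAttr(part, longName="members", attributeType="message", multi=True)
    for i, node in enumerate(member_nodes):
        cmds.connectAttr(node + ".message", "%s.members[%d]" % (part, i))
    return part

def get_rig_part_members(part):
    """Return whatever nodes were tagged as belonging to this part, with no name matching involved."""
    return cmds.listConnections(part + ".members") or []

# usage: tag a leg's joints so tools can find them without relying on naming
leg_part = create_rig_part("leg_L", ["thigh_L_jnt", "knee_L_jnt", "ankle_L_jnt"])
print(get_rig_part_members(leg_part))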

Second Weekend: Custom Nodes

That Bungie talk got a lot of us excited, and Roman went and created a really cool custom node plugin that does way more than we spec'd it out to do. I rewrote the rigging tools to create 'rigPart' nodes, which could be something like an IK chain, a set of twist joints, an expression, or a constraint. Together these could form a 'charPart' like an arm or leg. All these nodes sat under a main 'character' node. I realize that many companies abstract their characters into 'blocks' or 'parts', but I had never seen a system that had another layer underneath that. Roman also whipped up a way that when an attr on a custom node changes, you could evaluate a script. So whether it's a human arm or an alien tentacle arm, the 'charPart' node can have one FK/IK enum. I am still not sure if this is a better idea, because of the sheer legwork involved.

Third Weekend: A Mix of Both?

So a class like ‘charParts.gruntLeg()’ not only knew how to build the leg rigParts, but also only the leg ‘rigging’ if needed. This works pretty well, but the above was pretty hard to read. I took some of my favorite things about the tree-view-based system and I created a ‘character’ outliner of sorts. This made it much easier to visualize the rigParts that made up individual ‘systems’ of the character, like leg, spine, arm, etc. I did it as a test, but in a way that I easily swap it out with the treeWidget in the rigging tools dialog.

So how do you guys solve some of these issues?

posted by Chris at 4:00 AM  

Wednesday, March 4, 2009

Common Character Issues: Attachments

I love this picture. It illustrates a few large problems with video games. One of which I have wanted to talk about for a while: Attachments of course. I am talking about the sword (yes, there is a sword, follow her arm to the left side of the image..)

Attaching props to a character that has to dynamically be seen from every angle through thousands of animations can be difficult. So difficult that people often give up. This was a promotional image for an upcoming Soul Calibur title, this goes to show you how difficult the issue is. Or maybe no one even noticed she was holding a sword. So let’s look at a promotional image from another game:

Why does it happen?

Well, props are often interchangeable. Many different props are supposed to attach to the same location throughout the game. This is generally done by marking up the prop and the skeleton with two attachment points that snap to one another.

In this case you often have one guy modeling the prop, one guy placing the skeleton, and one guy creating the animation. All these people have to work together.
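
As a rough illustration of the markup-and-snap idea described above (in Maya Python, with hypothetical names, and assuming the prop is grouped with its grip marker at the group's origin), the snap itself is trivial once both sides are marked up:

from maya import cmds

def snap_prop_to_character(prop_root, skeleton_marker):
    """Snap the prop onto the skeleton's attachment marker and keep it following it."""
    target = cmds.xform(skeleton_marker, query=True, worldSpace=True, matrix=True)
    cmds.xform(prop_root, worldSpace=True, matrix=target)
    cmds.parentConstraint(skeleton_marker, prop_root, maintainOffset=True)

snap_prop_to_character("sword_grp", "weapon_R_attach_loc")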

How can we avoid these problems?

This problem is most noticeable at the end of the line: you would really only see it in the game. But this is one of the few times you will hear me say that checking it ‘in the engine’ is a bad idea. It’s hard enough to get animators to check their animation, much less test all the props in a ‘prop test level’ of sorts.

I feel problems like this mainly show up in magazines and final games because you are leaving it up to QA and other people who don’t know what to look for. There was a saying I developed while at Crytek when trying to impart some things on new tech art hires: “Does QA know what your alien should deform like? And should they?” The answer is no, and it also goes for the things above. Who knows how robotnik grips his bow.. you do, the guy rigging the character.

So in this case I am all for systems that allow animators to instantly load any common weapons and props from the game directly onto the character in the DCC app. You need a system that allows animators to be able to attach any commonly used prop at any time during any animation (especially movement anims)

Order of operations

Generally I would say:

  1. The animator picks a pivot point on the character. They will be animating/pivoting around this.
  2. The tech artist ‘marks’ up the skeleton with the appropriate offset transform.
  3. The modeler ‘marks’ his prop and tests it (iteratively) on one character
  4. The tech artist adds the marked up prop (or low res version) to a special file that is streamlined for automagically merging in items. Then adds a UI element that allows the animator to select the prop from a drop down and see it imported and attached to the character.

Complications

I can remember many heated discussions about problems like this. The more people there are who really care about the final product, and the more detailed or realistic games and characters get, the more things like this will be scrutinized.

This is more of a simple problem that just takes care and diligence, whereas things like multiple hand positions and hand poses are a little more difficult. Or attachments that attach via a physics constraint in the engine. There are also other, much more difficult issues in this realm, like exact positioning of AI characters for interacting with each other and the environment, which is another tough ‘snap me into the right place’ problem dealing with marking up a character and an item in the world to interact with.

posted by Chris at 11:25 AM  

Monday, March 2, 2009

Make a 3D Game for the Right Reasons! (My SF4 post)

I ran out and got Street Fighter 4 (SF4) just like everyone else. Street Fighter was ‘the game’ you had to beat all the kids in the neighborhood at for an entire generation (sadly replaced by PES), and I have very fond memories of playing it.

SF4 is the first 3D game in the series created by Capcom, in the past, Street Fighter EX was developed by Arika, a company formed by one of the creators of the original game as well as many other Capcom employees. Even though porting the franchise to 3D was largely considered a complete and utter failure, they decided to give it another go, this time ‘officially’.

Strengths and Weaknesses

As artistic mediums, 2D and 3D are very different. 3D art is perspective correct; it is clean, sterile and perfect. It is much simpler to do rotations and transformations of rigid objects in 3D, which is why Disney started doing vehicles as cel-shaded 3D objects in their later films. However, it is very difficult to add character to 3D geometry. As an example, think of Cruella de Vil's car from 101 Dalmatians: it has character (when it's not overly rotoscoped from a real-life model).

2D lends itself to organic shapes, which can be gestural, and are ‘rendered’ by a human being; so they’re never perfect. 3D is great for vehicles and space ships, anything that generally lacks character. 3D is also the only way you are going to get a photo-real gaming experience. For instance; when we were making Crysis, we knew this was the only way, there was never a question of which medium to use.

When I go on my ‘2D/3D rant’, I usually hearken back to something I love, so let’s take a look at the transition of an older game from 2D to 3D: the Monkey Island Series.

Many years ago developers felt that in order to compete, they had to ship games with the latest 3D technology. This is really unfortunate, and it leads to them sometimes choosing to develop an 'ok' 3D game over a 'beautiful' 2D game. I believe that in Curse of Monkey Island (the last 2D title in the series, so far), there was an option in the options menu to "Enable 3D acceleration"; upon clicking it, the words "We were only kidding" and other phrases popped up next to the radio button. The developers were already feeling the pressure to release a 3D game.

2D games are still profitable, just look at Xbox Live, where 2D games like Castle Crashers have been some of the top selling titles this year.

Lastly, let's not forget that 3D games are actually cheaper, or have been, historically. However, maybe not with some current-gen titles, where garbage cans have 4k texture maps and take two weeks to sculpt in ZBrush. But animation is definitely easier than it ever was. Of course, the other side of that argument is that you can now have 6k animations in a game.

Street Fighter 4 Is A Three Dimensional ‘2D’ Game

Before going on, it’s important to note that in SF4, the characters still move on a 2D plane as they always have. It’s actually nearly identical to all the other games in the series as far as design.

As always, you are pitting your guy up against someone else, and both of your characters are the focal point, they are the only interactive things in a game which centers around them. This is a series that has always been about character, and has always been 2D with great hand drawn art. Remember: Capcom offered fans a 3D game and they did not want it.

So, SF4 is a game that takes place in 2D space and focuses on only two characters at any given time. This is great news, it means you can really focus on the characters, moreso than almost any other game genre.

The Constraints of a 3D Character Art Style

3D characters are driven by ‘joints’ or ‘bones’. Each joint has some 3D points rigidly ‘glued’ to it, because of this, 3D characters, especially in games, look rigid; like action figures. In my opinion SF4 characters feel like lifeless marionettes. In a 2D game, you can quickly and easily draw any form you want. The more you want to alter the ‘form’ a 3D character, the more joints it takes, and the more complex the ‘rig’ that drives the character. Also, on consoles, the number of joints you can use are limited. This is easily distinguished when comparing 2D and 3D art:

Notice how the 3D characters look lifeless? They don't have to; it's just more difficult. Whereas before, adding a cool facial expression meant simply drawing it by hand, now it means sculpting a 3D shape: by hand. It's tedious and difficult. Also, notice how in 3D Chun Li's cloth is 'clipping' into her leg, or Cammy's wrist guard is 'clipping' into her bicep. 3D is much more difficult to get right, because you are messing with sculptures, not drawings. You could also say the foreshortening on Chun Li's arm in 2D looks weird; there are trade-offs, but in a 2D pipeline it is also much easier to alter character proportions and fix things.

There are entire web pages dedicated to the weird faces of SF4 characters. It seems one of the easiest ways to make a character look in ‘pain’ was to translate the eyeballs out of the head: it looks ridiculous when compared to the hand-drawn hit reactions:

Whereas before you had one guy drawing pictures of a character in motion (maybe with someone to color them), now it takes a team to do the same job. You often have a modeler, a technical artist, and an animator, then hopefully a graphics engineer for rendering. That's a lot of people to do something one person used to handle, and it introduces not only bureaucracy, but a complicated set of choreographed events that culminate in the final product.

This is a Capcom press image of Chun Li and it highlights my point exactly. It is harder and much more complicated to sculpt a form than draw it. Not to mention sculpt it over time, using complicated mathematical tools to manipulate geometry. However, it’s not an impossible task, and to think that this is ‘ok’ enough to release as a press image for an upcoming AAA game is crazy.

It’s not just deformation and posing, but animation in general. There is a lot of laziness associated with 3D animation. Let me be more precise: it is easier to ‘create’ animation because the computer interpolates between poses for you. As an animator, you have to work much harder to not fall into this ‘gap’ of letting the machine do the work for you. Playing SF4 you will see sweeps, hurricane kicks, and various other animations that just rotate the entire character as if on a pin. They also share and recycle the same animations on different characters, this was not possible in 2D.

One thing I find interesting is that, though the new game is 3D, it really has no motionblur. The 2D games had Chuck-Jonesesque motionblur drawn into the frames to add a quickness and ‘snap’, but it also adds an organic quality that is lacking in SF4.

EDIT: Having now logged a lot more time playing, there is indeed a weird kind of motion blur, it’s barely noticeable at all and looks almost hand painted/added.

Another odd thing, I can spot mocap when I see it, and I think the technique was used on some of the background characters, like the children playing under the bridge. The motion is so stellar that it puts the main characters to shame. That’s kind of sad. Though, all new characters introduced on the console seem to have much better animation, so maybe this is something Capcom have worked on more.

So Why Make A 3D Street Fighter?

If you aren’t going to make a game where characters can move through 3D space (no Z depth), why use a 3D art style, especially when it is harder to create expressive characters?

I will offer some reasons to ‘reboot’ the Street Fighter franchise as a 3D fighter:

  • Finally use collision detection so that characters don't clip into one another as they always have
  • Use physics to blend ragdoll into hit reactions, and also for hit detection and impulse generation; maybe allow a punch to actually connect with the opponent (gasp)
  • Use jiggly bones for something other than breasts/fat, things like muscles and flesh, to add a sense of weight
  • Employ a cloth solver; c'mon, this is a character showcase: if NBA games can solve cloth for all on-court characters, you can surely do nice cloth for two
  • Mark up the skeletons to allow for 'grab points' so that throw hand positions vary with character size and are unique
  • Attach proxies to the feet and have them interact with trash/grass on the ground in levels
  • Use IK in a meaningful way to always look at your opponent, dynamically grab him mid-animation, always keep feet on slightly uneven ground, or hit different-sized opponents (or parameterize the anims to do these)
  • Play different animations on different body parts at different times; you are not locked into the same full body on a frame like 2D
  • For instance: use 'offset animations' blended into the main animation set to dynamically convey the health of the character, or heck, change the facial animation to make them look more tired/hurt
  • Shaders! In 3D you can use many complex shaders to render photorealistic or non-photorealistic images (like cartoons)
  • You can also write shaders to do things like calculate/add motion blur!

Unfortunately: Capcom did none of these. Sure, a few of the above would have been somewhat revolutionary for the franchise, but as it stands, 3D characters add nothing to SF4, I believe they actually degrade the quality of the visuals.

EDIT: After playing more I have noticed that they are using IK (look IK) on just the head bone, shorter characters look up when a large character jumps in front of them.

posted by Chris at 12:15 PM  

Saturday, December 13, 2008

Quantic Dreams

This is what it looks like on the other side of the uncanny valley.

No longer working for Crytek, maybe I can comment on some industry-related things without worrying that my opinions could be misconstrued as those of my former employer.

EuroGamer visited Quantic Dream this week, the studio working on the game 'Heavy Rain', whose founder, de Fondaumière, arrogantly proclaimed that there is 'no longer an uncanny valley', and that there are 'very, very few' real artists in the video game industry. (A real class act, no?)

So their article starts with "We can't tell you how Heavy Rain looks, sounds or plays…", which I find kind of ridiculous, seeing as the studio's only real claim to fame right now is the hype from its co-founder, who casually claims they have accomplished one of the most amazing visual feats in the history of computer graphics (in real time, no less!).

Across the world there are thousands of outstanding artists chasing this same dream, from Final Fantasy, to Polar Express and Beowulf; people have tried to cross the ‘uncanny valley’ for years, and are getting closer every day. At Christmas you will be treated to what is probably one of the closest attempts yet. (Digital Domain’s work in Benjamin Button)

Not really having any videos to back up the hyperbole, they gave the EuroGamer staff a laundry list of statistics about their production.

I have yet to see anything stunning to back up the talk, 8 months after making his statement about crossing the uncanny valley, they released this video, which was just not even close, to be frank.

It looks like they aren’t using performance capture. Without markers on the face this means they have to solve the facial animation from elsewhere, usually a seated actress who pretends to be saying lines that were said in the other full body capture session. There’s a reason why studios like Imageworks don’t do this, it’s hard to sync the two performances together. If they have accomplished what other’s have not, with much less hardware/technology, it means they have some of the best artists/animators out there, and I say hats off to them.

But with every image they do release, and every arrogant statement, they are digging the hole deeper. The sad thing is that they could release one of the greatest interactive experiences yet, but their main claim is the most realistic CG humans yet seen, and if they fail at this, it will overshadow everything.

At least they know how their fellow PS3 devs over at Guerrilla must have been feeling for a few years now.

posted by Chris at 6:53 AM  

Wednesday, September 17, 2008

Making of the Image Metrics ‘Emily’ Tech Demo

I have seen some of the other material in the SIGGRAPH Image Metrics presskit posted online [Emily Tech Demo] [‘How To’ video], but not the video that shows the making of the Emily tech demo. So here’s that as well:

At the end, there's a quote from Peter Plantec about how Image Metrics has finally 'crossed the uncanny valley', but seriously, am I the only one who thinks the shading is a bit off? And besides that, what's the point of laying a duplicate of a face directly on top of the original in a video? Shouldn't they have shown her talking in a different setting? Maybe shown how they can remap the animation to a different face? There is no reason not to just use the original plate in this example.

posted by Chris at 4:44 PM  

Thursday, July 31, 2008

MGS4 Character Pipeline

Character Creation Pipeline

Thanks to my brother, Mike, for translating this from the original Japanese [here]

The hero, Snake, and nearly all other characters that we animate on the PS3 and that make an appearance in the game are restrained to a range of about 5 thousand to 1 million polygons (including the face). Also, the same resolution of polygon characters is used in both gameplay and cutscenes. This allows for seamless transitions between gameplay and cutscenes and makes it easier for the player to get emotionally involved in the reality of the game.

Furthermore, for all other characters, except crowds, the same resolution of polygon characters are used in game as well as in cutscenes. Separate from the resolution models used on the PS3, high rez data is modeled at the same time to generate a normal map. Wrinkles in clothing and other details are expressed through this normal map, created from the high rez model.

Of all the bones within a character's body, the number that contain and are driven by animation data is roughly around 21. But in reality, a number of helper (auxiliary) bones are used to supplement motions like twisting in the knees, elbows, arms and legs. These, however, are not driven by animation data; instead, they reference values from the basic animation-driven joints and move in like manner.


The same method is employed on the PS3, not just in XSI; all you have to do is extract the helper bones’ definition files from the XSI data and you can achieve the same kind of control on the PS3 as well. (Awesome! Rig syncing constraints and driven bones between DCC app and game engine)
Since there is no actual motion data stored inside the driven bones, you not only limit the data volume; even in the event that you need to add or delete helper bones, there's no need to reconvert the motion data, you can just adjust the model data instead.
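
Here is a rough Maya Python illustration of that helper-bone idea (the original pipeline was XSI, and these joint names and the 0.5 ratio are assumptions): the helper carries no animation of its own, it just reads a fraction of its driver's rotation through a utility node, so adding or removing helpers never touches the exported motion data.

from maya import cmds

def make_twist_helper(driver_joint, helper_joint, ratio=0.5):
    """Drive the helper's twist from the driver joint instead of from animation curves."""
    md = cmds.createNode("multiplyDivide", name=helper_joint + "_twist_md")
    cmds.setAttr(md + ".input2X", ratio)
    cmds.connectAttr(driver_joint + ".rotateX", md + ".input1X")
    cmds.connectAttr(md + ".outputX", helper_joint + ".rotateX")

# e.g. a forearm twist helper that picks up half of the wrist's X rotation
make_twist_helper("wrist_L_jnt", "forearmTwist_L_jnt")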

posted by Chris at 8:27 AM  

Thursday, July 31, 2008

MGS4 Facial Animation

Shockingly Realistic Facial Animation

Thanks to my brother, Mike, for translating  this from the original japanese [here]

One of the most notable things about MGS4 is its world-leading cutting edge facial animation. Exactly how were these real-to-life facial expressions created?

Since the Metal Gear Solid series is lip-synced for localization, voice analysis software is employed from a workload standpoint.

In MGS4, for example, lip-syncing for Japanese and English was done separately with different voice analysis software. Other emotions and expressions besides lip-sync were animated by hand. In nearly all cases, the expression and phoneme elements were worked on together simultaneously, reducing interference and allowing MGS4 to achieve its simultaneous worldwide release.

When doing voice analysis, it's necessary to set parameters for both expression components (i.e., anger, smile, etc.) and phoneme components (for all languages) separately. After setting this up, we need to see how it behaves as a rig. It's possible to use parameters for the rotation and movement of bones; however, the rig can become more complicated, and it can also become more difficult to predict how the bones will transform/change once enveloped. In other words, when facial animation is done by only controlling the bones, the designer's job intuitively becomes more difficult, and he runs into the following two problems: 1) expressing the behavior of bones, and 2) setting parameters for phonemes.

However, with shape animation (even though it has the drawback of linear interpolation) it's extremely easy to set up parameters for all your phonemes and expressions. Most of all, it's advantageous in that the designer will be able to intuitively predict the result.

For these reasons, this time on our rig we used bone-driven animation based on the results of various parameter shapes.

With this setup, using voice analysis for automated animation (not just the mouth, but automatic animation of the tongue and throat phonemes as well) and hand animation for emotions, we are able to achieve an abundance of realistic expressions.

In the following flash movie you can see how smooth muscular expressions are achieved through superb rig setup

Flash Movie:

Facial rig setup pipeline

————————————————
1. Lo-poly model driven by shape animation
2. Above that, the constrained bones
3. Polygonal mesh enveloped to the bones
4. Tangent color
5. OpenGL display (wrinkles expressed also with normal map)

————————————————

Expressions, phonemes, eyes (eyebrows), and shader driven wrinkle animation are all tab selectable.
Through the combination of various parameters we can create life-like expressions like those shown above.

The most surprising thing is that we developed a tool that automatically sets up this facial rig that allows such sophisticated control. In other words, if you enter the facial model data and run the tool, it will automatically identify the optimal position for bones; in this system the tool will create controls that include the preset parameters for emotions (a smiley face, an angry face, etc.). To perform the automated facial rigging, the facial data's topology information needs to be standardized ahead of time. If you adhere to this one rule, your setup can be done automatically, and all that's left to do is for the designer to fine-tune the controls; you have a constructed environment where you can get right into your facial animation.

Next, a rig that controls the movement of the eyeball and surrounding muscles can also be generated automatically using this tool. Since the area around the eye, like the area surrounding the mouth, is controlled by the simultaneous usage of shapes and bones, when you move the eyeball locator you get smooth muscular movement. What's more, even if you edit the shape, or redefine the configuration of the outline of the eye, it doesn't disrupt the expression of brow wrinkles or the blinking of the eye in any way.

Behind all the characters that make an appearance in this game, and appeal to the player’s emotions, we have implemented this set up and animation system; and, through it we are able to increase and maintain a high quality user experience.

posted by Chris at 1:14 AM  

Friday, July 11, 2008

Simple Perforce Animation Browser/Loader for MotionBuilder

This is a simple proof-of-concept showing how to implement a perforce animation browser via python for MotionBuilder. Clicking an FBX animation syncs it and loads it.

The script can be found here: [p4ui.py], it requires the [wx] and [p4] libraries.

Clicking directories goes down into them; clicking FBX files syncs them and loads them in MotionBuilder. This is just a test: the '[..]' doesn't even go up directories, and opening an animation does not check it out. There is good documentation for the p4 python lib, so you can start there; it's pretty straightforward and easy, and it sure beats screen-scraping p4 terminal output.

You will see the following, you should replace this with the p4 location of your animations, this will act as the starting directory.

	path1 = 'PUT YOUR PERFORCE ANIMATION PATH HERE (EXAMPLE: //DEPOT/ANIMATION)'
	info = p4i.run("info")
	print info[0]['clientRoot']

That should about do it; there are plenty of P4 tutorials out there, and my code is pretty straightforward. The only problem I hit was where I instanced it: be sure to instance it with something other than 'p4'. I did that and it did not work; using 'p4i' it worked without incident:

p4i = P4.P4()
p4i.connect()
posted by Chris at 6:45 PM  

Monday, June 23, 2008

Under the Hood: The Inner Workings of Animation on Assassin’s Creed

Under the Hood: The Inner Workings of Animation on Assassin’s Creed

Sylvain Bernard, Animation Director, Ubisoft

Animation:

  • All animation was done in 3dsMax with Biped
    • ‘Our animators do not like MotionBuilder for creating animation’
    • Would have meant porting all their tools to MotionBuilder
  • MotionBuilder was only used to clean mocap
  • They decided to ignore foot sliding in order to concentrate on a better performance and gameplay experience
  • They stressed the importance of Technical Animators
  • Up to 15 animators worked on Assassin’s Creed
  • 40% of all animation was hand keyed
  • There is no procedural animation(not counting blending)
  • They showed the entire move tree
    • sprint, run, walk, jog, slow walk, banking, strafe, 4 idles
    • 168 ground animations for altair locomotion group
    • 122 anims in climbing group

Production:

  • 90% of work was integrating animation into the environment
  • The key was pairing animators with programmers
    • Sit them together
  • Before they started one main goal of the project was ‘to do as much animation as we could’
    • They saw Next Gen as an animation showcase
  • They prototype gameplay in max to show programmers how the game should look/feel
    • How AI should react
    • How a character should interact with the environment
  • ‘In the beginning designers were given free reign to make anything they wanted, in the end we had to make a 20 page document telling them how to create levels’
    • Too much freedom leads to chaos
  • Stressed the need to involve animators in animation system development

Pipeline/Rigging:

  • All characters share the same skeleton (male/female npc, altair)
    • ‘the art director wanted characters of different heights, we said ‘no”
    • made mocking things up easy
  • They call their movement locator the ‘magic bug’
    • Locators ‘joined together’ when two characters interacted
  • NPCs use simple hinge constraints for ponytails and things
  • They had ‘no working AI for almost the first two years‘ of the project
  • They do edge detection on the collision mesh
  • Auto nav mesh generation
  • Auto ‘animation object’ placement
posted by Chris at 12:34 PM  

Saturday, June 21, 2008

Facial Stabilization in MotionBuilder using Python

Facial motion capture stabilization is basically where you isolate the movement of the face from the movement of the head. This sounds pretty simple, but it is actually a really difficult problem. In this post I will talk about the general process and give you an example facial stabilization python script.

Disclaimer: The script I have written here is loosely adapted from a MEL script in the book Mocap for Artists, and not something proprietary to Crytek. This is a great book for people of all experience levels, and has a chapter dedicated to facial mocap. Lastly, this script is not padded out or optimized.

To follow this you will need some facial mocap data, there is some freely downloadable here at www.mocap.lt. Grab the FBX file.

Andy Serkis wearing Weta's head stabilization halo

Stabilization markers

Get at least 3 markers on the actor that do not move when they move their face. These are called 'stabilization markers' (STAB markers). You will use these markers to create a coordinate space for the head, so it is important that they not move. STAB markers are commonly found on the left and right temples and the nose bridge. Using a headband and creating virtual markers from multiple solid left/right markers works even better. Headbands move; it's good to keep this in mind. Above you see a special head rig used on Kong to create stable markers.

It is a good idea to write some tools to help you out here. At work I have written tools to parse a performance and tell me the most stable markers at any given time, if you have this data, you can also blend between them.

Load up the facial mocap file you have downloaded, it should look something like this:

In the data we have, you can delete the root and the headband markers; 1-RTMPL, 1-LTMPL, and 1-MNOSE could all be considered STAB markers.

General Pipeline

As you can see, mocap data is just a bunch of translating points. So what we want to do is create a new coordinate system that has the motion of the head, and then use this to isolate the facial movement.

This will take some processing, and also an interactive user interface. You may have seen my tutorial on Creating Interactive MotionBuilder User Interface Tools. You should familiarize yourself with that because this will build on it. Below is the basic idea:

You create a library ‘myLib’ that you load into MotionBuilder’s Python environment. This is what does the heavy lifting; I say this because you don’t want to do things like send the position of every marker, every frame, to your external app via telnet. I also load pyEuclid, a great vector library, because I didn’t feel like writing my own vector class (MBuilder has no vector class).

Creating ‘myLib’

So we will now create our own library that sits inside MBuilder; this will essentially be a ‘toolkit’ that we communicate with from the outside. Your ‘myLib’ can be called anything, but it should be the place you store the functions that do the real processing jobs; you will feed data into them from the outside UI later. The first thing you will need inside the MB Python environment is something to cast FBVector3d types into pyEuclid vectors. This is fairly simple:

#imports needed for myLib to run inside MotionBuilder's Python environment
from pyfbsdk import *
from euclid import *
 
#casts an indexable point3 (e.g. FBVector3d) to a pyEuclid Vector3
def vec3(point3):
	return Vector3(point3[0], point3[1], point3[2])
 
#casts a pyEuclid vector back to FBVector3d
def fbv(point3):
	return FBVector3d(point3.x, point3.y, point3.z)

Next is something that will return a list of models when given an array of model names; this is important later when we want to feed in model lists from our external app:

#returns an array of models when given an array of model names
#useful with external apps/telnetlib ui
def modelsFromStrings(modelNames):
	output = []
	for name in modelNames:
		output.append(FBFindModelByName(name))
	return output

Now, if you were to take these snippets and save them as a file called myLib.py in your MBuilder directory tree (MotionBuilder75 Ext2\bin\x64\python\lib), you can load them into the MBuilder environment. (You should have also placed pyEuclid here)

Casting FBVectors to pyEuclid vectors over telnet

It’s always good to mock up code in telnet because, unlike the Python console in MBuilder, it supports copy/paste, etc.

In the image above, I get the position of a model in MBuilder; it returns as an FBVector3d. I then import myLib and pyEuclid and use our function above to ‘cast’ the FBVector3d to a pyEuclid vector. It can now be added, subtracted, multiplied, and more: all things that are not possible with the default MBuilder Python tools. Our other function, ‘fbv()’, casts pyEuclid vectors back to FBVector3d so that MBuilder can read them.
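
The screenshot itself isn’t reproduced here, but the round trip it shows looks roughly like this (a minimal sketch to run inside MBuilder; ‘Marker0’ is a placeholder model name):

from pyfbsdk import *
from euclid import *
from myLib import *
 
#grab a model by name and read its translation -- 'Marker0' is a placeholder
model = FBFindModelByName('Marker0')
pos = vec3(model.Translation)       #cast to a pyEuclid Vector3
offset = pos + Vector3(0, 10, 0)    #vector math FBVector3d can't do on its own
model.Translation = fbv(offset)     #cast back so MBuilder can read it (nudges the model up 10 units)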

So we can now do vector math in motionbuilder! Next we will add some code to our ‘myLib’ that stabilizes the face.

Adding Stabilization-Specific Code to ‘myLib’

One thing we will need to do a lot is generate ‘virtual markers’ from the existing markers. To do this, we need a function that returns the average position of however many vectors (marker positions) it is fed.

#returns average position of an FBModelList as FBVector3d
def avgPos(models):
	mLen = len(models)
	if mLen == 1:
		return fbv(vec3(models[0].Translation))
	total = vec3(models[0].Translation)
	for i in range(1, mLen):
		total += vec3(models[i].Translation)
	avgTranslation = total/mLen
	return fbv(avgTranslation)

Here is an example of avgPos() in use:
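
Calling it looks roughly like this (a minimal sketch to run inside MBuilder, standing in for the screenshot that sat here; it assumes a few marker models are currently selected):

from pyfbsdk import *
from myLib import *
 
#average the currently selected markers into one 'virtual marker' position
selected = FBModelList()
FBGetSelectedModels(selected, None, True)
print avgPos(selected)     #prints the average position as an FBVector3d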

Now onto the stabilization code:

#stabilizes face markers; inputs are four FBModelLists (right, left, center, markers) plus leaveOrig for keeping the original markers
def stab(right,left,center,markers,leaveOrig):
 
	pMatrix = FBMatrix()
	lSystem=FBSystem()
	lScene = lSystem.Scene
	newMarkers = []
 
	def faceOrient():
		lScene.Evaluate()
 
		Rpos = vec3(avgPos(right))
		Lpos = vec3(avgPos(left))
		Cpos = vec3(avgPos(center))
 
		#build the coordinate system of the head
		faceAttach.GetMatrix(pMatrix)
		xVec = (Cpos - Rpos)
		xVec = xVec.normalize()
		zVec = ((Cpos - vec3(faceAttach.Translation)).normalize()).cross(xVec)
		zVec = zVec.normalize()
		yVec = xVec.cross(zVec)
		yVec = yVec.normalize()
		facePos = (Rpos + Lpos)/2
 
		pMatrix[0] = xVec.x
		pMatrix[1] = xVec.y
		pMatrix[2] = xVec.z
 
		pMatrix[4] = yVec.x
		pMatrix[5] = yVec.y
		pMatrix[6] = yVec.z
 
		pMatrix[8] = zVec.x
		pMatrix[9] = zVec.y
		pMatrix[10] = zVec.z
 
		pMatrix[12] = facePos.x
		pMatrix[13] = facePos.y
		pMatrix[14] = facePos.z
 
		faceAttach.SetMatrix(pMatrix,FBModelTransformationMatrix.kModelTransformation,True)
		lScene.Evaluate()
 
	#keys the translation and rotation of an animNodeList
	def keyTransRot(animNodeList):
		for lNode in animNodeList:
			if (lNode.Name == 'Lcl Translation'):
				lNode.KeyCandidate()
			if (lNode.Name == 'Lcl Rotation'):
				lNode.KeyCandidate()
 
	Rpos = vec3(avgPos(right))
	Lpos = vec3(avgPos(left))
	Cpos = vec3(avgPos(center))
 
	#create a null that will visualize the head coordsys, then position and orient it
	faceAttach = FBModelNull("faceAttach")
	faceAttach.Show = True
	faceAttach.Translation = fbv((Rpos + Lpos)/2)
	faceOrient()
 
	#create new set of stabilized nulls, non-destructive, this should be tied to 'leaveOrig' later
	for obj in markers:
		new = FBModelNull(obj.Name + '_stab')
		newTran = vec3(obj.Translation)
		new.Translation = fbv(newTran)
		new.Show = True
		new.Size = 20
		new.Parent = faceAttach
		newMarkers.append(new)
 
	lPlayerControl = FBPlayerControl()
	lPlayerControl.GotoStart()
	FStart = int(lPlayerControl.ZoomWindowStart.GetFrame(True))
	FStop = int(lPlayerControl.ZoomWindowStop.GetFrame(True))
 
	animNodes = faceAttach.AnimationNode.Nodes
 
	for frame in range(FStart,FStop):
 
		#build proper head coordsys
		faceOrient()
 
		#update stabilized markers and key them
		for m in range (0,len(newMarkers)):
			markerAnimNodes = newMarkers[m].AnimationNode.Nodes
			newMarkers[m].SetVector(markers[m].Translation.Data)
			lScene.Evaluate()
			keyTransRot(markerAnimNodes)
 
		keyTransRot(animNodes)
 
		lPlayerControl.StepForward()

We feed our ‘stab’ function FBModelLists of right, left, and center stabilization markers, and it creates virtual markers from these groups. ‘markers’ is the list of all markers to be stabilized. ‘leaveOrig’ is an option I usually add to allow non-destructive use; in this example the function always leaves the originals (which I favor), so the flag does nothing, but you could wire it up. With the original markers left in the scene, you can immediately see if there was an error in your script. (The new motion should match the original.)
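
Before wiring up an external UI, you can sanity-check ‘stab’ directly in the MBuilder Python environment. A minimal sketch, using the STAB marker names from the example data (adjust the prefixes to match your scene); the markers-to-stabilize list is just a placeholder for whatever you actually want stabilized:

from myLib import *
 
#STAB groups from the example data: right/left temple and nose bridge
right   = modelsFromStrings(['Subject 1-RTMPL'])
left    = modelsFromStrings(['Subject 1-LTMPL'])
center  = modelsFromStrings(['Subject 1-MNOSE'])
#placeholder -- pass every face marker you want stabilized here
markers = modelsFromStrings(['Subject 1-RH1'])
stab(right, left, center, markers, False)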

Creating an External UI that Uses ‘myLib’

Earlier I mentioned Creating Interactive MotionBuilder User Interface Tools, where I explain how to screenscrape/use the telnet Python Remote Server to create an interactive external UI that floats as a window in MotionBuilder itself. I also use the libraries mentioned in the above article.

The code for the facial stabilization UI I have created is here: [stab_ui.py]

I will now step through code snippets pertaining to our facial STAB tool:

#mbPipe() is the telnet wrapper from the interactive MotionBuilder UI article mentioned above
def getSelection():
	selectedItems = []
	mbPipe("selectedModels = FBModelList()")
	mbPipe("FBGetSelectedModels(selectedModels,None,True)")
	for item in (mbPipe("for item in selectedModels: print item.Name")):
		selectedItems.append(item)
	return selectedItems

This returns a list of strings that are the names of the currently selected models in MBuilder. This is the main thing our external UI does: the user needs to interactively choose the right, left, and center markers, then all the markers that will be stabilized.

At the left here you see what the UI looks like. To add some feedback to the buttons, you can make them change to reflect that the user has selected markers. We do so by changing the button text.

Example:

def rStabClick(self,event):
	self.rStabMarkers = getSelection()
	print str(self.rStabMarkers)
	self.rStab.Label = (str(len(self.rStabMarkers)) + " Right Markers")

This also stores the markers the user has chosen in the variable ‘rStabMarkers‘. Once we have all of the user’s marker selections, we need to send them to ‘myLib‘ in MBuilder so that it can run our ‘stab‘ function on them. This happens when they click ‘Stabilize Markerset‘.

def stabilizeClick(self,event):
	mbPipe('from euclid import *')
	mbPipe('from myLib import *')
	mbPipe('rStab = modelsFromStrings(' + str(self.rStabMarkers) + ')')
	mbPipe('lStab = modelsFromStrings(' + str(self.lStabMarkers) + ')')
	mbPipe('cStab = modelsFromStrings(' + str(self.cStabMarkers) + ')')
	mbPipe('markerset = modelsFromStrings(' + str(self.mSetMarkers) + ')')
	mbPipe('stab(rStab,lStab,cStab,markerset,False)')

Above we now use ‘modelsFromStrings‘ to feed ‘myLib’ the names of selected models. When you run this on thousands of frames, it will actually hang for up to a minute or two while it does all the processing. I discuss optimizations below. Here is a video of what you should have when stabilization is complete:


Kill the keyframes on the root (faceAttach) to remove head motion

Conclusion: Debugging/Optimization

Remember: Your stabilization will only be as good as your STAB markers. It really pays off to create tools to check marker stability.

Sometimes the terminal/screen scraping runs into issues. The mbPipe function can be padded out a lot and made more robust; what is here was just an example (a rough sketch of a hardened version follows the log below). If you look at the external Python console, you can see exactly what mbPipe is sending to MBuilder, and what it is receiving back through the terminal:

Sending>>> selectedModels = FBModelList()
Sending>>> FBGetSelectedModels(selectedModels,None,True)
Sending>>> for item in selectedModels: print item.Name
['Subject 1-RH1', 'Subject 1-RTMPL']
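
The original mbPipe lives in the earlier article, so what follows is only a rough, assumption-heavy sketch of what a slightly hardened version could look like, built directly on telnetlib against MotionBuilder’s Python Remote Server (assumed here to be on its default port 4242 with a ‘>>>’ prompt); it is not the wrapper from that article:

import telnetlib
 
#a hardened mbPipe sketch: one connection per call, explicit timeout, echo stripped
def mbPipe(command, host='127.0.0.1', port=4242, timeout=5.0):
	tn = telnetlib.Telnet(host, port, timeout)
	tn.read_until('>>>', timeout)          #wait for the interactive prompt
	tn.write(command + '\n')               #send one line of Python
	raw = tn.read_until('>>>', timeout)    #read everything up to the next prompt
	tn.close()
	lines = [line.strip() for line in raw.splitlines()]
	#drop the echoed command and the prompt, keep whatever MBuilder printed back
	return [line for line in lines if line and line != command.strip() and '>>>' not in line]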

All of the above can be padded out and optimized. For instance, you could try to do everything without a single lPlayerControl.StepForward() or lScene.Evaluate(), but this takes a lot of MotionBuilder/programming know-how; it involves only using the keyframe data to generate your matrices, positions, etc., and never querying a model.
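
As a starting point for that approach, here is a small, hedged sketch that pulls translation keys straight off a marker’s FCurves instead of stepping the player; it assumes standard pyfbsdk FCurve access and that the marker’s translation is actually keyed:

from pyfbsdk import *
from euclid import Vector3
 
#reads a model's translation keys directly from its FCurves (no StepForward/Evaluate)
def readTranslationKeys(model):
	keys = []
	transNode = model.Translation.GetAnimationNode()   #returns None if the property isn't animated
	xCurve, yCurve, zCurve = [node.FCurve for node in transNode.Nodes]
	for i in range(len(xCurve.Keys)):
		v = Vector3(xCurve.Keys[i].Value, yCurve.Keys[i].Value, zCurve.Keys[i].Value)
		keys.append((xCurve.Keys[i].Time, v))
	return keys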

posted by Chris at 10:10 PM  

Monday, June 16, 2008

High Speed Photography with the Casio EX-F1

At work we got the Casio EX-F1 for animation reference. It’s a really great, cheap solution for those looking to record high-speed reference (300/600/1200 fps) or HD (1080) video. Here are some videos I took a few weeks ago:

posted by Chris at 5:27 PM  

Sunday, June 15, 2008

RigPorn: Kung Fu Panda

Here are some screens of animation rigs from Kung Fu Panda:

In a shot:

posted by Chris at 2:20 AM  

Tuesday, June 10, 2008

Poor Man’s Mocap

This year my friend Judd gave a talk entitled Uncharted Animation: An In-depth Look at the Character Animation Workflow and Pipeline. In the talk, he showed what they call ‘poor man’s mocap’, where the animator can load up a sequence of frames that is synced with the timeline in Maya; as someone scrubs an animation, it scrubs the frames of the video. I have duplicated this in a small MAXScript available in cryTools. You can grab it in the [Tutorials/Files] section.

posted by Chris at 2:24 PM  
