Stumbling Toward 'Awesomeness'

A Technical Art Blog

Wednesday, September 17, 2008

Visualizing MRI Data in 3dsMax

Many of you might remember the fluoroscopic shoulder carriage videos I posted on my site about 4 years ago. I always wanted to do a sequence of MRIs of the arm moving around. Thanks to Helena, an MRI tech I met through someone, I did just that: I was able to get ~30 mins of idle time on the machine while on vacation.

The data I got was basically image data: slices along an axis. I wanted to visualize this data in 3D, but the hospital did not have software to do that. I really wanted to see the muscles and bones posed in three-dimensional space as the arm went through different positions, so I decided to write some visualization tools myself in maxscript.

At left is a 512×512 MRI of my shoulder, arm raised (image downsampled to 256, animation on 5’s). The MRI data has some ‘wrap around’ artifacts because it was a somewhat small MRI (3 tesla) and I am a big guy; when things are close to the ‘wall’ they get these artifacts, and we wanted to see my arm. I am uploading the raw data for you to play with; you can download it here: [data01] [data02]

Volumetric Pixels

Above is an example of a 128×128, 10-slice reconstruction with greyscale cubes.

I wrote a simple tool called ‘mriView’. I will explain how I created it below and you can download it and follow along if you want. [mriView]

The first thing I wanted to do was create ‘volumetric pixels’, or ‘voxels’, from the data. I decided to do this by going through all the images, culling what I didn’t want, and creating grayscale cubes out of the rest. There is a great example in the maxscript docs called ‘How To … Access the Z-Depth channel’ which I picked some pieces from; it basically shows you how to efficiently read an image and generate 3d data from it.

But we first need to get the data into 3dsMax. I needed to load sequential images, and I decided the easiest way to do this was load AVI files. Here is an example of loading an AVI file, and treating it like a multi-part image (with comments):

on loadVideoBTN pressed do
     (
          --ask the user for an avi
          f = getOpenFileName caption:"Open An MRI Slice File:" filename:"c:/" types:"AVI(*.avi)|*.avi|MOV(*.mov)|*.mov|All|*.*|"
          mapLoc = f
          if f == undefined then (return undefined)
          else
          (
               map = openBitMap f
               --get the width and height of the video
               heightEDT2.text = map.height as string
               widthEDT2.text = map.width as string
               --get how many frames the video has
               vidLBL.text = (map.numFrames as string + " slices loaded.")
               loadVideoBTN.text = getfilenamefile f
               imageLBL.text = ("Full Image Yeild: " + (map.height*map.width) as string + " voxels")
               slicesEDT2.text = map.numFrames as string
               threshEDT.text = "90"
          )
          updateImgProps()
     )

We now have the height in pixels, the width in pixels, and the number of slices. This is enough data to begin a simple reconstruction.

We will do so by visualizing the data with cubes, one cube per pixel that we want to display. Be careful, however: a simple 256×256 image is already potentially 65,536 cubes per slice! In the tool, you can see that I put in the original image values, but allow the user to crop out a specific area.

Below we go through each slice, row by row, looking pixel by pixel for ones that have a gray value above a threshold (what we want to see); when we find one, we make a box in 3d space:

height = 0.0
updateImgProps()
 
--this loop iterates through all slices (frames of video)
for frame = (slicesEDT1.text as integer) to (slicesEDT2.text as integer) do
(
     --seek to the frame of video that corresponds to the current slice
     map.frame = frame
     --loop that traverses y, which corresponds to the image height
     for y = mapHeight1 to mapHeight2 do
     (
          voxels = #()
          currentSlicePROG.value = (100.0 * y / totalHeight)
          --read a line of pixels
          pixels = getPixels map [0,y-1] totalWidth
          --loop that traverses x, the line of pixels across the width
          for x = 1 to totalWidth do
          (
               if (greyscale pixels[x]) < threshold then
               (
                    --if you are not a color we want to store: do nothing
               )
               --if you are a color we want, we will make a cube with your color in 3d space
               else
               (
                    b = box width:1 length:1 height:1 name:(uniqueName "voxel_")
                    b.pos = [x,-y,height]
                    b.wirecolor = color (greyscale pixels[x]) (greyscale pixels[x]) (greyscale pixels[x])
                    append voxels b
               )
          )
          --garbage collection is important on large datasets
          gc()
     )
     --increment the height to bump your cubes to the next slice
     height+=1
     progLBL.text = ("Slice " + (height as integer) as string + "/" + (totalSlices as integer) as string + " completed")
     slicePROG.value = (100.0 * (height/totalSlices))
)

Things really start to choke when you are using cubes, mainly because you are generating so many entities in the world. I added the option to merge all the cubes row by row, which sped things up and helped memory, but this was still not really the visual fidelity I was hoping for…

Point Clouds and ‘MetaBalls’

I primarily wanted to generate meshes from the data, so the next thing I tried was making a point cloud, then using that to generate a ‘BlobMesh’ (metaball) compound geometry type. In the example above, you see the head of my humerus and the tissue connected to it. Below is the code; it is almost simpler than the boxes, it just takes some finessing of Edit Poly. I have only commented the changes:

I make a plane and then delete all the verts to give me a ‘clean canvas’ of sorts; if anyone knows a better way of doing this, let me know:

p = convertToPoly(Plane lengthsegs:1 widthsegs:1)
p.name = "VoxelPoint_dataSet"
polyop.deleteVerts $VoxelPoint_dataSet #(1,2,3,4)

And where we created a box before, we now create a point:

polyop.createVert $VoxelPoint_dataSet [x,-y,height]

This can get really time- and resource-intensive, so I would let some of these go overnight. This was pretty frustrating, because it slowed iteration time down a lot, and the BlobMesh modifier was very slow as well.

Faking Volume with Transparent Planes


I was talking to Marco at work (a Technical Director) and showing him some of my results, and he asked me why I didn’t just try using transparent slices. I told him I had thought about it, but I really know nothing about the material system in 3dsMax, much less its maxscript exposure. He said that was a good reason to try it, and I agreed.

I started by making one material per slice. This worked well until I realized that the 3dsMax Material Editor has a limit of 24 material slots. Instead of fixing this, they added ‘multi-materials’, which can have n sub-materials, so I adjusted my script to use sub-materials:

--here we set the number of sub-materials to the number of slices
meditMaterials[matNum].materialList.count = totalSlices
--you also have to properly set the materialIDList
for m=1 to meditMaterials[matNum].materialList.count do
(
     meditMaterials[matNum].materialIDList[m] = m
)

Now we iterate through, generating the planes, assigning sub-materials to them with the correct frame of video for the corresponding slice:

p = plane name:("slice_" + frame as string) pos:[0,0,frame] width:totalWidth length:totalHeight
p.lengthsegs = 1
p.widthsegs = 1
p.material = meditMaterials[matNum][frame]
p.castShadows = off
p.receiveshadows = off
meditMaterials[matNum].materialList[frame].twoSided = on
meditMaterials[matNum].materialList[frame].selfIllumAmount = 100
meditMaterials[matNum].materialList[frame].diffuseMapEnable = on
newMap = meditMaterials[matNum].materialList[frame].diffuseMap = Bitmaptexture filename:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
newmap = meditMaterials[matNum].materialList[frame].opacityMap = Bitmaptexture fileName:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
showTextureMap p.material on
mat += 1

This was very surprising: it not only runs fast, it also looks great. Of course you are generating no geometry, but it is a great way to visualize the data. The below example is a 512×512 MRI of my shoulder (arm raised) rendered in realtime. The only problem I had was an alpha-test render error when viewed directly from the bottom, but this looks to be a 3dsMax issue.


I rendered the slices cycling from bottom to top. In one MRI the arm is raised, in the other, the arm lowered. The results are surprisingly decent. You can check that video out here. [shoulder_carriage_mri_xvid.avi]


You can also layer multiple slices together; above I have isolated the muscles and soft tissue from the skin, cartilage and bones. I did this by looking for pixels in certain luminance ranges. In the image above I am ‘slicing’ away the white layer halfway down the torso, and below you can see a video of this in realtime as I search for the humerus; this is a really fun and interesting way to view it:

Where to Go From here

I can now easily load up any of the MRI data I have and view it in 3d, though I would like to be able to better create meshes from specific parts of the data, in order to isolate muscles or bones. To do this I need to allow the user to ‘pick’ a color from part of the image, then use that to isolate just those pixels and remesh just that part. I would also like to add something that allows you to slice through the planes from any axis. That shouldn’t be difficult; it will just take more time.
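
As a rough sketch of that idea (this is not part of mriView; it assumes numpy and a slice already loaded as a 2D greyscale array, and the band values are made up), isolating a tissue type by luminance range could look something like this:

#sketch of luminance-range isolation, assuming a slice as a 2D uint8 numpy array
import numpy as np

def isolateRange(sliceImg, lo, hi):
	#keep pixels whose grey value falls inside [lo, hi], zero out the rest
	mask = (sliceImg >= lo) & (sliceImg <= hi)
	return np.where(mask, sliceImg, 0).astype(np.uint8)

#hypothetical bands; you would sample these from the pixels the user picks
#muscle = isolateRange(sliceImg, 60, 140)
#skin   = isolateRange(sliceImg, 180, 255)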

posted by Chris at 3:48 PM  

Friday, September 5, 2008

Talking about Light Transport

EDIT: I would like this to be a ‘living document’ of sorts; please send me terms and definitions, and feel free to correct mine!

Whether you’re a technical artist in games or film, when trying to create realistic scenes and characters, the more you know about how light works and interacts with surfaces in the world, and the more reference of this you have, the better you can explain why you think an image looks ‘off’.

You are a technical artist: you need to be able to communicate with technical people using terminology they understand. We often act as bridges between artists and programmers, so it is very important for us to be able to communicate with both appropriately.

Light transport is basically the big nerd word for how light gets from one place to another, and scattering is usually how surfaces interact with light.

You can see something in a rendered image and know it looks ‘wrong’, but it’s important to understand why it looks wrong, and be able to accurately explain to the programming team how it can be improved upon. To do this you should be able to:

1) present examples of photographic reference

2) communicate with general terms that others can understand

General Terminology

The following terms come from optics, photography and art; you should not only understand them, but use them when explaining why something does not look ‘right’. I will give both the technical term and my shortest approximation:

Specular Reflection – sharp reflection of light from a surface that somewhat retains an image (eg. glossy)
Diffuse Reflection – uneven reflection of light from a surface that does not retain the image (eg. matte)
Diffuse Interreflection – light reflected off other diffuse objects
Diffraction – what happens to a wave when it hits an obstacle, this could be an ocean wave hitting a jetty, or a light wave hitting a grate.
Depth of Field – the area in an image that is in focus
Bokeh – the quality of the blurred, out-of-focus areas of a photo
Chromatic Aberration – the colored fringes around an object or light refracted through an object; it happens because certain wavelengths of light get bent ‘out of sync’. I usually think of an old projector or monitor that is misaligned; that’s what this effect looks like.
Caustics – light rays shined through a refractive object onto another surface
Angle of Incidence – this is actually the angle something is off from ‘straight on’, but we mainly use this when talking about shaders or things that are view-dependent. If you were to draw a line from your eyes to a surface, the angle between this and its ‘normal’ is the ‘angle of incidence’. Car paint whose color changes as you walk around it is a good example: it changes based on the angle you see it from. Just remember, your head doesn’t have to move; the object can move, changing the angle between your sightline and the surface.
Refractive Index (Refraction) – how light’s direction changes when moving through an object. The refractive index of water is 1.3; glass is higher, at 1.4 to 1.6.
Reflection – the changing of direction of light, usually casting light onto something, like the camera or our eyes
Glossiness – the ability of a surface to reflect light specularly; the smaller and tighter the area of specular reflection, the ‘glossier’ something looks
Ray – think of a ray as a single beam of light; a single particle. This particle moves in a ‘ray’, when we talk about ‘ray tracing’ we mean tracing the path of a ray from a light source through a scene.
Fresnel – pronounced ‘fre-nel’, it is the amount of view-dependent reflectance on a surface. A great example is rim lighting, but fresnel effects are also used to fake a fuzzy look, x-ray effects, light reflected off the ocean, etc. (there is a short code sketch of this just after the list below)
Aerial Perspective – this is how things get lighter as they recede into the distance; the more air, or ‘atmosphere’, between you and the object (mountain, building, etc.), the lighter it appears. I grew up in Florida; we don’t have much of this effect at all due to the low elevation and clear skies.
High Dynamic Range Imaging (HDR) – this just means you are dealing with more light data than a normal image can hold. An HDR image has a larger range of light information stored in it. With today’s prosumer DSLRs it is possible to capture 14-bit images that theoretically contain ’13-14 stops’ of linear data. A digital example is the sky in the game Crysis: it was a dynamic HDR skydome, meaning the game engine was computing more light than could be displayed on the monitor. In these situations, the data is tonemapped to create visually interesting lighting.
Tone Mapping – this is how you ‘map’ one set of colors onto another; in games it generally means ‘mapping’ high dynamic range data into a limited dynamic range, like a TV set or monitor. This can be done by ‘blooming’ areas that are overbright, among other techniques.
Bloom – ‘bloom’ is the gradient fringe you see around really brightly lit areas in an image, like a window to a bright sky seen from inside a dark room.
Albedo – the extent to which a surface diffusely reflects light from the sun.
Afterimage Effect – this belongs to a group of effects I call ‘accumulation-buffer effects’. The after-image effect visually ‘burns in’ the brightest parts of a previous image, simulating the way our eyes adjust to bright light.
Deferred Rendering – a type of rendering where you first render scene attributes into framebuffer storage instead of shading directly to the final pixel output, deferring the lighting to a later pass. Deferred rendering generally allows you to use many more light sources in real-time rendering. One problem deferred rendering has is that it cannot easily deal with transparent items.
Scanline Rendering – a very old technique where you render one row of pixels after another. Pixar’s RenderMan is a scanline renderer, and the Nintendo DS also uses scanline rendering.
Skylight (or Diffuse Sky Radiation) – the fancy term for light that comes not directly from the sun, but is scattered by the sky. It is what tints outdoor light on earth blue, or orange at sunrise and sunset.
Scattering (including Sub-Surface Scattering) – this just means how particles are ‘scattered’, or deviate, from an original path. In sub-surface scattering, light enters an object and bounces around inside (below the surface). This leads to things like the orange/red color of your ear when there is a light behind it.
Participating Media – media whose volume affects light transport: they not only reflect or refract light, but scatter it. Things like glass, water, fog and smoke are all participating media.
Ambient Occlusion – this is a shading effect where occluded areas are shaded, much like access maps of the old days, cracks and areas where light would have a hard time ‘getting into’ are shaded.
Screen Space Ambient Occlusion – a rendering technique that fakes ambient occlusion with some z-buffer trickery. By taking the distances between objects in a scene, the algorithm generates approximated occlusion data in real time. (first used on Crysis!)
Global Illumination – a way of rendering where you measure light bounces, as the light bounces around a scene, this generates indirect lighting. An example of this would be how a red ball next to a white wall will cast red light onto the wall.
Z-Buffer – where 3d depth information is stored in a 2d image. A 16-bit z-buffer has 65,536 levels of depth, while an 8-bit one has 256. Items on the same level cause flickering, or ‘z-fighting’.
Z-Fighting – this occurs when polygons have similar z-buffer values; it is a term you should know when dealing with virtual cameras, not real ones. You can see this flickering when you create two co-planar planes on top of each other in a 3d app. To eliminate z-fighting you can use 24- or 32-bit z-buffers.
Frustum – everything in the camera’s field of view; generally the entire volume that the camera can see.
Environment Reflection – a way of faking a reflection by applying an image to a surface; this can be a spherical map, cube map, etc. Some environmental reflections (cubemaps) can be generated at runtime as you move an object around (most notably in racing games).
Cubic Environment Mapping – a way of generating an environmental reflection map with six sides that are mapped onto a cube, recreating the reflection of the environment around an object.
SkyBox – creating a ‘sky’ in a virtual scene by enclosing the entire scene in a large box with images on 5 sides.
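
As a side note, the fresnel term above is easy to get a feel for in a few lines of code. This is just a sketch of Schlick’s approximation, a common way to approximate view-dependent reflectance; it is not pulled from any particular engine:

#Schlick's approximation of the fresnel term: reflectance rises toward 1.0 as
#the view direction approaches a grazing angle. f0 is the reflectance looking
#straight at the surface (roughly 0.02-0.04 for water and most dielectrics).
def schlickFresnel(cosTheta, f0=0.04):
	return f0 + (1.0 - f0) * (1.0 - cosTheta) ** 5

print schlickFresnel(1.0)   #looking straight on: just f0
print schlickFresnel(0.0)   #at a grazing angle: fully reflective (1.0)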

Here are some example sentences:

Artist: This place here where the light shines on the surface is too small, it makes my object look too wet.
Technical Artist: The surface is too glossy, as a result, the area of specular reflection where you see the light is very small.

Artist: Like in the photos we took, things in the distance should be lighter, in the engine can we make things lighter as they get farther away?
Technical Artist: As things recede into the distance, aerial perspective causes them to become lighter; to achieve this we should increase the environment fog slightly.

Taking Photographic Reference

I feel every technical artist who assesses visual output should own a proper Digital Single Lens Reflex camera (DSLR), no matter what quality or how old. This will force you to understand and work with many of the terms above. The artist in you will want to take good pictures, and this is much more than good composition, you are essentially recording light. You will need to learn a lot to be able to properly meter and record light in different situations. Because it’s digital, you will be able to iterate and learn fast, recognizing cause and effect relationships the same way we do with the realtime feedback of scripting languages in 3d apps.

posted by Chris at 8:17 AM  

Thursday, August 7, 2008

Three Headed Monkey Magics!

woah!


I am currently in the US, home for the first time in 8 months. I had some packages here, one of which my now (ex)girlfriend had said was too important to mail to Germany, despite the sketch of a three headed monkey on the shipping box. Behold: the original Secret of Monkey Island PC game, signed by Tim Schafer, Ron Gilbert and Dave Grossman! Tim was nice enough to arrange this, we met and he showed us around his studio, Double Fine, at GDC this year. I had to fight hard to hold back the fanboy-ness!

posted by Chris at 9:10 AM  

Tuesday, August 5, 2008

The Price of Tech: Lost in Tran$lation

I grew up in the US though I now live in Europe. This is just a short post about something that I find really unfair and frustrating: International Pricing of High End Tech Items. Let’s check out the new Nikon D700:

Nikon d700 Germany: 2,599 eur

Nikon d700 United States: 2,999 usd – 1,825 eur

Nikon d700 Britain: 1,892 pounds – 2,383 eur

It certainly would seem that Germany is getting the short end of that stick. In many cases, people in Europe could fly to the US, buy electronics, and come back for the price of getting them here. And many people do.

Not to mention many companies have better warranties in the US, where the market is more competitive. (Example: many Nikon cameras and lenses have a 5-year warranty in the US and a 2-year warranty here in Germany.)

When the Wii came out in the US, it was 250 usd; when it came out here, it was 250 eur. The eur was riding high: in the US it was impossible to get a Wii, however they were readily available in all stores here, leading many to speculate that it was because Nintendo was making 400 usd per Wii (250 eur) in Europe. This isn’t just about inflation; some items are priced 1:1 or a little over (3dsMax below), but others are more ridiculous, Photoshop for example.

3dsMax 2009

3dsMax 2009 Germany: 3,900 eur – (4,641 eur with mandatory tax)

3dsMax 2009 United States: 3,495 usd – 2,257 eur

Photoshop CS3

Photoshop CS3 Germany: 1,027 eur

Photoshop CS3 United States: 607 usd – 390 eur

Photoshop CS3 Britain: 500 pounds – 629 eur

The above is just completely inane. Some companies will tell you they have to charge a premium on products in Europe because it costs extra to localize them. But come on.. Stuff like the above is ridiculous.

When you start looking at really high-end tech, items that have only one distributor in Europe but many in the US, like motion capture systems, the difference in pricing, due to 1) inflation, 2) companies just charging more in Europe, and 3) single distributors in a region having no competition, makes them prohibitively expensive (we’re talking tens of thousands of dollars of price difference). It would be cheaper to set up a company in the US just to make these purchases, and I am sure people do.

But seriously, Adobe, you should be ashamed of yourselves.

posted by Chris at 3:33 AM  

Friday, August 1, 2008

MGS4 Cluster Constraint Setup

From Ideas to Reality with XSI’s Cluster Constraints

Thanks to my brother, Mike, for translating this from the original Japanese [here]

When asked about which features of XSI helped the most on this project, Hideki Sasaki (Facial Animation Set Up Lead) came back with the rather surprising answer, “There were many but in regards to facial animation, cluster constraints really saved us.” In our facial rig setups, every point-cluster of your target shape is tied to bones using cluster constraints. Cluster constraints also were extremely useful in the following situations:

Since on MGS4 we were really trying to lighten the processing load on the PS3, we employed a method where tangent colors change only with the rotation of bones. In other words, if a point is simply constrained to coordinates, the animation will behave correctly but the tangent color will not change. Basically, you run into a dilemma where shading goes from its default state to one where it will no longer change. However, by using cluster constraints to constrain both normals and tangents, the correct rotation values are input, and that’s how we accomplished the shading.

(This sounds really interesting; I guess they are talking about smoothing angle tangents? In many engines, like the CryEngine, the smoothing angle is based on the character’s default pose at export and never changes. This makes it sound like they exported cluster data to ‘drive’ the smoothing angle in realtime.)

Furthermore, nearly all fluctuating objects attached to the character’s clothing, in cutscenes and gameplay, are done by the PS3’s simulation engine. That being said, there are some cases in cutscenes with intense action where it’s difficult to simulate. In those cases we use animation simulated in XSI’s Syflex. The basic workflow in those situations is as follows:

1. To express fluctuations in the clothing, make a simulation in Syflex

2. Convert the cached simulation results into shape targets

3. Constrain bones to the points on your shape controlled object with cluster constraints

4. Bone envelope the final model to be used on the PS3 (Basically the same idea as a facial rig)

(Baking arbitrary data to bones ftw!)

The advantage of using this type of control is, even if you temporarily get a little caving in or some kind of flaw in the simulation result, you are able to apply corrections with “Secondary Shape Mode” at stage 2 of the workflow.

It’s possible to edit the shape’s geometry using vertex shift; you can also use smooth and push to fix little imperfections if needed. It goes without saying that the results of these intuitive adjustments will be reflected in the envelope control’s PS3 data as well.

Sasaki explains, “You can set cluster constraints for all components: vertex, polygon and edge. I believe XSI is the only package that comes standard with support for constraining both normals and tangents. Without the help of these cluster constraint functions we could never have accomplished techniques like cloth-simulation transfer to bones, or our ideas concerning facial rig setup.”

(they export/sync cluster rig element data to engine)

posted by Chris at 12:54 PM  

Thursday, July 31, 2008

MGS4 Character Pipeline

Character Creation Pipeline

Thanks to my brother, Mike, for translating this from the original Japanese [here]

The hero, Snake, and nearly all other characters that we animate on the PS3 and that make an appearance in the game are constrained to a range of about 5 thousand to 1 million polygons (including the face). Also, the same resolution of polygon character is used in both gameplay and “cutscenes”. This allows for a seamless transition between gameplay and cutscenes and makes it easier for the player to get emotionally involved in the reality of the game.

Furthermore, for all other characters except crowds, the same resolution of polygon character is used in game as well as in cutscenes. Separate from the resolution models used on the PS3, high-res data is modeled at the same time in order to generate a normal map. Wrinkles in clothing and other details are expressed through this normal map, created from the high-res model.

Of all the bones within the character’s body, the number that contain and are driven by animation data is roughly 21. In reality, though, a number of helper (auxiliary) bones are used to supplement motions like twisting in the knees, elbows, arms and legs. These, however, are not driven by animation data; instead, they reference values from the basic animation-driven joints and move in like manner.
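
As a generic illustration of that idea (this is not Kojima Productions’ actual setup, just the concept), a forearm twist helper can simply read a fraction of the wrist’s animated twist instead of storing keys of its own:

#toy sketch of driven helper bones: they store no animation of their own, they
#just read the animated joint and distribute its twist along the limb
def forearmTwistValues(wristTwistDeg, helperCount=3):
	#weights are hypothetical; a real rig tunes these per character
	return [wristTwistDeg * (i + 1) / (helperCount + 1) for i in range(helperCount)]

print forearmTwistValues(90.0)   #e.g. [22.5, 45.0, 67.5]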


The same method is employed on the PS3, not just in XSI; all you have to do is extract the helper bones’ definition files from the XSI data and you can achieve the same kind of control on the PS3 as well. (Awesome! Rig syncing of constraints and driven bones between the DCC app and the game engine.)
Since there is no actual motion data stored inside the driven bones, you are able not only to limit the data volume, but in the event that you need to add or delete helper bones there’s no need to reconvert the motion data; you can just adjust the model data instead.

posted by Chris at 8:27 AM  

Thursday, July 31, 2008

MGS4 Facial Animation

Shockingly Realistic Facial Animation

Thanks to my brother, Mike, for translating this from the original Japanese [here]

One of the most notable things about MGS4 is its world-leading, cutting-edge facial animation. Exactly how were these true-to-life facial expressions created?

Since the Metal Gear Solid series is lip-synced for localization, voice analysis software is employed from a workload standpoint.

In MGS4, for example, lip-syncing for Japanese and English was done separately with different voice analysis software. Other emotions and expressions besides lip-syncing were animated by hand. In nearly all cases, the expression and phoneme elements were worked on together simultaneously, reducing interference and allowing MGS4 to achieve its simultaneous worldwide release.

When doing voice analysis, it’s necessary to set parameters for both expression components (i.e., anger, smile, etc.) and phoneme components (for all languages) separately. After setting this up, we need to see how it behaves as a rig. It’s possible to use parameters for the rotation and movement of bones; however, the rig can become more complicated, and it can also become more difficult to predict how the bones will transform/change once enveloped. In other words, when facial animation is done by only controlling the bones, the designer’s job becomes less intuitive and he runs into the following two problems: 1) expressing the behavior of bones, and 2) setting parameters for phonemes.

However, with shape animation (even though it has the drawback of linear interpolation) it’s extremely easy to set up parameters for all your phonemes and expressions. Most of all, it’s advantageous in that the designer is able to intuitively predict the result.

For these reasons, on this rig we used bone-driven animation based on the results of various shape parameters.
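
Conceptually, ‘shapes driving bones’ boils down to a weighted blend of per-parameter offsets. Here is a toy sketch of the idea (not the actual MGS4 implementation, and the offsets below are made up):

#toy sketch: each parameter (a phoneme or an expression) contributes a weighted
#offset to a bone's rest position; the rig sums the contributions
def blendBonePosition(restPos, targetOffsets, weights):
	x, y, z = restPos
	for name, (dx, dy, dz) in targetOffsets.items():
		w = weights.get(name, 0.0)
		x += dx * w; y += dy * w; z += dz * w
	return (x, y, z)

#hypothetical lip-corner bone with half a 'smile' and a full 'ah' phoneme
print blendBonePosition((0.0, 0.0, 0.0),
                        {'smile': (0.4, 0.2, 0.0), 'ah': (0.0, -0.3, 0.1)},
                        {'smile': 0.5, 'ah': 1.0})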

With this setup, using automated voice-analysis animation (not just the mouth, but automatic animation of the tongue and throat phonemes as well) and hand animation for emotions, we are able to achieve an abundance of realistic expressions.

In the following flash movie you can see how smooth muscular expressions are achieved through superb rig setup:

Flash Movie:

Facial rig setup pipeline

————————————————
1. Lo-poly model driven by shape animation
2. Above that, the constrained bones
3. Polygonal mesh enveloped to the bones
4. Tangent color
5. OpenGL display (wrinkles expressed also with normal map)

————————————————

Expressions, phonemes, eyes (eyebrows), and shader-driven wrinkle animation are all tab-selectable.
Through the combination of various parameters we can create life-like expressions like those shown above.

The most surprising thing is that we developed a tool that automatically sets up this facial rig that allows such sophisticated control. In other words, if you enter the facial model data and run the tool, it will automatically identify the optimal positions for bones; the tool will also create controls that include the preset parameters for emotions (a smiley face, an angry face, etc.). To perform the automated facial rigging, the facial data’s topology information needs to be standardized ahead of time. If you adhere to this one rule, your setup can be done automatically, and all that’s left is for the designer to fine-tune the controls; you have a constructed environment where you can get right into your facial animation.

Next, a rig that controls the movement of the eyeball and surrounding muscles can also be generated automatically using this tool. Since the area around the eye, like the area surrounding the mouth, is controlled by the simultaneous use of shapes and bones, when you move the eyeball locator you get smooth muscular movement. What’s more, even if you edit the shape, or redefine the configuration of the outline of the eye, it doesn’t disrupt the expression of brow wrinkles or the blinking of the eye in any way.

Behind all the characters that make an appearance in this game and appeal to the player’s emotions, we have implemented this setup and animation system; through it we are able to achieve and maintain a high-quality user experience.

posted by Chris at 1:14 AM  

Tuesday, July 29, 2008

3dsMax 2008 Node Event System: Does Not Exist

After writing a bit of code to leverage the new Node Event System and then not being able to get it to work properly, I found a post by someone from Autodesk saying that it is not present in Max 2008, even though it is in the documentation. This is somewhat frustrating; I hope this post saves you some time and frustration.

posted by Chris at 2:30 PM  

Monday, July 28, 2008

Gleaning Data from the 3dsMax ‘Reaction Manager’

This is something we had been discussing over at CGTalk: we couldn’t find a way to figure out Reaction Manager links through maxscript. It just is not exposed. Reaction Manager is like Set Driven Key in Maya or a Relation Constraint in MotionBuilder. In order to sync rigging components between the packages, you need to be able to query these driven relationships.

I set about doing this by checking dependencies, and it turns out it is possible. It’s a headache, but it is possible!

The problem is that even though slave nodes have controllers with names like “Float_Reactor”, the master nodes have nothing that distinguishes them. I saw that if I got the dependents of a master node (its controllers, specifically the one that drives the slave), there was something called ‘ReferenceTarget:Reaction_Master‘:

refs.dependents $.position.controller
#(Controller:Position_Rotation_Scale, ReferenceTarget:Reaction_Master, Controller:Position_Reaction, ReferenceTarget:Reaction_Set, ReferenceTarget:Reaction_Manager, ReferenceTarget:ReferenceTarget, ReferenceTarget:Scene, Controller:Position_Rotation_Scale, $Box:box02 @ [58.426544,76.195091,0.000000], $Box:box01 @ [-42.007244,70.495964,0.000000], ReferenceTarget:NodeSelection, ReferenceTarget:ReferenceTarget, ReferenceTarget:ReferenceTarget)

This is actually a class, as you can see below:

exprForMAXObject (refs.dependents $.position.controller)[2]
"<<Reaction Master instance>>"
 
getclassname (refs.dependents $.position.controller)[2]
"Reaction Master"

So now we get the dependents of this ‘Reaction Master’, and it gives us the node that it is driving:

refs.dependentNodes (refs.dependents $.position.controller)[2]
#($Box:box02 @ [58.426544,76.195091,0.000000])

So here is a fn that gets Master information from a node:

fn getAllReactionMasterRefs obj =
(
	local nodeRef
	local ctrlRef
	for n = 1 to obj.numSubs do
	(
		ctrl = obj[n].controller
		if (ctrl!=undefined) then
		(
			for item in (refs.dependents ctrl) do
			(
				if item as string == "ReferenceTarget:Reaction_Master" then
				(
					nodeRef = (refs.dependentNodes item)
					ctrlRef = ctrl
				)
			)
			getAllReactionMasterRefs obj[n]
		)
	)
	return #(nodeRef, ctrlRef)
)

The function above returns:

getAllReactionMasterRefs $
#(#($Box:box02 @ [58.426544,76.195091,0.000000]), Controller:Position_Rotation_Scale)

The first item is an array of the referenced node, and the second is the controller that is driving *some* aspect of that node.

You now loop through this node looking for ‘Float_Reactor‘, ‘Point3_Reactor‘, etc, and then query them as stated in the manual (‘getReactionInfluence‘, ‘getReactionFalloff‘, etc) to figure out the relationship.

Here is an example function that prints out all reaction data for a slave node:

fn getAllReactionControllers obj =
(
	local list = #()
	for n = 1 to obj.numSubs do
	(
		ctrl = obj[n].controller
		if (ctrl!=undefined) then
		(
			--print (classof ctrl)
			if (classof ctrl) == Float_Reactor \
			or (classof ctrl) == Point3_Reactor \
			or (classof ctrl) == Position_Reactor \
			or (classof ctrl) == Rotation_Reactor \
			or (classof ctrl) == Scale_Reactor then
			(
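				--reactorDumper is a small helper (not shown here) that walks the
				--reaction controller and prints the info dumped further below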
				reactorDumper obj[n].controller data
			)
		)
		getAllReactionControllers obj[n]
	)
)

Here is the output from ‘getAllReactionControllers $Box2‘:

[Controller:Position_Reaction]
ReactionCount - 2
ReactionName - My Reaction
    ReactionFalloff - 1.0
    ReactionInfluence - 100.0
    ReactionStrength - 1.2
    ReactionState - [51.3844,-17.2801,0]
    ReactionValue - [-40.5492,-20,0]
ReactionName - State02
    ReactionFalloff - 2.0
    ReactionInfluence - 108.665
    ReactionStrength - 1.0
    ReactionState - [65.8385,174.579,0]
    ReactionValue - [-48.2522,167.132,0]

Conclusion
So, once again, no free lunch here. You can loop through the scene looking for Masters, then derive the slave nodes, then dump their info. It shouldn’t be too difficult, as you can only have one Master, but if you have multiple reaction controllers in each node affecting the other, it could be a mess. I threw this together in a few minutes just to see if it was possible, not to hand out a polished, working implementation.

posted by Chris at 4:42 PM  

Monday, July 28, 2008

Fixing Clipboard Problems in Photoshop

Over the past few years I have noticed that Photoshop often, usually after it has been left idling for a few hours or days, no longer imports the Windows clipboard.

Here is a fix if you don’t mind getting your hands dirty in the registry:

[HKEY_CURRENT_USER\Software\Adobe\Photoshop\9.0]
"AlwaysImportClipboard"=dword:00000001

The above is for Photoshop CS2; depending on your version you will have to look in a different registry location. There is also a problem when you hit a ‘size limit’ for an incoming clipboard image and Photoshop dumps it. This can also be circumvented by editing the registry:

[HKEY_CURRENT_USER\Software\Adobe\Photoshop\9.0]
"MaxClipSize"=dword:00000000
posted by Chris at 10:24 AM  

Friday, July 11, 2008

Simple Perforce Animation Browser/Loader for MotionBuilder

This is a simple proof-of-concept showing how to implement a perforce animation browser via python for MotionBuilder. Clicking an FBX animation syncs it and loads it.

The script can be found here: [p4ui.py], it requires the [wx] and [p4] libraries.

Clicking directories goes down into them; clicking fbx files syncs them and loads them in MotionBuilder. This is just a test; the ‘[..]’ doesn’t even go up directories. Opening an animation does not check it out. There is good documentation for the p4 python lib, so you can start there; it’s pretty straightforward and easy, and it sure beats screen-scraping p4 terminal output.

You will see the following; you should replace this with the p4 location of your animations, as it will act as the starting directory.

	path1 = 'PUT YOUR PERFORCE ANIMATION PATH HERE (EXAMPLE: //DEPOT/ANIMATION)'
	info = p4i.run("info")
	print info[0]['clientRoot']

That should about do it; there are plenty of P4 tutorials out there, and my code is pretty straightforward. The only problem I had was where I instanced it: be sure to instance it as something other than ‘p4’. I did this and it did not work; using ‘p4i’ it worked without incident:

p4i = P4.P4()
p4i.connect()
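
To round this out, here is a minimal sketch of the sync-and-load step itself; the depot path is just an example, and it assumes the p4 lib and MotionBuilder’s pyfbsdk are available:

#minimal sketch: sync an FBX from Perforce and open it in MotionBuilder
import P4
from pyfbsdk import FBApplication

def syncAndOpen(depotPath):
	p4i = P4.P4()
	p4i.connect()
	try:
		p4i.run("sync", depotPath)                #grab the latest revision
		localPath = p4i.run("fstat", depotPath)[0]["clientFile"]
	finally:
		p4i.disconnect()
	FBApplication().FileOpen(localPath)           #load it into MotionBuilder

#syncAndOpen("//depot/animation/idle.fbx")
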
posted by Chris at 6:45 PM  

Sunday, June 29, 2008

Debugging a Bluescreen

This is a tip that a coworker (Tetsuji) showed me a year or so ago. I was pretty damn sure my ATI drivers were bluescreening my system, but I wanted to hunt down proof. So: you have just had a bluescreen and your PC rebooted. Here’s how to hunt down what happened.

First thing you should see when you log back in is this:

It’s really important that you not do anything right now; especially don’t click one of those buttons. Click the ‘click here‘ text and then you will see this window.

Ok, so this doesn’t tell us much at all. We want to get the ‘technical information’, so click the link for that and you will see something like this:

Here is why we did not click those buttons before: when you click them, these files get deleted. So copy this path and go to this folder. Copy the contents elsewhere and close all those windows. You now have these three files:

The ‘dmp’ file (dump file) will tell us what bluescreened our machine, but we need some tools to read it. Head on over to the Microsoft site and download ‘Debugging Tools for Windows’ (x32, x64). Once installed, run ‘WinDbg‘. Select File->Open Crash Dump… and point it at your DMP file. This will open; scroll down and look for something like this:

In this example the culprit was ‘pgfilter.sys‘, something installed by ‘Peer Guardian’, a hacky privacy protection tool I use at home. There is a better way to cut through a dump file: you can also type ‘!analyze -v‘, which will generate something like this:

In this example above you see that it’s an ATI driver issue, which I fixed by replacing the card with an nvidia and tossing the ATI into our IT parts box (junkbox).

posted by Chris at 5:01 PM  

Sunday, June 29, 2008

You Suck At Photoshop

You Suck at Photoshop always cracks me up, you might like it as well

posted by Chris at 1:47 PM  

Monday, June 23, 2008

Under the Hood: The Inner Workings of Animation on Assassin’s Creed

Under the Hood: The Inner Workings of Animation on Assassin’s Creed

Sylvain Bernard, Animation Director, Ubisoft

Animation:

  • All animation was done in 3dsMax with Biped
    • ‘Our animators do not like MotionBuilder for creating animation’
    • Would have meant porting all their tools to MotionBuilder
  • MotionBuilder was only used to clean mocap
  • They decided to ignore foot sliding in order to concentrate on a better performance and gameplay experience
  • They stressed the importance of Technical Animators
  • Up to 15 animators worked on Assassin’s Creed
  • 40% of all animation was hand keyed
  • There is no procedural animation (not counting blending)
  • They showed the entire move tree
    • sprint, run, walk, jog, slow walk, banking, strafe, 4 idles
    • 168 ground animations for altair locomotion group
    • 122 anims in climbing group

Production:

  • 90% of work was integrating animation into the environment
  • The key was pairing animators with programmers
    • Sit them together
  • Before they started one main goal of the project was ‘to do as much animation as we could’
    • They saw Next Gen as an animation showcase
  • They prototype gameplay in max to show programmers how the game should look/feel
    • How AI should react
    • How a character should interact with the environment
  • ‘In the beginning designers were given free rein to make anything they wanted; in the end we had to make a 20-page document telling them how to create levels’
    • Too much freedom leads to chaos
  • Stressed the need to involve animators in animation system development

Pipeline/Rigging:

  • All characters share the same skeleton (male/female npc, altair)
    • ‘the art director wanted characters of different heights, we said “no”’
    • made mocking things up easy
  • They call their movement locator the ‘magic bug’
    • Locators ‘joined together’ when two characters interacted
  • NPCs use simple hinge constraints for ponytails and things
  • They had ‘no working AI for almost the first two years‘ of the project
  • They do edge detection on the collision mesh
  • Auto nav mesh generation
  • Auto ‘animation object’ placement
posted by Chris at 12:34 PM  

Sunday, June 22, 2008

3D Models not Subject to Copyright

I saw this over at slashdot:

“The US Court of Appeals for the Tenth Circuit has affirmed (PDF) a ruling that a plain, unadorned wireframe model of a Toyota vehicle is not a creative expression protected under copyright law. The court analogized the wire-frame models to photographs: the owner of an object does not have a copyright in all images of the object, but a photographer may have a limited copyright over a particular image based on artistic choices such as costumery, lighting, posing, etc. Thus, the modelers could only copyright any ‘incremental contribution’ they made to Toyota’s vehicles; in the case of plain models, there was nothing new to protect. This could be a two-edged sword — companies that produce goods may not be able to stop modelers from imaging those products, but modelers may not be able to prevent others from copying their work.”

This will have some interesting ramifications. And I don’t just mean for the Limbo of the Lost guys. (j/k)

posted by Chris at 11:09 PM  

Sunday, June 22, 2008

AutoDesk Masterclass: Python for MotionBuilder Artists

In 2007, my friend Jason gave an AutoDesk Masterclass entitled: Python Scripting for MotionBuilder Artists. It has been available online and I would like to mention it for anyone who is interested in Python and MotionBuilder.

Here’s what you get for only 40 bucks:

118 page PowerPoint presentation
72 page Full Documentation
21 Scripts
6 Scenes
2 text files
8 .mov videos capturing 1 hour 20 minute lecture

Buy it here: Python Scripting for MotionBuilder Artists

posted by Chris at 1:35 PM  

Saturday, June 21, 2008

Facial Stabilization in MotionBuilder using Python

Facial motion capture stabilization is basically where you isolate the movement of the face from the movement of the head. This sounds pretty simple, but it is actually a really difficult problem. In this post I will talk about the general process and give you an example facial stabilization python script.

Disclaimer: The script I have written here is loosely adapted from a MEL script in the book Mocap for Artists, and not something proprietary to Crytek. This is a great book for people of all experience levels, and has a chapter dedicated to facial mocap. Lastly, this script is not padded out or optimized.

To follow this you will need some facial mocap data, there is some freely downloadable here at www.mocap.lt. Grab the FBX file.

Andy Serkis wearing Weta’s head-stabilization marker ‘halo’

Stabilization markers

Get at least 3 markers on the actor that do not move when they move their face. These are called ‘stabilization markers’ (STAB markers). You will use these markers to create a coordinate space for the head, so it is important that they not move. STAB markers are commonly found on the left and right temples and the nose bridge. Using a headband and creating virtual markers from multiple solid left/right markers works even better. Headbands move; it’s good to keep this in mind. Above you see a special head rig used on Kong to create stable markers.

It is a good idea to write some tools to help you out here. At work I have written tools to parse a performance and tell me the most stable markers at any given time, if you have this data, you can also blend between them.
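
One simple way to score a candidate STAB set (this is just the idea, not the tool mentioned above): markers that are truly rigid relative to each other keep near-constant pairwise distances, so a low variance in those distances over the take means a more stable set.

#rough sketch: score a candidate STAB marker set by the variance of the
#pairwise distances between its markers over the take (lower is more stable)
def stabScore(frames):
	#frames: one entry per frame, each a list of (x, y, z) marker positions
	def dist(a, b):
		return sum((a[i] - b[i]) ** 2 for i in range(3)) ** 0.5
	markerCount = len(frames[0])
	variances = []
	for i in range(markerCount):
		for j in range(i + 1, markerCount):
			d = [dist(f[i], f[j]) for f in frames]
			mean = sum(d) / len(d)
			variances.append(sum((x - mean) ** 2 for x in d) / len(d))
	return sum(variances) / len(variances)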

Load up the facial mocap file you have downloaded, it should look something like this:

In the data we have, you can delete the root and the headband markers; 1-RTMPL, 1-LTMPL, and 1-MNOSE could all be considered STAB markers.

General Pipeline

As you can see, mocap data is just a bunch of translating points. So what we want to do is create a new coordinate system that has the motion of the head, and then use this to isolate the facial movement.

This will take some processing and also an interactive user interface. You may have seen my tutorial on Creating Interactive MotionBuilder User Interface Tools; you should familiarize yourself with that because this builds on it. Below is the basic idea:

You create a library ‘myLib’ that you load into MotionBuilder’s python environment. This is what does the heavy lifting; I say this because you don’t want to do things like send the position of every marker, every frame, to your external app via telnet. I also load pyEuclid, a great vector library, because I didn’t feel like writing my own vector class. (MBuilder has no vector class)

Creating ‘myLib’

So we will now create our own library that sits inside MBuilder; this will essentially be a ‘toolkit’ that we communicate with from the outside. Your ‘myLib’ can be called anything, but it should be the place you store functions that do the real processing jobs; you will feed data into them from the outside UI later. The first thing you will need inside the MB python environment is something to cast FBVector3d types into pyEuclid vectors. This is fairly simple:

#casts point3 strings to pyEuclid vectors
def vec3(point3):
	return Vector3(point3[0], point3[1], point3[2])
 
#casts a pyEuclid vector to FBVector3d
def fbv(point3):
	return FBVector3d(point3.x, point3.y, point3.z)

Next is something that will return an FBModelList of models from an array of names, this is important later when we want to feed in model lists from our external app:

#returns an array of models when given an array of model names
#useful with external apps/telnetlib ui
def modelsFromStrings(modelNames):
	output = []
	for name in modelNames:
		output.append(FBFindModelByName(name))
	return output

Now, if you were to take these snippets and save them as a file called myLib.py in your MBuilder directory tree (MotionBuilder75 Ext2\bin\x64\python\lib), you can load them into the MBuilder environment. (You should have also placed pyEuclid here)

Casting FBVector3d values to pyEuclid vectors in a telnet session

It’s always good to mock-up code in telnet because, unlike the python console in MBuilder, it supports copy/paste etc..

In the image above, I get the position of a model in MBuilder; it returns as an FBVector3d. I then import myLib and pyEuclid and use our function above to ‘cast’ the FBVector3d to a pyEuclid vector. It can now be added, subtracted, multiplied, and more: all things that are not possible with the default MBuilder python tools. Our other function, ‘fbv()‘, casts pyEuclid vectors back to FBVector3d, so that MBuilder can read them.

So we can now do vector math in motionbuilder! Next we will add some code to our ‘myLib’ that stabilizes the face.

Adding Stabilization-Specific Code to ‘myLib’

One thing we will need to do a lot is generate ‘virtual markers’ from the existing markers. To do this, we need a function that returns the average position of however many vectors (marker positions) it is fed.

#returns average position of an FBModelList as FBVector3d
def avgPos(models):
	mLen = len(models)
	if mLen == 1:
		return models[0].Translation
	total = vec3(models[0].Translation)
	for i in range (1, mLen):
		total += vec3(models[i].Translation)
	avgTranslation = total/mLen
	return fbv(avgTranslation)

Here is an example of avgPos() in use:

Now onto the stabilization code:

#stabilizes face markers, input 4 FBModelList arrays, leaveOrig  for leaving original markers
def stab(right,left,center,markers,leaveOrig):
 
	pMatrix = FBMatrix()
	lSystem=FBSystem()
	lScene = lSystem.Scene
	newMarkers = []
 
	def faceOrient():
		lScene.Evaluate()
 
		Rpos = vec3(avgPos(right))
		Lpos = vec3(avgPos(left))
		Cpos = vec3(avgPos(center))
 
		#build the coordinate system of the head
		faceAttach.GetMatrix(pMatrix)
		xVec = (Cpos - Rpos)
		xVec = xVec.normalize()
		zVec = ((Cpos - vec3(faceAttach.Translation)).normalize()).cross(xVec)
		zVec = zVec.normalize()
		yVec = xVec.cross(zVec)
		yVec = yVec.normalize()
		facePos = (Rpos + Lpos)/2
 
		pMatrix[0] = xVec.x
		pMatrix[1] = xVec.y
		pMatrix[2] = xVec.z
 
		pMatrix[4] = yVec.x
		pMatrix[5] = yVec.y
		pMatrix[6] = yVec.z
 
		pMatrix[8] = zVec.x
		pMatrix[9] = zVec.y
		pMatrix[10] = zVec.z
 
		pMatrix[12] = facePos.x
		pMatrix[13] = facePos.y
		pMatrix[14] = facePos.z
 
		faceAttach.SetMatrix(pMatrix,FBModelTransformationMatrix.kModelTransformation,True)
		lScene.Evaluate()
 
	#keys the translation and rotation of an animNodeList
	def keyTransRot(animNodeList):
		for lNode in animNodeList:
			if (lNode.Name == 'Lcl Translation'):
				lNode.KeyCandidate()
			if (lNode.Name == 'Lcl Rotation'):
				lNode.KeyCandidate()
 
	Rpos = vec3(avgPos(right))
	Lpos = vec3(avgPos(left))
	Cpos = vec3(avgPos(center))
 
	#create a null that will visualize the head coordsys, then position and orient it
	faceAttach = FBModelNull("faceAttach")
	faceAttach.Show = True
	faceAttach.Translation = fbv((Rpos + Lpos)/2)
	faceOrient()
 
	#create new set of stabilized nulls, non-destructive, this should be tied to 'leaveOrig' later
	for obj in markers:
		new = FBModelNull(obj.Name + '_stab')
		newTran = vec3(obj.Translation)
		new.Translation = fbv(newTran)
		new.Show = True
		new.Size = 20
		new.Parent = faceAttach
		newMarkers.append(new)
 
	lPlayerControl = FBPlayerControl()
	lPlayerControl.GotoStart()
	FStart = int(lPlayerControl.ZoomWindowStart.GetFrame(True))
	FStop = int(lPlayerControl.ZoomWindowStop.GetFrame(True))
 
	animNodes = faceAttach.AnimationNode.Nodes
 
	for frame in range(FStart,FStop):
 
		#build proper head coordsys
		faceOrient()
 
		#update stabilized markers and key them
		for m in range (0,len(newMarkers)):
			markerAnimNodes = newMarkers[m].AnimationNode.Nodes
			newMarkers[m].SetVector(markers[m].Translation.Data)
			lScene.Evaluate()
			keyTransRot(markerAnimNodes)
 
		keyTransRot(animNodes)
 
		lPlayerControl.StepForward()

We feed our ‘stab’ function FBModelLists of right, left, and center stabilization markers, and it creates virtual markers from these groups. Then ‘markers’ is all the markers to be stabilized. ‘leaveOrig’ is an option I usually add that allows for non-destructive use; in this example I have just made the fn leave the originals, as I favor this, so the option does nothing, but you could hook it up. With the original markers left, you can immediately see if there was an error in your script. (new motion should match orig)

Creating an External UI that Uses ‘myLib’

Earlier I mentioned Creating Interactive MotionBuilder User Interface Tools, where I explain how to screenscrape/use the telnet Python Remote Server to create an interactive external UI that floats as a window in MotionBuilder itself. I also use the libraries mentioned in the above article.

The code for the facial stabilization UI I have created is here: [stab_ui.py]

I will now step through code snippets pertaining to our facial STAB tool:

def getSelection():
	selectedItems = []
	mbPipe("selectedModels = FBModelList()")
	mbPipe("FBGetSelectedModels(selectedModels,None,True)")
	for item in (mbPipe("for item in selectedModels: print item.Name")):
		selectedItems.append(item)
	return selectedItems

This returns a list of strings that are the currently selected models in MBuilder. This is the main thing that our external UI does. The person needs to interactively choose the right, left, and center markers, then all the markers that will be stabilized.

At the left here you see what the UI looks like. To add some feedback to the buttons, you can make them change to reflect that the user has selected markers. We do so by changing the button text.

Example:

def rStabClick(self,event):
	self.rStabMarkers = getSelection()
	print str(self.rStabMarkers)
	self.rStab.Label = (str(len(self.rStabMarkers)) + " Right Markers")

This also stores all the markers the user has chosen into the variable ‘rStabMarkers‘. Once we have all the markers the user has chosen, we need to send them to ‘myLib‘ in MBuilder so that it can run our ‘stab‘ function on them. This will happen when they click ‘Stabilize Markerset‘.

def stabilizeClick(self,event):
	mbPipe('from euclid import *')
	mbPipe('from myLib import *')
	mbPipe('rStab = modelsFromStrings(' + str(self.rStabMarkers) + ')')
	mbPipe('lStab = modelsFromStrings(' + str(self.lStabMarkers) + ')')
	mbPipe('cStab = modelsFromStrings(' + str(self.cStabMarkers) + ')')
	mbPipe('markerset = modelsFromStrings(' + str(self.mSetMarkers) + ')')
	mbPipe('stab(rStab,lStab,cStab,markerset,False)')

Above we now use ‘modelsFromStrings‘ to feed ‘myLib’ the names of selected models. When you run this on thousands of frames, it will actually hang for up to a minute or two while it does all the processing. I discuss optimizations below. Here is a video of what you should have when stabilization is complete:


Kill the keyframes on the root (faceAttach) to remove head motion

Conclusion: Debugging/Optimization

Remember: Your stabilization will only be as good as your STAB markers. It really pays off to create tools to check marker stability.

Sometimes the terminal/screen scraping runs into issues. The mbPipe function can be padded out a lot and made more robust, this here was just an example. If you look at the external python console, you can see exactly what mbPipe is sending to MBuilder, and what it is receiving back through the terminal:

Sending>>> selectedModels = FBModelList()
Sending>>> FBGetSelectedModels(selectedModels,None,True)
Sending>>> for item in selectedModels: print item.Name
['Subject 1-RH1', 'Subject 1-RTMPL']

All of the above can be padded out and optimized. For instance, you could try to do everything without a single lPlayerControl.StepForward() or lScene.Evaluate(), but this takes a lot of MotionBuilder/programming knowhow; it involves only using the keyframe data to generate your matrices, positions etc, and never querying a model.

posted by Chris at 10:10 PM  

Friday, June 20, 2008

360 Degree Streaming Video

This is a video from a company called Immersive Media. It’s a 360 degree streaming video you can pan around, and even zoom in. Awesome stuff, their hardware even does realtime stitching, and they have an underwater housing. Check out the site for more vids, they have been to some great locations.

posted by Chris at 12:01 PM  

Friday, June 20, 2008

A Functional MotionBuilder Python Console

I was talking to my friend Marco the other day.  As he is a real programmer, he is somewhat equipped with the needed skills required to decode MotionBuilder’s procedurally-generated Python documentation.  We were both frustrated, fighting with the ‘Python Console Tool’, when I showed him the telnet interface he was like “why don’t you just use that?”

And this is what I started doing. I now do much of my testing and work in the telnet console because, unlike the built-in console that MotionBuilder offers, the telnet window at least offers copy/paste, and you can press the up arrow to cycle through previous commands that you have entered. I would suggest using this until Autodesk adds usable features to their ‘Python Console Tool’.
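
As a rough sketch, driving that telnet interface from an external script looks something like this (the port, 4242, and the ‘>>>’ prompt are the defaults assumed here):

#sketch of talking to MotionBuilder's Python Remote Server via telnet
import telnetlib

tn = telnetlib.Telnet("127.0.0.1", 4242)
tn.read_until(">>>")                               #wait for the python prompt
tn.write("models = FBModelList()\n")
tn.read_until(">>>")
tn.write("FBGetSelectedModels(models,None,True)\n")
tn.read_until(">>>")
tn.write("print models.GetCount()\n")              #how many models are selected
print tn.read_until(">>>")
tn.close()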

Here’s an example:

posted by Chris at 1:08 AM  

Friday, June 20, 2008

Quickly Graphing Python Data in MotionBuilder

I have been researching quick ways to output MotionBuilder data visually, which I might post about later (I am doing some matplotlib tests here at home). The following is probably a ‘no-brainer’ to people with a programming background, but I found it interesting. Below I am using simple hashes to graph values visually in the console.

data = [20, 15, 10, 7, 5, 4, 3, 2, 1, 1, 0]
for i in data: print '#' * i

This will output something like so:

####################
###############
##########
#######
#####
####
###
##
#
#

Here’s a better example referencing some data names, and its output in the MB pyConsole:

#'names' here is a parallel list of label strings, one per data entry
for i in range(0,len(data)): print names[i] + ' ' + ('#' * data[i])


posted by Chris at 12:36 AM  

Tuesday, June 17, 2008

RIP Stan Winston

One of my heroes passed away today. I never knew the guy but it made me very sad and hollow to hear he had passed. He was responsible for many of the creatures in films that made me eventually want to be a Technical Director.

posted by Chris at 12:00 AM  

Monday, June 16, 2008

High Speed Photography with the Casio EX-F1

At work we got the Casio EX-F1 for animation reference. It’s a really great, cheap solution for those looking to record high speed reference (300/600/1200fps) or hd (1080) video. Here are some videos I took a few weeks ago:

posted by Chris at 5:27 PM  

Sunday, June 15, 2008

RigPorn: Kung Fu Panda

Here are some screens of animation rigs from Kung Fu Panda:

In a shot:

posted by Chris at 2:20 AM  

Sunday, June 15, 2008

Building a J1 Remote Trigger for Vicon Datastations

Remote Trigger? Why Would I Want That?

Vicon Datastations allow you to string off a remote trigger, which lets you start and stop a motion capture take with a physical button. This could allow you to start/stop motion capture with sensors or anything else. In our case, we wanted to start/stop another device at the exact same time and have it sync’d with the mocap data, and also allow one person to run both the device and the mocap session.

Disclaimer: I am aware that the remote interface is the same for the V8i/612/624/460/V6 Datastations, but I built this for the V8i, which looks like this:

This is what the ‘J1 REMOTE‘ port looks like on the back of your Datastation:

RTFM

Here is the description of the J1 Remote in the Vicon hardware manual:

Located directly below the camera interface connectors, the J1 connector function is to allow the remote control of data capture from external switches or photoelectric sensors. Connecting Start (pin 3) or Stop (pin 5) to Ground (pin 7) will initiate the selected function. Pin 1 generates a negative-going TTL gated reference signal, which is aligned to the camera Horizontal Synchronisation (HD) signal and present when data capture is being performed.

The hardware manual will tell you that the J1 Remote Interface Connector is a Lemo Part (FGG.1B.307.CLAD52). So you will have to order this (follow the link). Below is the pin out from the manual, it’s pretty simple stuff:

Building the Trigger

Working with Relays

So, what we want to do is make a start and a stop button, or you could make an on/off switch. I made a button. The button flips a relay, which is like a switch. Below you see 5 pins, labeled ‘start‘, ‘stop‘, ‘grnd‘ and ‘coil‘. When you apply power to the coil, it will connect the grnd from stop to start and vice versa. Because it’s a magnet that flips the switch, nothing from the inner circuitry of the trigger can send any interference to the Vicon Datastation.

Below you see two relays, one triggers start/stop, the other triggers an LED. You can get relays that flip multiple poles at once. If you wanted to start/stop other devices with the same buttons you would add more relays, or use a multi-pole. In my example below I was sure to get relays and LEDs that work with a 9v battery, this way you do not need resistors or anything to alter the voltage.

The Altogether

This is what a final remote trigger can look like, green starts, red stops. The green LED can be on while capturing. The above relay will flip the light on/off based on button contact, even if red is pressed first, so you may want to go a different route if someone has butter fingers. The cord is durable microphone cord, as we only need 3 wires (start/stop/grnd, mic cable =  left/right/grnd).

Note: The J1 Remote Trigger works in Vicon Workstation; however, when Vicon updated its software to IQ, they did not want to spend the time to continue supporting the remote trigger. IQ supports newer technology like the ‘MX Remote’ made by Vicon, which they would rather have you purchase. So yes, if you update your Vicon software, certain features of your Vicon hardware will become useless.

posted by Chris at 1:50 AM  

Tuesday, June 10, 2008

Poor Man’s Mocap

This year my friend Judd gave a talk entitled Uncharted Animation: An In-depth Look at the Character Animation Workflow and Pipeline. In the talk, he showed what they call ‘poor man’s mocap’, where the animator can load up a sequence of frames and it is sync’d with the timeline in Maya. So as someone scrubs an animation, it scrubs the frames of the video. I have duplicated this in a small maxscript available in cryTools. You can grab it in the [Tutorials/Files] section.

posted by Chris at 2:24 PM  
