Stumbling Toward 'Awesomeness'

A Technical Art Blog

Thursday, April 12, 2018

Creating a Volume Slider in UE4 with Python

Working in cinematics and Sequencer, I often want a global audio slider to lower audio levels while I am working on a sequence, so I thought it would make a good, simple tool example. This is completely bare bones: just a single slider in a dialog. For something more complex, you can check out scriptWrangler.

import sys
import unreal
from PySide import QtCore, QtGui, QtUiTools
 
class VolumeSlider(QtGui.QDialog):
 
    def __init__(self, parent=None):
        super(VolumeSlider, self).__init__(parent)
        self.setWindowTitle("Volume Slider")
        self.slider = QtGui.QSlider()
        self.slider.sliderMoved.connect(self.slider_fn)
        self.slider.setRange(0,100)
        self.slider.setValue(100)
 
        layout = QtGui.QVBoxLayout()
        layout.addWidget(self.slider)
        self.setLayout(layout)
 
        #load the master SoundClass
        self.master_sound_class = unreal.load_asset(None, name='/Engine/EngineSounds/Master')
 
    def slider_fn(self):
        volume_float = self.slider.value()/100.00
        #set the volume to the desired value
        self.master_sound_class.properties.set_editor_property(name='volume', value=volume_float)
 
APP = None
if not QtGui.QApplication.instance():
    APP = QtGui.QApplication(sys.argv)
 
main_window = VolumeSlider()
main_window.show()
posted by Chris at 1:05 AM  

Friday, April 6, 2018

Vector Math Examples in UE4 Python

One of the most visited/indexed posts on this site was the brief Maya vector math post. Now that we are releasing Python in UE4, I thought I would port those examples to UE4 Python.

Creating Vectors

Let’s query the bounding box extents of a skeletal mesh actor. This will return two points in world space as vectors:

bbox = mesh.get_actor_bounds(False)
v1, v2 = bbox[0], bbox[1]

If we print the first vector, we see it's a struct of type Vector:

print v1
#<Struct 'Vector' (0x000001EACECEFE78) {x: 540.073303, y: 32.021194, z: 124.710869}>;

If you want the vector as a tuple or something you can export elsewhere, just make a tuple of its components:

print (v1.x, v1.y, v1.z)
#(540.0733032226562, 32.02119445800781, 124.71086883544922)

If you want to create your own vector, you simply do:

my_vec3 = unreal.Vector(1,2,3)
print my_vec3
#<Struct 'Vector' (0x000001EACECECA40) {x: 1.000000, y: 2.000000, z: 3.000000}>

Length / Distance / Magnitude

The Vector struct supports many mathematical operations. Let's say we want the distance from v1 to v2: we subtract them, which returns a new vector, then we get the length ('size' in this case) of that vector:

new_vec = v2-v1
print new_vec.size()
#478.868011475

The vector struct has a lot of convenience functions built in (check out the docs here). For instance, let's get the distance from v1 to v2 (the diagonal across our bounding box) without doing the above, by calling the dist() function:

print v1.dist(v2)
#478.868011475

There is also a distSquared function if you just want to quickly compare which distance is greater without calculating the real distance.
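
A quick sketch of that kind of comparison (in the UE4 Python reflection the function appears in snake_case as dist_squared; treat this as illustrative):

#compare distances without paying for the square root
origin = unreal.Vector(0, 0, 0)
if v1.dist_squared(v2) < v1.dist_squared(origin):
    print 'v2 is closer to v1 than the origin is'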

USE CASE: FIND SMALL ACTORS

Using what we have so far, let's play in the default scene:

We can iterate through the scene and find actors under a certain size/volume:

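#note: 'static_meshes' is assumed to be a list of StaticMeshActor objects gathered from the
#level beforehand (e.g. via unreal.GameplayStatics.get_all_actors_of_class); it is not
#defined in this snippet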
for actor in static_meshes:
    mesh_component = actor.get_component_by_class(unreal.StaticMeshComponent)
    v1, v2 = mesh_component.get_local_bounds()
    if v1.dist(v2) < 150:
        print actor.get_name()
#Statue

I am querying the bounds of the static mesh components because get_bounds() on the actor class returns the unscaled bounds. You may notice that many things in the scene, including the statue and the floor, are scaled; this is reflected in the get_local_bounds() of the mesh component.

DOT PRODUCT / ANGLE BETWEEN TWO VECTORS

If we want to know the angle between two vectors, we use their dot product. Let's create two new vectors, v1 and v2, because our previous ones were not really vectors per se, but point locations. Just like in UE4, you use the '|' pipe operator to take a vector dot product.

v1 = unreal.Vector(0,0,1)
v2 = unreal.Vector(0,1,1)
v1.normalize()
v2.normalize()
print v1 | v2
print v1 * v2

Notice that the asterisk multiplies the vectors component-wise, while the pipe returns the dot product (scalar product) as a float: the sum of the component-wise products.

For the angle between, let’s import python’s math library:

import math
dot = v1|v2
print math.acos(dot) #returns 0.785398180512
print math.acos(dot) * 180 / math.pi
#returns 45.0000009806

Above I used the Python math library, but there is also an Unreal math library available, unreal.MathLibrary. You should check that out; it has vector math functions that don't exist in the vector class:

dot = v1|v2
print unreal.MathLibrary.acos(dot) #returns 0.785398180512
print unreal.MathLibrary.acos(dot) * 180 / math.pi #returns 45.0000009806
print unreal.MathLibrary.radians_to_degrees(unreal.MathLibrary.acos(dot)) #returns: 45.0

USE CASE: CHECK ACTOR COLINEARITY

Let’s use the dot product to check if chairs are colinear, or facing the same direction. Let’s dupe one chair and call it ‘Chair_dupe’:

fwd_vectors = {}
for actor in static_meshes:
    if 'Chair' in  actor.get_name():
        actor_vec = actor.get_actor_forward_vector()
        for stored_actor in fwd_vectors:
            if actor_vec | fwd_vectors[stored_actor] == 1.0:
                print actor.get_name(), 'is colinear to', stored_actor
        fwd_vectors[actor.get_name()] = actor_vec
#returns: Chair_dupe is colinear to Chair
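
One caveat worth adding (not from the original post): the exact == 1.0 comparison only passes here because the duplicated chair's forward vector is bit-identical to the original's. For actors that are merely close to parallel, compare against a tolerance inside the loop instead:

#treat vectors within a small angular tolerance as facing the same direction
if actor_vec | fwd_vectors[stored_actor] > 0.9999:
    print actor.get_name(), 'is (nearly) colinear to', stored_actor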

I hope this was helpful, I can post some other examples later.

posted by Chris at 1:23 AM  

Wednesday, July 5, 2017

Skin Weights Savior

Lots of people were interested in Deformer Weights and ways to save/load skin weights faster. Trowbridge pointed out that the API now allows for setting in one go, no loops needed, like the C++ call. Aaron Carlisle on our team here at Epic had noticed the same thing. Aaron took the time to write up a post going over it here:

Using GetWeights and SetWeights in the Maya Python API

Also, it looks like you can get/set blind data in one go now… 😮

posted by Chris at 11:20 PM  

Monday, November 21, 2016

The Eyes


Windows to the Soul

In CG, the eyes are unforgiving. For thousands of generations we have had to decipher the true intent of other human beings from their eyes; because of this, we have evolved to notice the most minute, 1mm shift in eye shape. If there is one part of a digital character that is among the most difficult to simulate and render, it is this. Let's talk about it.

Talking About the Eyes


First, I am going to go over some eye terminology. There’s a lot out there, but I am only going to go over what is required to give meaningful feedback in dailies. :D  You don’t need to be an ophthalmologist to discuss why a character’s eyes don’t look right.

The Iris – This is the round, colored circle in the eye, it encompasses the pupil, but it is not the pupil.
The Pupil – The black dot, or the hole in the iris.
The Sclera – The ‘whites of your eyes’
The Cornea – This is the bulge over the iris

Vergence – The eyes converge, or turn inward, to aim at the object a person is gazing at. They diverge, or turn outward, when tracking something that is receding; if they diverge past parallel, you can say the person is 'walleyed' (see: strabismus).

KEY TAKEAWAY – When viewing and reviewing eyes, it’s important to notice how the lids break across the iris and the shape of the negative space the lids and the iris create in the sclera. When we look at someone we are mainly seeing this shape. For more advanced readers, I picked the image above because you can really see the characteristics of the wetness meniscus, and eyelash refraction, but we’ll talk about that later.

Eye Placement

Initial eye placement is very important. Eyes that are too large or placed improperly will not rotate accurately when set in the face. There's a lot you can learn just by looking at yourself in a mirror, looking at a friend, or scouring YouTube. There's even more you can learn from reading forensic facial reconstruction textbooks!

There are a few books for forensic artists, and most have information about placing eyes for facial reconstruction. In my previous post about the jaw, I talked about forensic facial reconstruction a bit; these are my go-to books when it comes to eye (and teeth) placement:

Face It: A Visual Reference for Multi-ethnic Facial Modeling
Forensic Art and Illustration
Forensic Analysis of the Skull
Facial Geometry

I use a Mary Kay Travel Mirror and have bought them for all riggers/animators on my teams, at six bucks you can’t go wrong.

If you were to draw a line from the upper and lower orbits of the eye socket, it would be in line with the back of the cornea. I don’t usually like to point to skeletal reference, but in this case the orbits are relatively bony parts of the face that can be seen in surface anatomy.


Here are some good anatomical images for centering the eye in the socket (click to enlarge):


And in practice: here's eye placement from Marius, the hero character in the video game Ryse. Notice that the eyeball doesn't even cover the entire ocular opening when viewed front-on in wireframe, just as in the anatomical images above.

Marius, Ryse 2011, Crytek

Abdenour Bachir, Ryse 2011, Crytek (click to enlarge)


Lastly, here is a GIF I made from a video tutorial called ‘How to Paint the Human Eye‘ by Cat Reyto. It shows eye placement rather well.

So this has all been discussing eye placement in the socket, but what about eye socket placement in the face?  This is where proportions come into play. The books above, “A Visual Reference to Multi-Ethnic Facial Modeling” and “Facial Geometry” are very good at discussing facial proportions and showing you how those proportions can change based on ethnicity. Here is a page from “Facial Geometry” that shows the basic facial proportions, most importantly pupil and eye placement relative to the rest of the face:


Here's a page from the other book, which actually discusses 3D modeling. This is the chapter on eye placement; anyone interested in facial modeling should pick this up, it's a great full-color reference:


In my post about the jaw, I discussed jaw placement relative to the eyes and pupils; you can do the inverse and check eye placement relative to teeth that you feel happy with. Notice that there's a correlation between the molars, or the width of the upper teeth, and the pupils.


Sometimes art direction would like 'larger' eyes. This is often attempted by making the eyeball larger, but then it can feel weird when the eye is too large for the socket, and it causes issues with the rotations of the eye. For reference, take a look at people who have been handed down a specific neanderthal gene for large eyes (or Axenfeld-Rieger syndrome), like the Ukrainian model Masha Tyelna. She has large eyeballs, but also the facial physiology to accept them:


Eye Movement


Range of Movement

Making digital humans, often from fiction, I find that these numbers are all relative, but the book Three Dimensional Rotations of the Eye has some good information, as does this chapter of another book: Physiology of the Ocular Movements. From the default pose I usually find that each eye can rotate about 40 degrees in each direction (right and left) on the horizontal plane, and about 30 degrees in each direction (up and down) on the vertical plane.
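
If you want to enforce those ranges on a rig, one minimal sketch (Maya cmds) is to put transform limits on the eye joint; the joint name is illustrative, and this assumes rotateY is the horizontal axis and rotateX the vertical, so adjust to your own orient:

import maya.cmds as cmds

cmds.transformLimits('eye_L_jnt', rotationY=(-40, 40), enableRotationY=(True, True))
cmds.transformLimits('eye_L_jnt', rotationX=(-30, 30), enableRotationX=(True, True))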

Eye ‘Accommodation’ and Convergence
For the purpose of rigging, the eyes are at maximum divergence (or parallel) at about 1.5m or 6 feet. I have not come across the exact gaze distance that results in maximum divergence; if you happen to have that information, let me know!

The eye adjusts to focus on near objects; this is known as accommodation. A standard young person's eye can gaze/focus on objects from infinity to 6.5cm from the eyes. In action, eyes cannot converge/diverge faster than about 25 degrees a second.

“Why Does My Character Look Cross-Eyed”?

So, take a look at the MRI at the top of this page. Do you notice that the eyes are not looking forward or parallel? That's because the eyes aren't parallel even when gazing at an infinite distance. Human eyes bow out a few degrees; the amount varies between 4 and 6 degrees per eye. The amount an eyeball is bowed outward is called the 'kappa angle'; check the diagram below:


Let’s go over some more terminology:

Pupillary Axis – A line drawn straight out the pupil.
Visual Axis – A line from the fovea to the fixation target or the item being gazed upon.
Angle Kappa – the angle between the pupillary and visual axis.

The Hirschberg Test

The way ophthalmologists determine whether eyes are converging properly is with the Hirschberg test. It's a simple test where they ask a child to look at a teddy bear while they shine a penlight into their eye; they look at how this light source reflects off the cornea, and from that they can tell if the child has a lazy eye or an issue converging his eyes on a point.


Doing a 'CG Hirschberg' test requires a renderer with accurate reflections. You place a point light directly behind the camera and have the rig fixate/gaze directly onto the camera.

For more information on Angle Kappa, as well as data on the average angle across groups of people, check out Pablo Artal’s blog posts.

KEY TAKEAWAY

Even though the eyes bow out a little bit, for all intents and purposes, as you see in the Hirschberg test, the pupil still seems like it is looking at the person. So yes, the pupillary axis is off by a few degrees, but with the refraction it's not too noticeable (<1mm reflection offset from the center of the cornea). An Angle Kappa offset should be built into your rigs; you should not have the pupillary axis converging on the fixation target. This is what causes CG characters to look cross-eyed when viewed with scrutiny.
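
As a rough illustration of building that offset in (a sketch of mine, not the production setup): aim the eye at the gaze target as usual, then add a few degrees of outward yaw on top of the solve, so it is the visual axis rather than the pupillary axis that lands on the target.

#minimal sketch (Maya cmds); node names, axes, and the 5 degree value are illustrative
#the post cites a kappa angle of roughly 4-6 degrees outward per eye; flip the sign for the other eye
import maya.cmds as cmds

KAPPA_DEG = 5.0

cmds.aimConstraint('gaze_target', 'eye_L_jnt',
                   aimVector=(0, 0, 1), upVector=(0, 1, 0), worldUpType='scene',
                   offset=(0, KAPPA_DEG, 0))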

“Why Don’t My Eyes Feel Real?”


Hanno Hagedorn, Crysis 2005, Crytek

A difficult issue with eyes in computer graphics is making them feel like they are really set in the face.

The eye has a very interesting soft transition where the sclera meets the lids.  Some of this is ambient occlusion and shadows from the brow and lids, and some of it is from reflection of the upper eyelashes.

In 2005, Hanno Hagedorn and I were working on Crysis, and we were having an issue with eyes that I called 'game eye': the sclera has a sharp contrast with the skin. At the time, there was no obvious way to solve this in realtime.

We (I still credit Hanno) solved this in a very pragmatic way, the following is from one of our slides at GDC 2007:


We created an ‘eye overlay’, a thin film that sat on the eye and deformed with the fleshy eye deformation. A lot of games today, including Paragon, my current project at Epic, use this technique. Many years later, a famous VFX company actually tried to patent the technique.

Here are those same meshes, six years later on Ryse: Son of Rome:


Wetness Meniscus

One thing added since Crysis is the tearline, or wetness meniscus. This is very important; its purpose is to kick up small spec highlights and fake the area where the lid meets the sclera. You can see this line in the photo of the female eye where I outlined the initial anatomical terms.


Eye Rendering

None of these eye parts are easy to shade correctly. Nicolas Schulz describes how Crytek implemented a forward pass in their deferred renderer to deal with eyes in his 2014 paper The Rendering Technology of Ryse. Nicolas has two slides dedicated to eye shading:


The eye shader used on Ryse is documented in depth here [docs.cryengine.com], and the modified HLSL CFX shader code is here [github]. Some important features of the shader include:

  • Cornea refraction and scattering – this is important, it simulates the refraction of the liquid in the cornea
  • Iris color, depth, self shadowing, and SSS – Since there is no physical iris, we need to fake that there is a physical form there using a displacement map (POM)
  • Eye occlusion overlay depth bias – this is very important, it allows you to push/pull the depth of the overlay film, so that the eye doesn’t penetrate through it
  • Sclera SSS – not to be overlooked, or else the eye looks like a golfball.


These were the images that fed into the shader features above, and below are the overlay spec and AO masks, and the spec texture for the wetness/tearline.


This is an example of the iris displacement map ingame seen with a debug render view (Xbox One, 2011)

Abdenour Bachir, Ryse, Crytek 2011

Does the eye really need a corneal bulge?

On Ryse and Crysis, we had spherical eyes and faked the corneal bulge with a shader and fleshy eye deformation. A corneal bulge means that you really have to have your shit together when it comes to the eye deformation. All the layers mentioned above have to deform with the bulge, and it can be a lot to manage across all the deformation contexts of the eye (blink directionals, squint directionals, etc.). On Paragon, we have non-spherical eyes, and it's very challenging to deform the eyes properly in the context of a 60hz e-sports title with only one joint per lid. It's actually pretty much impossible.

Altogether

Below you see the eyes in Ryse: Son of Rome; they feel set in the face and consistent with the world and hyper-real style of the game (in-game mesh and rig).

Abdenour Bachir, Ryse 2012, Crytek

Abdenour Bachir, Ryse 2012, Crytek

Scanning/Acquiring Eyes


When scanning a character’s head it’s important to get their eyes fixed at a gaze distance you have recorded. I also use the scan to see how the iris breaks across the lid and infer information about how the eye will be set in the face and skull.

  • If you want to take your own eye texture reference, shoot through a ring or attach a light co-axial with the camera lens; try to get the highlight in the pupil, as you will discard this part of the image anyway.
  • Because the eye is shiny and its characteristics change as you move around it, photogrammetry often falls flat.
  • Because the eye is a complex translucent lens, cutting spec with cross-polarization often also falls flat. Light loses its polarization when bouncing around in there.

When it comes to scanning the eyes themselves, Disney Research has published an interesting paper on the High Quality Capture of Eyes.

How You Review Eyes

First off you should check eyeball depth and placement, as shown above.

  • You should be reviewing eyes with at least the FOV of a portrait lens: 80mm or a 25 degree FOV. When getting ‘all up in there’ I often use a 10 degree FOV.
  • Try to review them at the distance you will see them in your shipping product.
  • If you are embodying the person that is the fixation point of the digital human, you really need runtime look IK (for the eyes, but preferably feathered torso>head>neck>eyes). From some distance, you can tell if someone is looking at your eyes or your ear. Think about that. The slightest anim compression or issue in any joint from root to eyes can cause the gaze to be off a few degrees, and that's all it takes.
  • If you're doing a lot of work, build a debug view into your software that draws the pupillary axis and visual axis, all the way to the fixation/gaze point.
posted by Chris at 2:01 AM  

Monday, September 26, 2016

Save/Load SkinWeights 125x Faster

DISCLAIMER/WARNING: Trying to implement this in production I have found other items on top of the massive list that do not work. The ignore names flag ‘ig’ causes a hard crash on file load, the weight precision flag ‘wp’ isn’t implemented though it’s documented, the weight tolerance flag ‘wt’ causes files to hang indefinitely on load. When it does load weights properly, it often does so in a way that crashes the Paint Skin Weights tool. I have reported this in the Maya Beta forums.

Previously I discussed the promise of the Maya command 'deformerWeights'. The tool that ships with Maya was not very useful, but the code it calls was 125 times faster than Python if you used it correctly.

Let’s make a python class that can save and load skin weights. You hand it a few hundred skinned meshes (avg Paragon character) and it saves the weights and then you delete history on the meshes, and it loads the weights back on.  What I just described is the process riggers go through when updating a rig or a mesh _every day_.

Below we begin the class: we import a Python module to parse XML, and we say "if the user passed in a path, let's parse it."

#we import an xml parser that ships with python
import xml.etree.ElementTree
#maya commands, mel, and time are used by the methods shown further down
import time
import maya.cmds as cmds
import maya.mel as mel
 
#this will be our class, which can take the path to a file on disk
class SkinDeformerWeights(object):
    def __init__(self, path=None):
        self.path = path
        #store one SkinnedShape per shape name as we parse
        self.shapes = {}
 
        if self.path:
            self.parseFile(self.path)

Next, let’s make this parseFile function. Why is parsing the file important? In the last post we found out that there’s a bug that doesn’t appropriately apply saved weights unless you have a skinCluster with the *exact* same joints as were exported. We’re going to read the file and make a skinCluster that works.

#the function takes a path to the file we want to parse
def parseFile(self, path):
    root = xml.etree.ElementTree.parse(path).getroot()
 
    #set the header info
    for atype in root.findall('headerInfo'):
        self.fileName = atype.get('fileName')
 
    for atype in root.findall('weights'):
        jnt = atype.get('source')
        shape = atype.get('shape')
        clusterName = atype.get('deformer')

Now we're getting some data. We know that the format can save deformers for multiple shapes, so let's make a shape class to store them.

class SkinnedShape(object):
    def __init__(self, joints=None, shape=None, skin=None, verts=None):
        self.joints = joints
        self.shape = shape
        self.skin = skin
        self.verts = verts

Let's use that when we parse the file, storing the data we parsed in our new shape class:

#the function takes a path to the file we want to parse
def parseFile(self, path):
    root = xml.etree.ElementTree.parse(path).getroot()
 
    #set the header info
    for atype in root.findall('headerInfo'):
        self.fileName = atype.get('fileName')
 
    for atype in root.findall('weights'):
        jnt = atype.get('source')
        shape = atype.get('shape')
        clusterName = atype.get('deformer')
 
        if shape not in self.shapes.keys():
            self.shapes[shape] = SkinnedShape(shape=shape, skin=clusterName, joints=[jnt])
        else:
            s = self.shapes[shape]
            s.joints.append(jnt)

So now we have a dictionary of our shape classes, and each knows the shape, cluster name, and all influences. This is important because, if you read the previous post, the weights will only load onto a skinCluster with the exact same number and names of joints.
Now we write a method to apply the weight info we parsed:

def applyWeightInfo(self):
    for shape in self.shapes:
        #make a skincluster using the joints
        if cmds.objExists(shape):
            ss = self.shapes[shape]
            skinList = ss.joints
            skinList.append(shape)
            cmds.select(cl=1)
            cmds.select(skinList)
            cluster = cmds.skinCluster(name=ss.skin, tsb=1)
            fname = self.path.split('\\')[-1]
            dir = self.path.replace(fname,'')
            cmds.deformerWeights(fname , path = dir, deformer=ss.skin, im=1)

And there you go. Let’s also write a method to export/save the skinWeights from a list of meshes so we never have to use the Export DeformerWeights tool:

def saveWeightInfo(self, fpath, meshes, all=True):
    t1 = time.time()
 
    #get skin clusters
    meshDict = {}
    for mesh in meshes:
        sc = mel.eval('findRelatedSkinCluster '+mesh)
        #not using shape atm, mesh instead
        msh =  cmds.listRelatives(mesh, shapes=1)
        if sc != '':
            meshDict[sc] = mesh
        else:
            cmds.warning('>>>saveWeightInfo: ' + mesh + ' is not connected to a skinCluster!')
    fname = fpath.split('\\')[-1]
    dir = fpath.replace(fname,'')
 
    for skin in meshDict:
        cmds.deformerWeights(meshDict[skin] + '.skinWeights', path=dir, ex=1, deformer=skin)
 
    elapsed = time.time()-t1
    print 'Exported skinWeights for', len(meshes), 'meshes in', elapsed, 'seconds.'

You give this a folder and it’ll dump one file per skinCluster into that folder.
Here is the final class we’ve created [deformerWeights.py], and let’s give it a test run.

sdw = SkinDeformerWeights()
sdw.saveWeightInfo('e:\\gadget\\', cmds.ls(sl=1))
>>>Exported skinWeights for 214 meshes in 2.433 seconds.

Let’s now load them back, we will iterate through the files in the directory and parse each, applying the weights:

import os
t1=time.time()
path = "e:\\gadget\\"
files = 0
for file in os.listdir(path):
    if file.endswith(".skinWeights"):
        fpath = path + file
        sdw = SkinDeformerWeights(path=fpath)
        sdw.applyWeightInfo()
        files += 1
elapsed = time.time() - t1
print 'Loaded skinWeights for', files, 'meshes in', elapsed, 'seconds.'
>>> Loaded skinWeights for 214 meshes in 8.432 seconds.


So that’s a simple 50 line wrapper to save and load skinWeights using the deformerWeights command. No longer do we need to write C++ API plugins to save/load weights quickly.

posted by Chris at 1:27 AM  

Saturday, September 24, 2016

DeformerWeights Command, Cloaked Savior?


This post was originally going to be entitled ‘Why Everyone Writes their Own Skin Exporter’. Maya’s smooth skin weight export tool hasn’t changed since Maya 4.0 when it was introduced 15 years ago. It saves out greyscale weightmaps in UV space, one image per joint influence per mesh. The only update they have done in 15 years is change the slider to go from a max of 1024 pixels to 4096, and now 8192!

Autodesk understands the need and importance of a skin weight exporter, they even ship it as a C++ example in their Maya Developer Kit / SDK. So why are they still shipping this abomination above?


Like the blind data discussed before, in Python-land we cannot set skin weights in one go; we must iterate through the entire mesh, setting skin weights one vertex at a time. This means a Maya feature or C++ plugin can save and load skin weights in seconds where Python takes 15 minutes.
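
For context, the slow Python path looks something like this (a sketch with illustrative names, not code from the original post):

import maya.cmds as cmds

#setting weights one vertex at a time via skinPercent: simple, but painfully slow on dense meshes
for i in range(cmds.polyEvaluate('myMesh', vertex=True)):
    cmds.skinPercent('skinCluster1', 'myMesh.vtx[%d]' % i,
                     transformValue=[('joint1', 0.5), ('joint2', 0.5)])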

I have written lots of skin weight save/load crap in my time, and as I sat down and started to do that very thing in an instructional blog post one night, we had an interesting discussion in the office the next day. ADSK added ‘Export Deformer Weights‘ in 2011, but it has never really worked. I don’t know a single person who uses it. But it does save and load ‘deformer’ weights via the C++ API –so there’s real promise here!

 


Save/load skin weights fast without writing a custom Maya plugin? This is kinda like the holy grail, which is ridiculous, but you should see the lengths people go to in order to eke out a little more performance! My personal favorite was MacaroniKazoo back in 2010, reaching in and setting the skinCluster weight list directly by hand using an unholy conglomeration of Python API and MEL commands. Tyler Thornock has a post that builds on this here.

So I asked the guys if anyone had looked at Export Deformer Weights recently; everyone either hadn't heard of it, had heard it was shit, or had some real first-hand experience of it being "a bit shit." But still, the promise was there!

Export Deformer Weights: Broken and Backwards

So, first things first, I made an awesome test case. I am going to go over all the gotchas and things that are broken, but if you don't want to take this voyage of discovery, skip this section.


I open the deformer weight export tool, and just wow... I mean, the UI team really likes its space:


I save my skin weights to an XML file, delete history on the mesh, and open the import UI:


Gotcha 1) It requires a deformer to load weights onto. You need to re-skin your mesh.

I re-skinned my mesh and loaded the XML using Index.. drumroll..


Well, this is definitely not applying the weights back by vertex index. I decided to try ‘Nearest’:


Gotcha 2) Of the options [Index, Nearest, Over], 'Index' is somehow lossy, anything other than 'Index' seems to crash often, and 'Nearest' seems totally borked (above).

So this was when I just began to think this was a complete waste of my time. I was pretty annoyed that they even shipped a tool like this, something that is so needed and so important, yet crashes frequently and completely trashes your data when it does work.


Not Taking ‘Broken and Backwards’ for an Answer

I was already invested and, not understanding how loading weights by point index could be lossy and broken, I decided to look at the XML file. The tool writes out one XML file per skinCluster; here's a rundown of the file format:

Mesh Info (Shape) – the vertices of the shape are stored in local space as x, y, z values with a corresponding index

<?xml version="1.0"?>
<deformerWeight>
  <headerInfo fileName="C:/Users/chris.evans/Desktop/test_sphere.xml" worldMatrix="1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 "/>
  <shape name="pSphereShape1" group="7" stride="3" size="382" max="382">
    <point index="0" value=" 0.148778 -0.987688 -0.048341"/>
    <point index="1" value=" 0.126558 -0.987688 -0.091950"/>
    <point index="2" value=" 0.091950 -0.987688 -0.126558"/>
    ...

Joints (Weights) – There is one block per joint that calls out each vertex it has influence over on the shape

  <weights deformer="skinCluster1" source="root" shape="pSphereShape1" layer="0" defaultValue="0.000" size="201" max="380">
    <point index="0" value="0.503"/>
    <point index="1" value="0.503"/>
    <point index="2" value="0.503"/>
    ...

And that’s it, not a lot of data, nothing about the skinCluster attributes or options, no support for spaces like UV or world. (odd, since it’s had UV support for 15 years)

Next, I decided to run the tool again and see what command it was calling. I then looked up the command documentation, and here's where it gets interesting. Go ahead, take a look!


So now I am hooked; someone is putting some thought into this, at least on some level.


I don’t at all understand why the UI has none of these options, but I need to get this working. If you read through the docs, the command also supports:

  • Exporting multiple skinClusters/shapes/deformers per XML file
  • Exporting skinCluster/deformer attributes like ‘skinningMethod’ and ‘envelope’
  • Local and world space positions with a positional tolerance

“Someone is putting some thought into this”

So I started trying to figure out why a file format that explicitly knows every influence of every vertex by index and influence name doesn't load weights properly. After some trials I hit gotcha #3:

Gotcha 3) The weights only load properly if the skinCluster has the *exact* same influences it was saved with. Which really makes no sense, because the file format has the name of every joint in the old skinCluster.

So now I had it working, time to wrap it and make it useful.

The Documentation is a Lie.

So, first things first, I did a speed test.

Importing with deformerWeights was about 125 times faster: Gold mine.

But I just couldn't get some of the flags to work. I thought I was just a moron, until I finally tried the code example in the ADSK Maya documentation, which FAILS. Let's first look at the -vc flag, which is required to load using 'bilinear' or 'barycentric' mapping/extrapolation:

cmds.deformerWeights ("testWeights.xml", ex=True, vc=True, deformer="skinCluster1")
 
# Error: Invalid flag 'vc'
# Traceback (most recent call last):
#   File "<maya console>", line 1, in <module>
# TypeError: Invalid flag 'vc' #

Gotcha 4) The python examples do not work. The -vertexConnections flag doesn't work, and the -attribute flag doesn't work, so there is no saving skinCluster metadata like 'skinningMethod', etc. Because of that, 'barycentric' and other methods that need vertex connection info do not work. The 'deformer' flag shows that it takes a list of deformers and writes them all to one file, but this is not true; it takes a single string name of a deformer.

I now know why the UI doesn't have all these cool options: they don't work!

Gotcha 5) It doesn't take a full file path; to save a file to a specific location you need to specify the filename, and then the path separately.

cmds.deformerWeights ("testWeights.xml", path='d:\\myWeight\\export\\folder\\', ex=True, deformer="skinCluster1")
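
For completeness, the matching import call uses the same filename-plus-path split (a minimal sketch; the same caveats about broken flags apply):

cmds.deformerWeights ("testWeights.xml", path='d:\\myWeight\\export\\folder\\', im=True, deformer="skinCluster1")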

Perhaps someone fixed this stuff, documented it, and then reverted the fix, but this has been around since 2011. I tried the above in the latest Maya 2016 service pack, and all my links above are to that version of the documentation.

I wasn’t really intending to write this much, so now that we know this can import weights 125 times faster, we’ll make a tool to utilize it. Stay tuned!

posted by Chris at 2:15 AM  

Wednesday, August 31, 2016

Calibrating the Alienware 13 R2 OLED Laptop

Last month Dell had the Black Friday in July sale and this beauty was on sale for 500 dollars off, plus 10% if you ordered by phone. I decided it was time to replace my beloved Lenovo x220t.

The Alienware 13 might be ugly and lack a Wacom digitizer, but it does have an nVidia GTX 965M and an OLED screen with 211% sRGB coverage! As the Lenovo Yoga X1 only has integrated graphics, I think the Alienware is the machine for 3D artists.

If you're a gamer who landed here because you wondered how to get the most out of your amazing display, or wondered why everything is super-red-pink, it's time to put your big boy pants on! Calibrating the monitor wasn't so straightforward, but let's jump into it.


We are going to use an open source Color Management toolkit called ArgyllCMS [Download it here]. It can use many different hardware calibration devices, I have used it with the xRite Huey and the Spyder5.

One thing that's important to know is that all the sensors are the same; you only pay for software features. If you don't own a calibrator, you can buy the cheapest Spyder, because it's the same sensor and you are using this software, not the OEM software.


Next we’re going to use a GUI front end built to make ArgyllCMS more user friendly. It’s called DisplayCAL, but it requires a lot of libs (numPy, wxWidgets, etc) so I recommend downloading this zero install that has everything.


Be sure to set the ‘White level drift compensation’ on. You will need to ignore the RGB adjustment it first asks you to fuss with because there is no RGB adjustment on your monitor.

When you are through, you will see the following (or something like it!):


Note: DisplayCAL can also output a 3d LUT for madVR, which works with any number of video playback apps. Check here to force your browser to use your color management profile. If it’s useful, I can make another post detailing all the different settings and color-managed apps to make use of your monitor.

I hope you found this useful, it should apply to the Lenovo Yoga X1 and any other OLED laptops in the coming months.

posted by Chris at 9:23 PM  

Saturday, May 7, 2016

Simple Runtime Rigging in UE4

This is something that I found when cleaning my old computer out at work this week after an upgrade. This is an example of using a look at controller and a blendspace to drive a fleshy eye deformation in UE4 live at runtime.

This is the technique Jeremy Ernst used on Smaug’s eye in the demo we did with Weta over a year ago. Create a 2d blendspace for the eye poses. Export out left, right, up, down poses and connect them:


Grab the local transform from the eye, and set two vars with it:


Create a Look At Controller and integrate the blendspace:


posted by Chris at 7:40 AM  

Friday, October 17, 2014

Embedding Icons and Images In Python with XPM


As technically-inclined artists, we often like to create polished UIs, but we have to balance this with not wanting to complicate the user experience (fewer files the better). I tend to not use too many icons with my tools, and my Maya tools often just steal existing icons from Maya: because I know they will be there.

However, you can embed bitmap icons into your PySide/PyQt apps by using the XPM file format and storing it as a string array. Often I will just place images at the bottom of the file; this keeps icons inside your actual tool, and you don't need to distribute multiple files or link to external resources.

Here’s an example XPM file:

/* XPM */
static char *heart_xpm[] = {
/* width height num_colors chars_per_pixel */
"7 6 2 1",
/* colors */
"` c None",
". c #e2385a",
/* pixels */
"`..`..`",
".......",
".......",
"`.....`",
"``...``",
"```.```"
};

This is a small heart, and you can actually see it: the header shows that '.' maps to pink, and you can make out the '.' pattern of a heart in the pixel rows. The XPM format is C-like; the comments tell you what each block does.
Here's an example in PySide that generates a button with the heart icon shown above:

import sys
from PySide import QtGui, QtCore
 
def main():
    app = QtGui.QApplication(sys.argv)
    heartXPM = ['7 6 2 1','N c None','. c #e2385a','N..N..N',\
    '.......','.......','N.....N','NN...NN','NNN.NNN']
    w = QtGui.QWidget()
    w.setWindowTitle('XPM Test')
    w.button = QtGui.QPushButton('XPM Bitmaps', w)
    w.button.setIcon(QtGui.QIcon(QtGui.QPixmap(heartXPM)))
    w.button.setIconSize(QtCore.QSize(24,24))
    w.show()
 
    sys.exit(app.exec_())
 
if __name__ == '__main__':
    main()

You need to feed Qt a string array with everything else stripped out. GIMP can save XPM, but you can also load an image into XnView and save it as XPM.

Addendum: Robert pointed out in the comments that pyrcc4, a tool that ships with PyQt4, can compile .qrc files into python files that can be imported. I haven’t tried, but if they can be imported, and are .py files, I guess they can be embedded as well. He also mentioned base64 encoding bitmap images into strings and parsing them. Both these solutions could really make your files much larger than XPM though.
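
For reference, here is a minimal sketch of the base64 approach Robert mentioned (the encoded string and names are placeholders, not a real image):

from PySide import QtCore, QtGui

#generated once offline and pasted into your script, e.g.:
#import base64; icon_b64 = base64.b64encode(open('icon.png', 'rb').read())
icon_b64 = '...'  #placeholder, not a real image

pixmap = QtGui.QPixmap()
pixmap.loadFromData(QtCore.QByteArray.fromBase64(icon_b64))
icon = QtGui.QIcon(pixmap)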

posted by Chris at 3:23 PM  

Thursday, October 16, 2014

Tracing and Visualizing Driven-Key Relationships


Before I get into the colossal mess that is setting driven keys in Maya, let me start off by saying: when I first made an 'SDK' in college, back in 1999, never did I think I would still be rigging like this 15 years later. (Nor did I know that I was setting a 'driven key', or a 'DK', which sounds less glamorous.)

How The Mess is Made


Grab this sample scene [driven_test]. In the file, a single locator with two attributes drives the translation and rotation of two other locators. It can't get much simpler than that, but look at this spaghetti mess above! This simple driven relationship created 24 curves, 12 blendWeighted nodes, and 18 unitConversion nodes. So let's start to take apart this mess. When you set a driven-key relationship, Maya uses an input and a curve to drive another attribute:


When you have multiple attributes driving a node, Maya creates 'blendWeighted' nodes; these blend the driven inputs into one scalar output, as you see with translateX below:


Blending scalars is fairly straightforward, but for rotations, get ready for this craziness: a blendWeighted node cannot take the output of an animCurveUA (angle output), so the value must first be converted to radians, then blended. But the final node cannot take radians, so the result must be converted back to an euler angle. This happens for each channel.


If you think this is ridiculous, welcome to the club. It is a very cool usage of general-purpose nodes in Maya, but you wouldn't think so much of rigging was built on that, would you? When you create a driven key, Maya basically makes a bunch of nodes and doesn't track an actual relationship; because of this, you can't even reload a relationship into the SDK tool later to edit it (unless you traverse the spaghetti or take notes or something).

I am in love with the Node Editor, but by default the Hypergraph did hide some of the spaghetti; it used to hide unitConversions as 'auxiliary nodes':


Node Editor shows unitConversions regardless of whether aux nodes are enabled, I would rather see too much and know what’s going on than have things hidden, but maybe that’s just me. You can actually define what nodes are considered aux nodes and whether ‘editors’ show them, but I am way off on a tangent here.

So just go in there, delete the unitConversion yourself, and wire the euler angle output, which you would think is a float, into the blendWeighted input, which takes floats. Maya won't let you; it creates the unitConversion for you because animCurveUA outputs angles, not floats.

This is why our very simple example file generated over 50 nodes. On Ryse, Marius' face has about 73,000 nodes of driven-key spaghetti (47,382 curves, 1,420 blendWeighted, 24,074 unitConversion).

Finding and traversing

So how do we find this stuff and maybe query and deal with it? Does Maya have some built-in way of querying a driven relationship? No, not that I have ever found. You must become one with the spaghetti! setDrivenKeyframe has a query mode, but that only tells you what 'driver' or 'driven' nodes/attrs are in the SDK window if it's open; it doesn't query driver or driven relationships!

We know that these intermediate nodes that store the keys are curves, but they’re a special kind of curve, one that doesn’t take time as an input. Here’s how to search for those:

print len(cmds.ls(type=("animCurveUL","animCurveUU","animCurveUA","animCurveUT")))

So what are these nodes? I mentioned above that animCurveUA puts out an angle:

  • animCurveUU – curve that takes a double precision float and has a double output
  • animCurveUA – takes a double and outputs an angle
  • animCurveUL – takes a double and outputs a distance (length)
  • animCurveUT – takes a double and outputs a time
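
Before building anything bigger, you can sanity-check a single relationship by hand; a minimal sketch (the incoming 'input' plug of a driven-key curve is the driver attribute):

import maya.cmds as cmds

for curve in cmds.ls(type=("animCurveUL","animCurveUU","animCurveUA","animCurveUT")):
    driver = cmds.listConnections(curve + '.input', plugs=True, source=True, destination=False)
    if driver:
        print curve, 'is driven by', driver[0]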

When working with lots of driven-key relationships you frequently want to know what is driving what, and this can be very difficult because of all the intermediate node-spaghetti. Here’s what you often want to know:

  • What attribute is driving what – for instance, select all nodes an attr drives, so that you can add them back to the SDK tool. ugh.
  • What is being driven by an attribute

I wrote a small script/hack to query these relationships; you can grab it here [drivenKeyVisualizer]. Seriously, this is just a UI front end to a 100-line script snippet; don't let the awesomeness of Qt fool you.


The way I decided to go about it was:

  1. Find the driven-key curves
  2. Create a tiny curve class to store little ‘sdk’ objects
  3. List incoming connections (listConnections) to get the driving attr
  4. Get the future DG history as a list and reverse it (listHistory(future=1).reverse())
  5. Walk the reversed history until I hit a unitConversion or blendWeighted node
  6. Get its outgoing connection (listConnections) to find the plug that it's driving
  7. Store all this as my sdk objects
  8. Loop across all my objects and generate the QTreeWidget

Here’s how I traversed that future history (in the file above):

 #search down the dag for all future nodes
 futureNodes = [node for node in cmds.listHistory(c, future=1, ac=1)]
 #reverse the list order so that you get farthest first
 futureNodes.reverse()
 drivenAttr = None
 
 #walk the list until you find either a unitConversion, blendWeighted, or nothing
 for node in futureNodes:
     if cmds.nodeType(node) == 'unitConversion':
         try:
             drivenAttr = cmds.listConnections(node + '.output', p=1)[0]
             break
         except:
             cmds.warning('unitConversion node with no output: ' + node)
             break
     elif cmds.nodeType(node) == 'blendWeighted':
         try:
             drivenAttr = cmds.listConnections(node + '.output', p=1)[0]
             break
         except:
             cmds.warning('blendWeighted node with no output: ' + node)
             break
 if not drivenAttr:
     drivenAttr = cmds.listConnections(c + '.output', p=1)[0]
 if drivenAttr:
     dk.drivenAttr = drivenAttr
 else:
     cmds.warning('No driven attr found for ' + c)

This of course won’t work if you have anything like ‘contextual rigging’ that checks the value of an attr and then uses it to blend between two banks of driven-keys, but because this is all general DG nodes, as soon as you enter nodes by hand, it’s no longer really a vanilla driven-key relationship that’s been set.

If you have a better idea, let me know, this above is just a way I have done it that has been robust, but again, I mainly drive transforms.

 What can you do?

Prune Unused Pasta


Pruned version of the driven_test.ma DAG

By definition, when rigging something with many driven transforms like a face, you are creating driven-key relationships based on what you MIGHT need. This goes for when making the initial relationship, or in the future when you maybe want to add detail. WILL I NEED to maybe translate an eyelid xform to get a driven pose I want.. MAYBE.. so you find yourself keying rot/trans on *EVERYTHING*. That’s what I did in my example, and the way the Maya SDK tool works, you can’t really choose which attrs per driven node you want to drive, so best to ‘go hunting with a shotgun’ as we say. (shoot all the trees and hope something falls out)

Ok so let’s write some code to identify and prune driven relationships we didn’t use.

CAUTION: I would only ever do this in a ‘publish’ step, where you take your final rig and delete crap to make it faster (or break it) for animators. Also, I have never used this exact code below in production, I just created it while making this blog post as an example. I have run it on some of our production rigs and haven’t noticed anything terrible, but I also haven’t really looked. 😀

def deleteUnusedDriverCurves():
    for driverCurve in cmds.ls(type=("animCurveUL","animCurveUU","animCurveUA","animCurveUT")):
        #delete unused driven curves
        if not [item for item in cmds.keyframe(driverCurve, valueChange=1, q=1) if abs(item) > 0.0]:
            cmds.delete(driverCurve)
 
deleteUnusedDriverCurves()

This deletes any curves that do not have a change in value. You could have multiple keys, but if the curve never changes the output, let's get rid of it. Now that we have deleted some curves, we have blendWeighted nodes that aren't blending anything anymore and unitConversions that are worthless creatures. Let's take care of them:

def deleteUnusedBlendNodes():
    #rewire blend nodes that aren't blending anything
    for blendNode in cmds.ls(type='blendWeighted'):
        if len(cmds.listConnections(blendNode, destination=0)) == 1:
            curveOutput = None
            drivenInput = None
 
            #leap over the unitConversion if it exists and find the driving curve
            curveOutput = cmds.listConnections(blendNode, destination=0, plugs=1, skipConversionNodes=1)[0]
            #leap over the downstream unitConversion if it exists
            conns = cmds.listConnections(blendNode, source=0, plugs=1, skipConversionNodes=1)
            for conn in conns:
                if cmds.nodeType(conn.split('.')[0]) == 'hyperLayout': conns.pop(conns.index(conn))
            if len(conns) == 1:
                drivenInput = conns[0]
            else:
                cmds.warning('BlendWeighted had more than two outputs? Node: ' + blendNode)
 
            #connect the values, delete the blendWeighted
            cmds.connectAttr(curveOutput, drivenInput, force=1)
            cmds.delete(blendNode)
            print 'Removed', blendNode, 'and wired', curveOutput, 'to', drivenInput
 
deleteUnusedBlendNodes()

We find all blendWeighted nodes with only one input, then traverse out from them and directly connect whatever it was the node was blending, then we delete it. This is a bit tricky and I still think I may have missed something because I wrote this example at 2am, but I ran it on some rigs and haven’t seen an issue.

Here are the results running this code on an example rig:


Create a Tool To Track / Mirror / Select Driven Relationships

This is a prototype I was working on at home but shelved; I would like to pick it up again when I have some time, or would be open to tossing it on GitHub if people would like to help. It's not rocket science, it's basically what Maya should have by default: you just want to track the relationships you make, select all nodes driven by an attr, and also mirror their transforms across an axis (more geared toward driven transforms).


Write Your Own Driver Node

Many places create their own ‘driven node’ that just stores driven relationships. Judd Simantov showed a Maya node that was used on Uncharted2 to store all facial poses/driven relationships:


The benefits of making your own node are not just in DAG readability, check out the time spent evaluating all these unitConversion and blendWeighted nodes in a complex facial rig (using the new Maya 2015 EXT 1 Profiler tool) –that’s over 760ms! (click to enlarge)


Though it's not enough to just make a node like this; you also need to make a front end to manage it. Here's the front end for this node:


Give Autodesk Feedback!


As the PSD request is ‘under review’, maybe when they make the driver, they can make a more general framework to drive things in Maya.

Conclusion

As you can see, there are many ways to try to manage and contain the mess generated by creating driven-key relationships.  I would like to update this based on feedback, it’s definitely not an overview, and speaks more to people who have been creating driven-key relationships for quite some time.  Also, if you would find a tool like my Maya SDK replacement useful, let me know, especially if you would like to help iron it out and/or test it.

posted by Chris at 3:16 PM  

Tuesday, September 30, 2014

Why Storing String Refs is Dangerous and Irresponsible

Yes! Definitely a clickbait title if there ever was one; but this is important! Some of you contacted me regarding my earlier post (Don’t Use String Paths), where I said you should never use string names to keep track of anything even vaguely important in Maya.

This problem is so fundamental in Maya that the initial Python code test I came up with for the Crytek Technical Art Dept used to be a simple maya node renamer. No joke.

THE PROBLEM

From time to time I see code that attempts to reach out into the DAG and grab nodes by concatenating tokens like:

character_side + '_' + character_name + '_arm_' + UID + '_' + type

This is SUPER dangerous. You are basically guessing that an object exists and betting your whole tool or codebase on it; it's super fragile.

Let’s use the example of making a turtle, let’s call him ‘Red’, he’s red, and his shell is soft:

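#illustrative values for the tokens used below
shell_hardness = 'soft'
animal_type = 'turtle'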
cmds.joint(name = 'red' + '_' + shell_hardness + '_' + animal_type + '_hand')
cmds.select('red' + '_' + shell_hardness + '_' + animal_type + '_hand')
# Error: No object matches name: red_soft_turtle_hand
# Traceback (most recent call last):
#   File "", line 1, in 
# ValueError: No object matches name: red_soft_turtle_hand #

First up, you should *never* do this. When you are creating a node with a name you are making from scratch, you *must* first check whether that name already exists. If it exists, Maya will alter the name and you will not be able to find it; your code will immediately break, either with the above error or with this one:

cmds.select('red_soft_turtle_hand')
# Error: More than one object matches name: red_soft_turtle_hand
# Traceback (most recent call last):
#   File "", line 1, in 
# ValueError: More than one object matches name: red_soft_turtle_hand #

SLIGHTLY BETTER

So here's something a bit safer, but still not recommended in situations where this is important:

hand_jnt = cmds.joint(name = 'red' + '_' + shell_hardness + '_' + animal_type + '_hand')
print hand_jnt
# >>: red_soft_turtle_hand1

Here, we're storing the created node in a variable, so if the name is not the name we expected, we still know how to find the node. This is a lot safer and your code will continue to run, but it's also not a great way of working. Maya created our joint but called it 'red_soft_turtle_hand1'; because we stored the node returned from the creation command in a variable, we can print hand_jnt and it will return 'red_soft_turtle_hand1'.

To be super clear, this is more safe because:

  • Maya is resolving any name clashes for you automatically
  • You get the *real* name returned to you from Maya upon node creation
  • If you create a node name that exists elsewhere, it returns you a full path automatically

This is somewhat acceptable in code that’s building a rig. Wham! Bam! You created a node and then did something with it seconds later and never looked back!

STILL A PROBLEM

So let's say you're working the slightly safer way, and you're storing the long paths of nodes that Maya gives you. As soon as you store a long path, it's stale. This is what I meant above: the longer you store this information, the less reliable it is.

Let’s look at our joint, its long path is:

|root|pelvis|shell|some_other_joint|arm1|arm2|red_soft_turtle_hand

If *any* of the six parent nodes' names change, even by a single character, you're dead in the water. If the hierarchy changes: you're dead in the water.

If you're working with someone who defends the above by saying the following:

  • “Oh, the hierarchy will never change”
  • “Oh, there will never be a node with the same name”
  • “Oh, no one will ever rename any of these”

The Maya DAG is the wild west.
You WILL have duplicate node names.
You WILL have hierarchy changes.
Professional tools don’t only work with a bunch of caveats,
if it’s a valid Maya scene, your code should work with it:
BE PREPARED.

THE SOLUTIONS

There are clear ways to overcome the challenge above; they're not secrets, they are the professional ways Autodesk tells you to work in the docs, the code examples, and their own implementations.

You don’t *have* to work 100% in the API or PyMel, but like we said before, as soon as possible get a pointer to your object. Think of any string path as having a short half-life.

Using The Maya Python API

In C++, when you pass around an object, you have a pointer to its location in memory. The name and DAG path can change, but you can at any time get the current name and path. Fresh, not stale!

import maya.OpenMaya as om
m_dg = om.MFnDagNode()
m_obj = m_dg.create("joint")
m_dagPath = om.MDagPath()
 
if m_obj.hasFn(om.MFn.kDagNode):
    om.MDagPath.getAPathTo( m_obj, m_dagPath )
    print m_dagPath.fullPathName()
# >>: |joint1

Using PyMel

PyMel wraps the API; a PyMel object is a *real* Python object that stores the actual MObject, but you don't have to use the API.

from pymel.core import *
 
myJoint = joint(name='doesnt_even_matter')
print myJoint.fullPath()
# >>: |root|pelvis|shell|some_other_joint|arm1|arm2|red_soft_turtle_hand|doesnt_even_matter
#In case you never used PyMel, here's some examples of convenience functions:
myJoint.transformationMatrix()
myJoint.inputs()

Notice that PyMel is handing you back Python objects; you can ask the object for its full path at any time, and it will be fresh data.

Using Message Attrs

Message attributes are how Alias decided users would serialize relationships between nodes. I have removed this section because future-me wrote an entire post dedicated to this (The Mighty Message Attribute).
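
For the curious, a bare-bones sketch of the idea (node and attribute names are illustrative; see the linked post for the real treatment):

import maya.cmds as cmds

#serialize the relationship once...
cmds.addAttr('rig_root', longName='left_hand_ctrl', attributeType='message')
cmds.connectAttr('hand_ctrl_L.message', 'rig_root.left_hand_ctrl')

#...then resolve the node later, regardless of its current name or DAG path
print cmds.listConnections('rig_root.left_hand_ctrl')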

MAKING A PLAN

The important thing here is that you decide a consistent way of using the above. At Crytek we tried to have all core functions that accepted or passed DG nodes, do the handoff as MObjects. We serialized all important relationships with message attrs, and when there was too much data for that to be efficient, we stamped nodes with metadata.

RETROFITTING

If you have an existing pipeline where you build a lot of string references by concatenating tokens like the example above, you can make a convenience function that will validate your assumptions and deal gracefully with issues; something like my_utils.get_node('my' + '_' + 'thing').
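
A bare-bones sketch of what such a helper might look like (my_utils.get_node is hypothetical, and so is this implementation):

import maya.cmds as cmds

def get_node(name):
    """Resolve a token-built name to a single long path, or complain loudly."""
    matches = cmds.ls(name, long=True)
    if not matches:
        cmds.warning('get_node: nothing matches ' + name)
        return None
    if len(matches) > 1:
        cmds.warning('get_node: multiple nodes match ' + name + ', returning the first')
    return matches[0]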

posted by Chris at 3:28 PM  

Tuesday, August 26, 2014

Multi-Resolution Facial Rigging

At SIGGRAPH we discussed a bit about our facial pipeline that we haven’t talked about before. Namely, facial LODs and multi-platform facial rigging.

I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.

run_on_brian

Lowest Common Denominator

As the ‘next generation’ seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next-generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline that does this. Most next-gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.

When you wonder why your awesome next gen game doesn’t seem to have character models and animation like next-gen only titles, this is why.

It’s very easy to increase texture resolution with a pipeline where you author high and bake to lower targets. It’s more complicated to author meshes high and publish to lower targets; we did this on Crysis 1 and 2, where high-end PC saw higher-resolution character meshes than Xbox 360. Hardest of all is making rigs, deformers, and animations for a high-spec hardware target and creating a process to publish lower-fidelity versions. No one wants different character skeletons on each hardware platform.

facial_complexity

You Deserve an Explanation

When we released the specs of our faces, people understandably were a bit taken aback. Why on earth would you need 250 blendshapes if you have 260 joints? The image above is actually a slide from our asset creation SIGGRAPH course that resonated very well with the audience.

Let’s take a look at some goals:

  1. Cut-scene fidelity in gameplay at any time- no cut-scene rigs
  2. Up to 70 characters on screen
  3. Able to run on multiple hardware specs

The only way to achieve the first two is through a very aggressive and granular level of detail (LOD) scheme. Once that LOD system is in place, the third item comes for free, as it did on our previous titles. However, while we had LODed meshes and materials before, we had never LODed rigs.

On a feature film, we wouldn’t use joints, we would have a largely blendshape-only face.

But this doesn’t LOD well, we need to be able to strip out facial complexity in the distance and on other platforms.

Facial Level of Detail

So to achieve these goals, we must aggressively LOD our character faces.

Let’s generate some new goals:

  • Improve LOD system to allow the swapping or culling of skinned meshes per-mesh, each at hand-tailored distances per-character instance
  • Not only swap meshes, but skinning algorithms, materials, cull blendshapes, etc..
  • One skeleton – all levels of detail stored in one nested hierarchy, disable/reveal joints at different LOD levels, as I mention above, no one wants multiple skeletons
  • One animation set – drives all layers of detail because it’s one hierarchy, only the enabled joints receive animation
  • All facial animations shareable between characters
  • Faces snapped onto bodies at runtime – “Cry parent constraint” of sorts snaps head, neck, spine4, clavs, and upper arms of facial rig to body, allowing dynamic LODing of face irrespective of body.

LOD_hierarchy

One Hierarchy to Rule them All

Before going into the meshes, skinning algorithms, culling, etc.. it’s important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints, 9 LOD0, 3 LOD1, and 1 LOD2.

To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following:

  • Make sure you have three layers that can drive different facial LODs, we had something like 260/70/15 on Ryse.
  • Each layer must be driven, and able to deform that LOD alone. Meaning when creating rig logic, you must start from the highest LOD and move down the chain. The LOD0 joints above would only be responsible for skinning the details of the face at LOD0, their gross movement comes from their parent, LOD1.

Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.

jaw_drop_skel

 

Why blendshapes? Isn’t 260 joints enough?

The facial hierarchy and rig are consistent between all characters. The rig logic that drives those joints is changed and tweaked, the skinning is tweaked, but no two faces are identical. The blendshapes serve two main purposes:

1) Get the joint rig back on the scan. Whatever the delta is from the joint rig to the scan data associated with that solved pose from the headcam data, bridge it. This means the fat around Nero’s neck, the bags under his eyes, his eyebrow region, etc.

2) Add volume where it’s lost due to joint skinning. Areas like the lips and the cheeks, and rig functions like lips-together and sticky lips, require blendshapes.

nero_corectives

Look at the image above: there just aren’t enough joints in the brow to catch that micro-expression on Nero’s face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.

A Look Under the Hood: Ryse Facial LODing

Thanks to the hard work of graphics engineer Jerome Charles, we were able to granularly LOD our faces. The values below are from your buddy Vitallion who, being a hero character, could be LODed a bit less aggressively. Many of the barbarians you fight en masse blew through all their blendshapes at 2m, not 4m.

Distance  Assets / Technologies (LOD)
0-4m   CPU skinning, 8 inf, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes
4-7m   CPU skinning, 8 inf, 260 joints, 3-5k across multiple meshes with small face parts culled
7-10m  GPU skinning, 4 inf, 70 joints, 2k mesh with integrated eyes
10m+   GPU skinning, 4 inf, <10 joints, <1k mesh

 

Here’s a different table showing the face mesh parts that we culled and when:

Distance Face parts
4m Eyebrow meshes replaced, baked into facial texture
3m Eyelash geometry culled
3m Eye AO ‘overlay’ layer culled
4m Eye balls removed, replaced with baked in eyes in head mesh
2m Eye ‘water’ miniscus culled
3m Eye tearduct culled
3m Teeth swapped for built-in mesh
3m Tongue swapped for built-in mesh

Why isn’t this standard?

Because it’s very difficult and very complicated; there aren’t many people out there who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after 4 months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!

But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon..  ¬.¬

I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here’s some good moments where the performances really come through, these are all the in-game meshes and rigs:

DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.

posted by Chris at 4:40 AM  

Thursday, August 21, 2014

Adding Sublime ‘Build’ Support for KL

fabric_build_kl

I have been dabbling with KL, and the Fabric guys have some great introduction videos [here]. They have worked hard on some Sublime integration/highlighting, which you can find on GitHub [here].

While following these intro tutorials, instead of popping back and forth to your command line, you can actually compile/run your code in Sublime and see the results. To do this, go to Preferences>Browse Packages…, then open the Sublime-KL folder, and inside create a new file called ‘KL.sublime-build‘, the contents of which should be:

{
"cmd": ["kl", "$file"]
}

Then just select KL from the ‘Build System’ menu as shown above; now press CTRL+B and the results will show inside the Sublime console!

posted by Chris at 2:13 AM  

Monday, June 30, 2014

Wasted Time, Sunken Cost, and Working In a Team

sunk

YOUR APE ANCESTORS

Let’s say that you want to do something, like watch a movie. When you arrive and open your wallet to purchase a 10 dollar ticket, you notice you have lost a 10 dollar bill. In this situation, the majority of people (88%) buy a movie ticket anyway.

Let’s examine a slightly different situation: you arrive at the theater, but have misplaced your ticket. Would you go buy another? Studies show that a majority of people (54%) would not re-purchase a ticket and watch the film. The situations are financially identical, but in the first, you lost 10 dollars that wasn’t associated with the movie; in the second, you lost your ticket, 10 dollars that was specifically allotted to that task, and that loss stings more.

This is a great example of the Sunk Cost Fallacy. Kahneman and Tversky are two researchers who have spent a lot of their careers looking at loss aversion and decision theory. The bottom line is, it’s human nature that the more you invest in something, the harder it is to abandon it. As a Technical Artist, you will find yourself in a position where you are the decision-maker, don’t let your ape ancestors trick you into making a poor decision.

..since all decisions involve uncertainty about the future the human brain you use to make decisions has evolved an automatic and unconscious system for judging how to proceed when a potential for loss arises. Kahneman says organisms that placed more urgency on avoiding threats than they did on maximizing opportunities were more likely to pass on their genes. So, over time, the prospect of losses has become a more powerful motivator on your behavior than the promise of gains. Whenever possible, you try to avoid losses of any kind, and when comparing losses to gains you don’t treat them equally. – You Are Not So Smart

51809459

IN PRODUCTION

As a Technical Artist in a position to lead or direct a team, you will often be the person signing off tools or features you and your team have requested. How many times have you been in the following situation:

A feature or tool is requested. Joe, a genius ‘lone wolf’ programmer, receives the task; he is briefed and told to update the customers periodically, or to ask them in case he needs any clarification. Now, sometimes what happens is what my brother likes to call ‘The Grand Reveal’. It’s where, for whatever reason, Joe sits in his corner working hard on a task, not involving anyone, and on the last day he valiantly returns from the mountaintop and presents something that is unfortunately neither really what was requested nor what was needed.

In this situation, you get together with his Lead and point out that what was delivered is not what was requested. He will more than likely reply, “But Joe spent four weeks on this! Surely we can just use this until Joe can rework it later?”

No, you really can’t. Joe needs to know he works on a team, that people rely on his work. Nothing gets people to deliver what they are supposed to next time like being forced to redo their work. I guarantee you that next time, Joe will be at your team’s desks any time he has a question about the tool or feature he is working on. You know the needs of your project or team; it’s important that you do not compromise those because someone wasted time running off in the wrong direction or has problems working in a team.

I’m sure Joe is a really smart guy, but he’s also now four weeks behind.

 

HOW TO AVOID SINKING CASH IN WASTED EFFORT

Anything that is wasted effort represents wasted time. The best management of our time thus becomes linked inseparably with the best utilization of our efforts.
– Ted Engstrom

CREATE ‘FEATURE BRIEFS’

A Feature Brief is a one page document that serves as a contract between the person requesting a feature and the one implementing it. My Feature Briefs outline three main things:

  1. A short description of the feature or tool
  2. Its function – how does it work, what are the expected results?
  3. Its justification – why is it needed? What problem does it solve?

It’s important that work not begin until both parties agree on all terminology and requests in the feature brief – again, treat it as a contract. It’s worth mentioning that Feature Briefs aren’t always needed, but they’re a great way to make sure that goals are clearly defined, everyone’s on the same page, and there’s zero wiggle room for interpretation. Here is an example Feature Brief for the first Pose Driver we developed at Crytek.

GATED DEVELOPMENT

Work with Joe’s Lead or Manager to set up ‘gates’; it’s important that Joe gets feedback as early as possible if he’s going down the wrong track. I understand that bothering people halfway through a task may not be kosher in Agile development, but never just assume that someone will deliver what you need on the last day of a sprint.

dilbert

Break down the goal into tasks whose progress can be reviewed; it’s important that you, the primary stakeholder, are involved in signing off these gates. Any gated process is only as useful as the person signing off the work. The above comic may seem harsh, but it’s vitally important that the stakeholder is involved in reviewing work. Joe’s manager has a vested interest in Joe moving on to his next tasks; you have a vested interest in the tool or feature being what your team, the company, and whomever else needs.

Perhaps Joe will first present an outline, or maybe after taking a detailed look at the problem, Joe has a better solution he would like to pitch and you all agree to change the Feature Brief. The next gate would be to evaluate a working prototype. You probably know a lot about the feature, as you requested it – are there any gotchas, any things that just won’t work or have been overlooked? Last is usually a more polished implementation and a user interface.

check_progress

ALWAYS CHECK THE PROGRESS OF EVERYTHING

If Joe has a Lead or Manager, check with them; no need to bother Joe, that’s what the others are there for. If you ask them details about where he’s at, more often than not they will offer for you to speak with him or get you an update. It’s just important to understand that if Joe delivers something that’s not what you need, it’s your fault too. Joe is only a genius in the trenches; it’s your job to make sure that he’s not barking up the wrong tree and wasting company time. It may be tempting, but never allow these guys to shoot themselves in the foot; if you think he’s not on the right track, you need to do something about it. Even without gated development, frequently check the progress of items that are important to you. The atmosphere should be that of a relay race: you are ready to accept the baton, and it needs to be what was agreed upon or you all fail.

hh8ocms9

NEVER SETTLE FOR A HALF-BAKED TEMPORARY SOLUTION YOU CANNOT LIVE WITH

More often than not, whatever Joe did in the time he had allotted is going to be what you ship with. If you agree he will return to address the issues later, make sure that when this doesn’t happen, your team can still be successful. Nothing should be higher priority than a mistake that holds up another team. I am sure you feel this way when it’s your team: when a rig update from last week is causing all gun-holster keys to be lost on animation export, it’s important to address that before new work. The same can be said for Joe’s work; don’t make it personal. He is now behind, your guys are relying on him, and it should be high priority for him to deliver the agreed-upon work.

posted by Chris at 12:02 AM  

Sunday, May 11, 2014

Maya: Vector Math by Example

BEFORE WE BEGIN

This post is about how to use vector math and trigonometric functions in Maya. It is not a linear algebra or vector math course; it should give you what you need to follow along in Maya while you learn with online materials. Khan Academy is a great online learning resource for math, and Mathematics for Computer Graphics and Linear Algebra and its Applications are very good books. Gilbert Strang, the author of Linear Algebra and its Applications, has his entire MIT Linear Algebra course available here in video form. Also, Volume 2 of Complete Maya Programming has some vector math examples in MEL and C++.

vector_wikipedia

VECTORS

Think of the white vector above as a movement. It does have three scalar values (ax, ay, az), sure, but do not think of a vector as a point or a position. When you see a vector, I believe it helps to imagine it as a movement from 0,0,0 – an origin. We don’t know where it started, we only know the movement.

A vector has been normalized, or is considered a unit vector, when its length is one. This is achieved by dividing each component by the vector’s length.
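A quick sketch of that in plain Python, just to make the idea concrete:

import math
 
v = (3.0, 0.0, 4.0)
length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)     #5.0
unit = (v[0]/length, v[1]/length, v[2]/length)      #(0.6, 0.0, 0.8), length 1.0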

VECTOR LIBRARIES

There are many Python libraries dedicated to vector math, but none ship with Python itself. I have tried NumPy, then pyEuclid, and finally piMath. It can definitely be a benefit to load the same vector class across multiple apps like Maya, MotionBuilder, etc. But I used those at a time when MotionBuilder had no vector class, and before Maya had a Python API. Today, I use the vector class built into the Maya Python API (2.0), which wraps the underlying Maya C++ code: MVector

I had to call out 2.0 above because in the old API you have to ‘cast’ your vectors to and from Python types, meaning that classes like MVector (Maya’s vector class) don’t accept Python objects like lists or tuples. This is still the case with the 2014 SWIG implementation of the default API, but not with API 2.0. One solution is to override the MVector class so that it accepts Python lists and tuples, essentially doing the cast for you:

import maya.OpenMaya as om   #old API
 
class MVector(om.MVector):
    def __init__(self, *args):
        #accept a single list/tuple of three values, otherwise pass the args straight through
        if len(args) == 1 and isinstance(args[0], (list, tuple)) and len(args[0]) == 3:
            om.MVector.__init__(self, args[0][0], args[0][1], args[0][2])
        else:
            om.MVector.__init__(self, *args)

But that aside, just use Maya Python API 2.0:

#import API 2.0
import maya.api.OpenMaya as om
#import old API
import maya.OpenMaya as om_old

 

CREATING VECTORS IN MAYA

Let’s first create two cubes, and move them

import maya.cmds as cmds
import maya.api.OpenMaya as om
cube1, cube2 = cmds.polyCube()[0], cmds.polyCube()[0]
cmds.xform(cube2, t=(1,2,3))
cmds.xform(cube1, t=(3,5,2))

Let’s get the translation of each, and store those as MVectors

t1, t2 = cmds.xform(cube1, t=1, q=1), cmds.xform(cube2, t=1, q=1)
print t1,t2
v1, v2 = om.MVector(t1), om.MVector(t2)
print v1, v2

This will return the translation in the form [x, y, z], and also the MVector, which will print: (x, y, z); in the old API it prints: <__main__.MVector; proxy of <Swig Object of type ‘MVector *’ at 0x000000002941D2D0> >. That is a SWIG-wrapped C++ object; API 2.0 prints the vector itself.

Note: I just told you to think of vectors as a movement and not as a position, and the first example I give stores a translation in a vector. Maybe not the best, but remember: this translation is really being stored as a movement in space from an origin.

So let’s start doing stuff and things.
 

LENGTH / DISTANCE / MAGNITUDE

We have two translations, both stored as vectors. Let’s get the distance between them: to do this, we make a new vector that describes a ray from one location to the other and then find its length, or magnitude. We get that vector by subtracting each component of v1 from v2:

v = v2-v1
print v

This results in ‘-2.0 -3.0 1.0’.

To get the length of the vector we take the square root of the sum of the squared components, sqrt(x^2 + y^2 + z^2), but as we haven’t covered the math module yet, let’s just ask the MVector for its ‘length’:

print om.MVector(v2-v1).length()

This will return 3.74165738677, which, if you snap a measure tool on the cubes, you can verify:

distance

Use Case: Distance Check

As every joint in a hierarchy lives in its parent’s space, a joint’s ‘magnitude’ is its length. Let’s create a lot of joints, then select them by joint length.

import maya.cmds as cmds
import random as r
import maya.api.OpenMaya as om
 
root = cmds.joint()
jnts = []
 
for i in range(0, 2000):
    cmds.select(cl=1)
    jnt = cmds.joint()
    trans = (r.randrange(-100,100), r.randrange(-100,100), r.randrange(-100,100))
    cmds.xform(jnt, t=trans)
    jnts.append(jnt)
 
cmds.parent(jnts, root)

joint_dist

So we’ve created this cloud of joints, but let’s just select those joints with a joint length of less than 50.

sel = []
for jnt in jnts:
    v = om.MVector(cmds.xform(jnt, t=1, q=1))
    if v.length() < 50: sel.append(jnt)
 
cmds.select(sel)

 

DOT PRODUCT / ANGLE BETWEEN TWO VECTORS

The dot product is a scalar value obtained by multiplying the corresponding components of two vectors and summing the results. That on its own doesn’t make much sense, so I will tell you that the dot product is extremely useful for finding the angle between two vectors, or for checking which general direction something is pointing.

dot = v1*v2
print dot
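If you want to see what that operator is doing under the hood, the same scalar comes from multiplying the corresponding components and summing them:

print v1.x*v2.x + v1.y*v2.y + v1.z*v2.z
#same value as v1*v2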

USE CASE: Direction Test

direction

The dot product of two normalized vectors will always be between -1.0 and 1.0. If the dot product is greater than zero, the vectors are pointing in the same general direction; zero means they are perpendicular; less than zero means they point in opposite directions. So let’s loop through our joints and select those that are roughly facing the x direction:

sel = []
for jnt in jnts:
    v = om.MVector(cmds.xform(jnt, t=1, q=1)).normal()
    dot = v*om.MVector([1,0,0])
    if dot > 0.7: sel.append(jnt)
cmds.select(sel)

USE CASE: Test World Colinearity

This one comes from last week in the office: one of my guys wanted to know how to check which way in the world something was facing. I believe it was to derive some information from arbitrary skeletons. This builds on the above by getting each axis vector of a node in world space.

def getLocalVecToWorldSpace(node, vec=om.MVector.kXaxisVector):
    matrix = om.MGlobal.getSelectionListByName(node).getDagPath(0).inclusiveMatrix()
    vec = (vec * matrix).normal()
    return vec
 
 
def axisVectorColinearity(node, vec):
    vec = om.MVector(vec)
 
    x = getLocalVecToWorldSpace(node, vec=om.MVector.kXaxisVector)
    y = getLocalVecToWorldSpace(node, vec=om.MVector.kYaxisVector)
    z = getLocalVecToWorldSpace(node, vec=om.MVector.kZaxisVector)
 
    #return the dot products
    return {'x': vec*x, 'y':vec*y, 'z':vec*z}
 
jnt = cmds.joint()
print axisVectorColinearity(jnt, [0,0,1])

You can rotate the joint around and you will see which axis is most closely pointing to the world space vector you have given as an input.

USE CASE: Angle Between Vectors

angle

When working with unit vectors, we can take the arc cosine of the dot product to derive the angle between the two vectors, but this requires trigonometric functions, which are not available in our vector class; for this we must import the math module. Scrapping the code above, let’s find the angle between two joints:

import maya.cmds as cmds
import maya.api.OpenMaya as om
import math
 
jnt1 = cmds.joint()
cmds.select(cl=1)
jnt2 = cmds.joint()
cmds.xform(jnt2, t=(0,0,10))
cmds.xform(jnt1, t=(10,0,0))
cmds.select(cl=1)
root = cmds.joint()
cmds.parent([jnt1, jnt2], root)
 
v1 = om.MVector(cmds.xform(jnt1, t=1, q=1)).normal()
v2 = om.MVector(cmds.xform(jnt2, t=1, q=1)).normal()
 
dot = v1*v2
print dot
print math.acos(dot)
print math.acos(dot) * 180 / math.pi

So at the end here, the arc cosine of the dot product returns the angle in radians (1.57079632679), which we convert to degrees by multiplying by 180 and dividing by pi (90.0). To check your work: there is no angle tool in Maya, but you can create a circle shape and set its sweep degrees to your result.

Now that you know how to convert radians to an angle, if you store the result of the above in an MAngle class, you can ask for it however you like:

print om.MAngle(math.acos(dot)).asDegrees()

Now that you know how to do this, there is an even easier way: using the angle function of the MVector class, you can ask one vector for the angle to a second vector:

print v1.angle(v2)

There are also useful methods like v1.rotateBy(r,r,r) for an offset, and v1.rotateTo(v2). I say (r,r,r) in my example, but rotateBy takes angles or radians.

CHALLENGE: Can you write your own rad_to_deg and deg_to_rad utility methods?

USE CASE: Orient-Driver

poseDriver
Moving along, let’s apply these concepts to something more interesting: let’s drive a blendshape based on the orientation of a joint. Since the dot product is a scalar value, we can pipe it right into a blendshape: when the dot product is 1.0, we know that the orientations match; when it’s 0, we know they are perpendicular.

vecPoseDriver

We will use a locator constrained to the child to help in deriving a vector. The fourByFourMatrix stores the original position of the locator. I tried using the holdMatrix node, which should store a cached version of the original locator matrix, but it kept updating. (?) We use the vectorProduct node in ‘dot product’ mode to get the dot product of the original vector and the current vector of the joint. We then pipe this value into the weight of the blendshape node.

Now, this simple example doesn’t take twist into account, and we aren’t setting a falloff or cone: the weight will be 1.0 when the vectors align (blendshape on 100%) and 0.0 when they’re perpendicular (blendshape on 0%). I also don’t clamp the dot product, so the blendshape input can go to -1.
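If you want to experiment with this, here’s a minimal cmds sketch of the same idea. It is not the exact network from the image above (I’m skipping the fourByFourMatrix and just snapshotting the rest vector into the vectorProduct), and the joint/blendshape names are hypothetical:

import maya.cmds as cmds
 
#hypothetical names: 'arm_child_jnt' is the joint whose direction we track,
#'pose_blendShape' is an existing blendshape node with at least one target
loc = cmds.spaceLocator(name='poseDriver_loc')[0]
cmds.pointConstraint('arm_child_jnt', loc)
#(in a real rig, parent the locator so its translate reads as a direction from the driving joint's pivot)
 
#vectorProduct in 'Dot Product' mode (operation = 1), normalized so the weight stays in -1..1
vp = cmds.createNode('vectorProduct', name='poseDriver_dot')
cmds.setAttr(vp + '.operation', 1)
cmds.setAttr(vp + '.normalizeOutput', 1)
 
#snapshot the rest direction into input1, feed the live direction into input2
rest = cmds.getAttr(loc + '.translate')[0]
cmds.setAttr(vp + '.input1', rest[0], rest[1], rest[2], type='double3')
cmds.connectAttr(loc + '.translate', vp + '.input2')
 
#the dot product drives the blendshape: 1.0 when aligned, 0.0 when perpendicular
cmds.connectAttr(vp + '.outputX', 'pose_blendShape.weight[0]')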
 

CROSS PRODUCT / PERPENDICULAR VECTOR TO TWO VECTORS

The cross product results in a vector that is perpendicular to two vectors. Generally you would compute (v1.y*v2.z-v1.z*v2.y, v1.z*v2.x-v1.x*v2.z, v1.x*v2.y-v1.y*v2.x), but luckily, the vector class manages this for us via the ‘^’ operator:

cross = v1^v2
print cross

USE CASE: Building a coordinate frame

crossProduct

If we take the cross product v1^v2 above, and then cross that result with v1 again, (v1 x v2) x v1, we now have a third perpendicular vector and can build a coordinate system, or ‘orthonormal basis’. A useful example would be aligning a node to a NURBS curve using the pointOnCurveInfo node.

crossProduct

In the example above, we are using two cross products to build a matrix from the tangent of the pointOnCurveInfo and its position, then decomposing this matrix to set the orientation and position of a locator.
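Here’s a rough API 2.0 sketch of the same construction in Python (no nodes), assuming you already have a tangent and a position from somewhere, a pointOnCurveInfo or otherwise; the values and the locator are just for illustration:

import maya.cmds as cmds
import maya.api.OpenMaya as om
 
def build_frame(tangent, position, up=(0, 1, 0)):
    #build an orthonormal basis from a tangent, a temporary 'up', and two cross products
    x = om.MVector(tangent).normal()
    z = (x ^ om.MVector(up)).normal()   #first cross: perpendicular to tangent and up
    y = (z ^ x).normal()                #second cross: the true 'up' of the frame
    #note: this degenerates if the tangent is parallel to 'up', so pick a different up vector in that case
    return om.MMatrix([x.x, x.y, x.z, 0,
                       y.x, y.y, y.z, 0,
                       z.x, z.y, z.z, 0,
                       position[0], position[1], position[2], 1])
 
loc = cmds.spaceLocator()[0]
m = build_frame(tangent=(0.707, 0.707, 0), position=(1, 2, 3))
cmds.xform(loc, ws=True, matrix=[m[i] for i in range(16)])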



Many people put content like this behind a paywall.
If you found this useful, please consider buying me a beer.

posted by Chris at 11:49 PM  

Monday, February 11, 2013

Object Oriented Python in Maya Pt. 1

I have written many tools at different companies; I taught myself, and don’t have a CS degree. I enjoy shot-sculpting and skinning, and have been known to tweak parameters of on-screen visuals for hours; I don’t consider myself a ‘coder’ – I still can’t allocate my own memory. I feel I haven’t really used OOP from an architecture standpoint, so I bought an OOP book and set out on a journey of self-improvement.

‘OOP’ In Maya

In Maya, you often use DG nodes as ‘objects’. At Crytek we have our own modular nodes that create meta-frameworks encapsulating the character pipeline at multiple levels (characters, characterParts, and rigParts). Without knowing it, we were using Object Oriented Analysis when designing our frameworks, and even had some charts that look quite a bit like UML. DG node networks are often wired together with message connections; this is akin to a pointer to the object in memory, whereas a Python ‘object’, I felt, could always easily lose its mapping to the scene.

It is now possible with the OpenMaya C++ API to store a pointer to the DG node in memory and just request the full DAG path any time you want it; also, PyMel objects are Python classes that link to the DG node even when the string name changes.

“John is 47 Years Old and 6 Feet Tall”

Classes always seemed great for times when I had a bunch of data objects; the classic uses are books in a library, or customers: John is 47 years old and likes the color purple. Awesome. However, in Maya, all our data is in nodes already, those nodes have attributes, and those attributes serialize into a Maya file when I save, so I never really felt the need to use classes.

Although, all this ‘getting’, ‘setting’ and ‘listing’ really grows tiresome, even when you have custom methods to do it fairly easily.

It was difficult to find any really useful examples of OOP with classes in Maya. Most of our code is for ‘constructing’: building a rig, building a window, etc. Code that runs in a linear fashion and does ‘stuff’. There’s no huge architecture, the architecture is Maya itself.

Class Warfare

I wanted to package my information in classes and pass that back and forth in a more elegant way –at all times, not just while constructing things. So for classes to be useful to me, I needed them to synchronously exist with DG nodes.

I also didn’t want to have to get and set the information when syncing the classes with DG nodes, that kind of defeats the purpose of Python classes IMO.

Any time I opened a tool I would ‘wrap’ DG nodes in classes that harnessed the power of Python and OOP. To do this meant diving into more of the deep end, but since that was what was useful to me, that’s what I want to talk about here.

To demonstrate, let’s construct this example:

import maya.cmds as cmds
 
#the setup
loc = cmds.spaceLocator()
cons = [cmds.circle()[0], cmds.circle()[0]]
meshes = [cmds.sphere()[0], cmds.sphere()[0], cmds.sphere()[0]]
cmds.addAttr(loc, sn='controllers', at='message')
cmds.addAttr(cons, sn='rigging', at='message')
for con in cons: cmds.connectAttr(loc[0] + '.controllers', con + '.rigging')
cmds.addAttr(loc, sn='rendermeshes', at='message')
cmds.addAttr(meshes, sn='rendermesh', at='message')
for mesh in meshes: cmds.connectAttr(loc[0] + '.rendermeshes', mesh + '.rendermesh')

So now we have this little node network:

node_network

Now, let’s wrap this network in a class. We are going to use @property to give us the functionality of an attribute, but it’s really a method that runs and returns a value (from the DG node) when the ‘attribute’ is queried. I believe using properties is key to harnessing the power of classes in Maya.

class GameThing(object):
	def __init__(self, node):
		self.node = node
 
	#controllers
	@property
	def controllers(self):
		return cmds.listConnections(self.node + ".controllers")

So now we can query the ‘controllers’ attribute/property, and it returns our controllers:

test = GameThing(loc[0])
print test.controllers
##>>[u'nurbsCircle2', u'nurbsCircle1']

Next up, we add a setter, which runs code when you set a property ‘attribute’:

class GameThing(object):
	def __init__(self, node):
		self.node = node
 
	#controllers
	@property
	def controllers(self):
		return cmds.listConnections(self.node + ".controllers")
 
	@controllers.setter
	def controllers(self, cons):
		#disconnect existing controller connections
		for con in cmds.listConnections(self.node + '.controllers'):
			cmds.disconnectAttr(self.node + '.controllers', con + '.rigging')
 
		for con in cons:
			if cmds.objExists(con):
				if not cmds.attributeQuery('rigging', n=con, ex=1):
					cmds.addAttr(con, longName='rigging', attributeType='message', s=1)
				cmds.connectAttr((self.node + '.controllers'), (con + '.rigging'), f=1)
			else:
				cmds.error(con + ' does not exist!')

So now when we set the ‘controllers’ attribute/property, it runs a method that blows away all current message connections and adds new ones connecting your cons:

test = GameThing(loc[0])
print test.controllers
##>>[u'nurbsCircle2', u'nurbsCircle1']
test.controllers = [cmds.nurbsSquare()[0]]
print test.controllers
##>>[u'nurbsSquare1']

To me, something like properties makes classes infinitely more useful in Maya. For a short time at Crytek we tried to engineer a DG node that, when an attr changed, could eval a string with a similar name on the node. This is essentially what a property can do, and it’s pretty powerful. Take a moment to look through the code of some of the real ‘heavy lifters’ in the field, like zooToolBox, and you’ll see @property all over the place.

I hope you found this as useful as I would have.

posted by Chris at 1:03 AM  

Friday, January 18, 2013

Moving to ‘Physically-Based’ Shading

damo_engine

At the SIGGRAPH Autodesk User Group we spoke a lot about our character technology and the switch to Maya. One area that we haven’t spoken so much about is the next-gen updates to our shading and material pipeline; however, Nicolas and I have an interview out in Making Games where we talk about that in detail publicly for the first time, so I can mention it here. One of the reasons we have really focused on character technology is that it touches so many departments and is a very difficult issue to crack; at Crytek we have a strong history of lighting and rendering.

What is ‘Physically-Based’ Shading?

The first time I ever encountered a physically-based pipeline was when working at ILM. The guys had gotten tired of having to create different light setups and materials per shot or per sequence. Moving to a more physically-based shading model meant that we would not waste so much time re-lighting and tweaking materials, and would also get a more natural, better initial result, quicker. [Ben Snow’s 2010 PBR SIGGRAPH Course Slides]

WHAT IS MEANT BY ‘PHYSICAL’

image credit: http://myphysicswebschool.blogspot.de/

A physically based shading model reacts much more like real-world light simulation; one of the biggest differences is that the amount of reflected light can never be more than the incoming amount that hit the surface. Older lighting models tended to have overly bright and overly broad specular highlights. With the Lambert/Blinn-Phong model it was possible to have many situations where a material emitted more light than it received. An interesting caveat of physically-based shading is that the user no longer has control over the specular response (more under ‘Difficult Transition’ below). Because the way light behaves is much more realistic and natural, materials authored for this shading model work equally well in all lighting environments.

Geek Stuff: ‘Energy conservation’ is a term that you might often hear used in conjunction with physically-based lighting; here’s a quote from the SIGGRAPH ’96 course notes that I always thought was a perfect explanation of reflected diffuse and specular energy:

“When light hits an object, the energy is reflected as one of two components; the specular component (the shiny highlight) and the diffuse (the color of the object). The relationship of these two components is what defines what kind of material the object is. These two kinds of energy make up the 100% of light reflected off an object. If 95% of it is diffuse energy, then the remaining 5% is specular energy. When the specularity increases, the diffuse component drops, and vice versa. A ping pong ball is considered to be a very diffuse object, with very little specularity and lots of diffuse, and a mirror is thought of as having a very high specularity, and almost no diffuse.”

PHYSICALLY- PLAUSIBLE

It’s important to understand that everything is a hack; whether it’s V-Ray or a game engine, we are just talking about different levels of hackery. Game engines often take the cake for approximations and hacks; one of my guys once said, ‘Some people just remove spec maps from their pipeline and all of a sudden they’re physically-based’. It’s not just the way our renderers simulate light that is an approximation; it’s important to remember that we feed the shading model with physically plausible data as well. When you make an asset, you are making a material that is trying to mimic certain physical characteristics.

DIFFICULT TRANSITION

Once physics get involved, you can cheat much less, and in film we cheeeeeaaat. Big time. Ben Snow, the VFX Supe who ushered in the change to a physically-based pipeline at ILM was quoted in VFXPro as saying: “The move to the new [pipeline] did spark somewhat of a holy war at ILM.” I mentioned before that the artist loses control of the specular response, in general, artists don’t like losing control, or adopting new ways of doing things.

WHY IT IS IMPORTANT FOR GAMES AND REAL-TIME RENDERING

Aside from the more natural lighting and rendering, in an environment where the player determines the camera, and often the lighting, it’s important that materials work under all possible lighting scenarios. As the Product Manager of CINEBOX, I was constantly having our renderer compared to Mental Ray, PRMAN and others; the team added BRDF support and paved the way for the physically-based rendering that we hope to ship in 2013 with Ryse.

microcompare05

General Overview for Artists

At Crytek, we have always added great rendering features, but never really took a hard focus on consistency in shading and lighting. Like ILM in my example above, we often tweaked materials for the lighting environment they were to be placed in.

GENERAL RULES / MATERIAL TYPES

Before we start talking about the different maps and material properties, you should know that in a physically-based pipeline you will have two slightly different workflows, one for metals, and one for non-metals. This is more about creating materials that have physically plausible values.

Metals:

  • The specular color for metal should always be above sRGB 180
  • Metal can have colored specular highlights (for gold and copper for example)
  • Metal has a black or very dark diffuse color, because metals absorb all light that enters underneath the surface, they have no ‘diffuse reflection’

Non-Metals:

  • Non-metal has monochrome/gray specular color. Never use colored specular for anything except certain metals
  • The sRGB color range for most non-metal materials is usually between 40 and 60. It should never be higher than 80/80/80
  • A good clean diffuse map is required

GLOSS

gloss_chart

At Crytek, we call the map that determines the roughness the ‘gloss map’; it’s actually the inverse of roughness, but we found this easier to author. This is by far one of the most important maps, as it determines the size and intensity of specular highlights, but also the contrast of the cube map reflection, as you see above. A good detail normal map can make a surface feel like it has a certain ‘roughness’, but you should start thinking about the gloss map as adding a ‘microscale roughness’. Look above at how, as the roughness increases, so does the breadth of the specular highlight. Here is an example from our CryENGINE documentation that was written for Ryse:

click to enlarge

click to enlarge

click to enlarge

click to enlarge

DIFFUSE COLOR

Your diffuse map should be a texture with no lighting information at all. Think a light with a value of ‘100’ shining directly onto a polygon with your texture. There should be no shadow or AO information in your diffuse map. As stated above, a metal should have a completely black diffuse color.

Geek Stuff: Diffuse can also be referred to as ‘albedo‘; the albedo is the measure of diffuse reflectivity. This term is primarily used to scare artists.

SPECULAR COLOR

As previously discussed, non-metals should only have monochrome/gray-scale specular color maps. Specular color is a real-world physical value and your map should be basically flat color; you should use existing values and not introduce noise or variation. The spec color map is not a place to be artistic: it stores real-world values. You can find many tables online that have plausible color values for specular color; here is an example:

Material sRGB Color Linear (Blend Layer)
Water 38 38 38 0.02
Skin 51 51 51 0.03
Hair 65 65 65 0.05
Plastic / Glass (Low) 53 53 53 0.03
Plastic High 61 61 61 0.05
Glass (High) / Ruby 79 79 79 0.08
Diamond 115 115 115 0.17
Iron 196 199 199 0.57
Copper 250 209 194 N/A
Gold 255 219 145 N/A
Aluminum 245 245 247 0.91
Silver 250 247 242 N/A
If a non-metal material is not in the list, use a value between 45 and 65.
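If you’re curious how the sRGB and linear columns relate, here’s a rough sketch using the common gamma 2.2 approximation (the true sRGB transfer function is a piecewise curve, so expect values to land close to, but not exactly on, the table):

#convert an 8-bit sRGB value to an approximate linear value
def srgb_to_linear(value_8bit):
    return round((value_8bit / 255.0) ** 2.2, 2)
 
print srgb_to_linear(38)   #water -> 0.02
print srgb_to_linear(51)   #skin  -> 0.03
print srgb_to_linear(65)   #hair  -> 0.05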

Geek Stuff: SPECULAR IS EVERYWHERE: In 2010, John Hable did a great post showing the specular characteristics of a cotton t-shirt and other materials that you wouldn’t usually consider having specular.

EXAMPLE ASSET:

Here you can see the maps that generate this worn, oxidized lion sculpture.

rust

click to enlarge

rust2

EXAMPLES IN AN ENVIRONMENT

640x

See above how there are no variations in the specular color map? See how the copper items on the left have a black diffuse texture? Notice there is no variation in the solid colors of the specular color maps.

SETTING UP PHOTOSHOP

color_settings

In order to create assets properly, we need to set up our content creation software properly, in this case: Photoshop. If you go to Edit>Color Settings…, set the dialog like the above. It’s important that you author textures in sRGB.

Geek Stuff: We author in sRGB because it gives us more precision in darker colors, and reduces banding artifacts. The eye has 4.5 million cones that can perceive color, but 90 million rods that perceive luminance changes. Humans are much more sensitive to contrast changes than to color changes!

Taking the Leap: Tips for Leads and Directors

New technologies that require paradigm shifts in how people work, or how they think about reaching an end artistic result, can be difficult to integrate into a pipeline. At Crytek I am the Lead/Director in charge of the team that is making the initial shift to physically-based lighting; I also led the reference trip, and managed the hardware requests to get key artists on calibrated wide-gamut display devices. I am just saying this to put the next items in some kind of context.

QUICK FEEDBACK AND ITERATION

It’s very important that your team be able to test their assets in multiple lighting conditions. The easiest route is to make a test level where you can cycle lighting conditions from many different game levels, or sampled lighting from multiple points in the game. The default light in this level should be broad daylight IMO, as it’s the hardest to get right.

USE EXAMPLE ASSETS

I created one of the first example assets for the physically based pipeline. It was a glass inlay table that I had at home, which had wooden, concrete (grout), metal, and multi-colored glass inlay. This asset served as a reference asset for the art team. Try to find an asset that can properly show the guys how to use gloss maps; IMO, understanding how roughness affects your asset’s surface characteristics is maybe the biggest challenge when moving to a physically-based pipeline.

TRAIN KEY PERSONNEL

As with rolling out any new feature, you should train a few technically-inclined artists to help their peers along. It’s also good to have the artists give feedback to the graphics team as they begin really cutting their teeth on the system. On Ryse, we are doing the above, but also dedicating a single technical artist to helping with environment art-related technology and profiling.

CHEAT SHEET

It’s very important to have a ‘cheat sheet’; this is a sheet we created on the Ryse team to allow an artist to use the color picker to sample common ‘plausible’ values.

SPEC_Range_new.bmp

click to enlarge

HELP PEOPLE HELP THEMSELVES

We have created a debug view that highlights assets whose specular color was not in a physically-plausible range. We are very in favor of making tools to help people be responsible, and validate/highlight work that is not. We also allowed people to set solid specular values in the shader to limit memory consumption on simple assets.

CALIBRATION AND REFERENCE ACQUISITION

calibrate

Above are two things that I actually carry with me everywhere I go. The X-Rite ColorChecker Passport, and the Pantone Huey Pro monitor calibration toolset. Both are very small, and can be carried in a laptop bag. I will go into reference data acquisition in another post. On Ryse we significantly upgraded our reference acquisition pipeline and scanned a lot of objects/surfaces in the field.

 

TECHNICAL IMPROVEMENTS BASED ON PRODUCTION USE

Nicolas Shulz presented many improvements made based on production use at GDC 2014. His slides are here. He details things like the importance of specular filtering to preserve highlights as objects recede into the distance, and why we decided to couple normals and roughness.

UPDATE: We’ve now shipped Ryse, and I have tried to update the post a little. I was an invited speaker at HPG 2014, where I touched on this topic a bit, and can now update this post with some details and images (see Tips for Leads and Directors). Nicolas also spoke at GDC 2014 and I have linked to his slides above. Though this post focuses on environments, in the end, with the amount of armor on the characters, the PBR pipeline was really showcased everywhere. Here’s an image of multiple passes of Marius’ final armor:

marus_breackUp

click to enlarge

posted by Chris at 7:26 PM  

Monday, July 16, 2012

CINEBOX SIGGRAPH Talk and Studio Workshops

CRYENGINE CINEBOX

I am giving a talk at SIGGRAPH 2012 entitled ‘Film/Game Convergence: What’s Taking So Long?‘ where I discuss the inherent differences between games and film and go over a few case studies of projects that attempted to use a game engine for film previs. I also talk a bit about the development of our CINEBOX application, the decisions we had to make, and how we dealt with many of the issues previous attempts have run into.

STUDIO WORKSHOPS

I will be giving two more Studio Workshops this year, the first is a followup to last year’s Introduction to Python, entitled ‘Python Scripting in Maya‘. The other workshop is ‘Building a Game Level‘, which is the same basic workshop I gave last year where I show people how to make a playable game level in CryEngine in an hour. Studio Workshops are hands-on sessions where each attendee has a computer and follows along with the instructor. It’s a great chance for people of all ages to learn new things.

posted by admin at 8:06 PM  

Tuesday, October 25, 2011

Quick Note About Range(), Modulus, and Step

Maybe it’s just me, but I often find myself parsing weird ASCII text files from others. Sometimes only the authors knew what the data was, and there’s no real markup. Take this joint list for example:

143 # bones
root ground
-1
0 0 0
root hips
0
0 0.9512207 6E-08
spine 1
1
4E-08 0.9522207 1.4E-07
spine 2
2
3E-07 1.0324 8.3E-07
spine 3
3
5.6E-07 1.11357 1.53E-06
spine 4
4
8.2E-07 1.194749 2.22E-06
head neck lower

So the first line is the number of joints, then the file continues in three-line intervals from the root outwards: joint name, parent index, position. I used to parse this with a pretty obtuse loop using the modulus operator. Basically, modulus is the remainder left over after division, so X%Y gives you the remainder of X divided by Y; here’s an example:

for i in range(0,20+1):
	if i%2 == 0: print i
#>> 0
#>> 2
#>> 4
#>> 6
#>> 8
#>> 10

The smart guys out there see where this is goin.. so I never knew range had a ‘step’ argument. (Or I believe I did, I think I actually had this epiphany maybe two years ago, but my memory is that bad.) So parsing the above is as simple as this:

#read the ascii file from above ('joints.txt' and jointScale are hypothetical)
f = open('joints.txt', 'r')
lines = f.readlines()
f.close()
numJnts = int(lines[0].split(' ')[0])
jointScale = 1.0
 
jnts = []
for i in range(1,numJnts*3+1,3):
	jnt = lines[i].strip()
	parent = int(lines[i+1].strip())
	posSplit = lines[i+2].strip().split(' ')
	pos = (float(posSplit[0])*jointScale, \
	float(posSplit[1])*jointScale, float(posSplit[2])*jointScale)
	jnts.append([jnt, parent, pos])

Thanks to phuuchai on #python (efnet) for nudging me to RTFM!

posted by admin at 1:42 AM  

Wednesday, October 12, 2011

SIGGRAPH 2011: Intro To Python Course

I gave a workshop/talk at SIGGRAPH geared toward introducing people to Python. There were ~25 people on PCs following along, and awkwardly enough, many more than that standing and watching. I prefaced my talk with the fact that I am self-taught and by no means an expert. That said, I have created many python tools people use every day at industry-leading companies.

Starting from zero, in the next hour I aimed to not only introduce them to Python, but get them doing cool, usable things like:

  • Iterating through batches/lists
  • Reading / writing data to excel files
  • Wrangling data from one format to another in order to create a ‘tag cloud’

Many people have asked for the notes, and I only had rough notes. I love Python, and I work with this stuff every day, so I have had to really go back and flesh out some of what I talked about. This tutorial has a lot less of the general chit-chat and information. I apologize for that.

Installation / Environment Check


Let’s check to see that you have the tools properly installed. If you open the command prompt and type ‘python’ you should see this:

So Python is correctly installed. For the following, you can either follow along in the cmd window (more difficult) or in IDLE, the IDE that Python ships with (easier). IDLE can be found by typing IDLE into the Start menu:

Variables


Variables are pieces of information you store in memory; I will talk a bit about the different types of variables.

Strings

Strings are pieces of text. I assume you know that, so let’s just go over some quick things:

string = 'this is a string'
print string
#>>this is a string
num = '3.1415'
print num
#>>3.1415

One thing to keep in mind, the above is a string, not a number. You can see this by:

print num + 2
#>>Traceback (most recent call last):
#>>  File "basics_variables.py", line 5, in
#>>    print num + 2
#>>TypeError: cannot concatenate 'str' and 'int' objects

Python is telling you that you cannot add a number to a string of text. It does not know that ‘3.1415’ is a number. So let’s convert it to a number; this is called ‘casting’. We will ‘cast’ the string into a float and back:

print float(num) + 2
#>>5.1415
print str(float(num) + 2) + ' addme'
#>>5.1415 addme

Lists

Lists are the simplest ways to store pieces of data. Let’s make one by breaking up a string:

txt = 'jan tony senta michael brendon phillip jonathon mark'
names = txt.split(' ')
print names
#>>['jan', 'tony', 'senta', 'michael', 'brendon', 'phillip', 'jonathon', 'mark']
for item in names: print item
#>>jan
#>>tony
#>>senta
#>>michael
...

Split breaks up a string into pieces. You tell it what to break on; above, I told it to break on spaces with txt.split(‘ ‘). So all the people are stored in a List, which is like an Array or Collection in some other languages.
You can call up an item by its number, starting with zero:

print names[0], names[5]
#>>jan phillip

TIP: the [-1] index will return the last item in an array; here’s a quick way to get a file name from a path:

path = 'D:\\data\\dx11_PC_(110)_05_09\\Tools\\CryMaxInstaller.exe'
print path.split('\\')[-1]
#>>CryMaxInstaller.exe

Dictionaries

These store keys, and the keys reference different values. Let’s make one:

dict = {'sascha':'tech artist', 'harry': 142.1, 'sean':False}
print dict['sean']
#>>False

So this is good, but we’re only querying one value at a time; we want to see all the keys and their values. Here’s another way to do this, using .keys():

dict = {'sascha':'tech artist', 'harry': 142.1, 'sean':False}
for key in dict.keys(): print key, 'is', dict[key]
#>>sean is False
#>>sascha is tech artist
#>>harry is 142.1

So, dictionaries are a good way to store simple relationships of key and value pairs. In case you hadn’t noticed, I used some ‘floats’ and ‘ints’ above. A float is a number with a decimal, like 3.1415, and an ‘int’ is a whole number, like 10.

Creating Methods (Functions)


A method or function is like a little tool that you make. These building blocks work together to make your program.

Let’s say that you have to do something many times; you want to re-use this code and not copy/paste it all over. Let’s use the names example from above and make a function that takes a big string of names and returns an ordered list:

def myFunc(input):
	people = input.split(' ')
	people = sorted(people)
	return people
txt = 'jan tony senta michael brendon phillip jonathon mark'
orderedList = myFunc(txt)
print orderedList
#>>['brendon', 'jan', 'jonathon', 'mark', 'michael', 'phillip', 'senta', 'tony']

Basic Example: Create A Tag Cloud From an Excel Document


So we have an excel sheet, and we want to turn it into a hip ‘tag cloud’ to get people’s attention.
If we go to http://www.wordle.net/ you will see that in order to create a tag cloud, we need to feed it the sentences multiple times, and we need to put a tilde in between the words of the sentence. We can automate this with Python!

First, download the Excel sheet from me here: [info.csv]. The CSV filetype is a great way to read/write docs easily that you can give to others; they load in Excel easily.

file = 'C:\\Users\\chris\\Desktop\\intro_to_python\\info.csv'
f = open(file, 'r')
lines = f.readlines()
f.close()
print lines
#>> ['always late to work,13\n', 'does not respect others,1\n', 'does not check work properly,5\n', 'does not plan properly,4\n', 'ignores standards/conventions,3\n']

‘\n’ is a line break character; it means ‘new line’. We want to get rid of that, and we also want to store just the items and how many times they were listed.

file = 'C:\\Users\\chris\\Desktop\\intro_to_python\\info.csv'
f = open(file, 'r')
lines = f.readlines()
f.close()
dict = {}
for line in lines:
	split = line.strip().replace(' ','~').split(',')
	dict[split[0]] = int(split[1])
print dict
#>>{'ignores~standards/conventions': 3, 'does~not~respect~others': 1, 'does~not~plan~properly': 4, 'does~not~check~work~properly': 5, 'always~late~to~work': 13}

Now we have the data in memory in an easily readable way, let’s write it out to disk.

output = ''
for key in dict.keys():
	for i in range(0,dict[key]): output += (key + '\n')
f = open('C:\\Users\\chris\\Desktop\\intro_to_python\\test.txt', 'w')
f.write(output)
f.close()


There we go. In one hour you have learned to:

  • Read and write excel files
  • Iterate over data
  • Convert data sets into new formats
  • Write, read and alter ascii files

If you have any questions, or I left out any parts of the presentation you liked, reply here and I will get back to you.

posted by admin at 5:12 AM  

Monday, October 4, 2010

Writing Custom Perforce Plugins in Python

I recently wrote a custom tool to diff CryEngine layer files in P4, and was surprised how simple it was. What follows is a quick tutorial on adding custom python tools to Perforce.

Start by heading over to Tools>Manage Custom Tools… Then click ‘New’:

You can pass a lot of information to an external tool; here is a detailed rundown. As you see above, we pass the client spec (local) file name (%f) to a Python script. Let’s create a new script called ‘custom_tool.py’:

import sys
from PyQt4 import QtGui    
 
class custom_tool(QtGui.QMessageBox):
	def __init__(self, parent=None):
		QtGui.QMessageBox.__init__(self)
		self.setDetailedText(str(sys.argv))
		self.show()
 
if __name__ == "__main__":
	app = QtGui.QApplication(sys.argv)
	theTool = custom_tool()
	theTool.show()
	sys.exit(app.exec_())

What this does is simply spit out sys.argv in a way you can see it. So now you can feed any file you right-click in Perforce into a Python script:

If you would like to actually do something with a file or revision on the server and are passing the %F flag to get the depot file path, you then need to use p4 print to redirect the file contents (non-binary) to a local file:

p4.run_print('-q', '-o', localFile, depotFile)
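Putting that together, here’s a minimal sketch of fetching the depot revision to a temp file so you can diff or parse it (assuming the custom tool is set up to pass %F as the first argument, and that your P4 environment/tickets are already configured):

import os
import sys
import tempfile
from P4 import P4
 
depotFile = sys.argv[1]
localFile = os.path.join(tempfile.gettempdir(), os.path.basename(depotFile))
 
p4 = P4()
p4.connect()
p4.run_print('-q', '-o', localFile, depotFile)
p4.disconnect()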
posted by admin at 1:09 AM  

Thursday, August 26, 2010

Perforce Triggers in Python (Pt 1)

Perforce is a wily beast. A lot of companies use it, but I feel few outside of the IT department really have to deal with it much. As I work myself deeper and deeper into the damp hole that is asset validation, I have really been writing a lot of python to deal with certain issues; but always scripts that work from the outside.

Perforce has a system that allows you to write scripts that are run, server side, when any number of events are triggered. You can use many scripting languages, but I will only touch on Python.

Test Environment

To follow along here, you should set up a test environment. Perforce is freely downloadable, and free to use with 2 users. Of course you are going to need python, and p4python. So get your server running and add two users, a user and an administrator.

Your First Trigger

Let’s create the simplest Python script. It will be a submit trigger that says ‘Hello World’ and then passes or fails. If it passes, the item will be checked in to Perforce; if it fails, it will not. Exiting with a return code of ‘1’ is considered a fail, ‘0’ a pass.

import sys
 
print 'Hello World!'
print 'No checkin for you!'
sys.exit(1)

Ok, so save this file as hello_trigger.py. Now go to a command line and enter ‘p4 triggers’; this will open a text document. Edit that document to point to your trigger, like so (but point to the location of your script on disk):

Triggers:
	hello_trigger change-submit //depot/... "python X:/projects/2010/p4/hello_trigger.py"

Close/save the trigger TMP file and you should see ‘Triggers saved.’ echoed at the prompt. Now, when we try to submit a file to the depot, we will get this:

So: awesome, you just DENIED your first check-in!

Connecting to Perforce from Inside a Trigger

So we are now denying check-ins, but let’s try some other things; let’s connect to Perforce from inside a trigger.

import sys
from P4 import P4, P4Exception
 
p4 = P4()
 
try:
	#use whatever your admin l/p was
	#this isn't the safest, but it works at this beginner level
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
	info = p4.run("info")
	print info
	sys.exit(1)
 
#this will return any errors
except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)

So now when you try to submit a file to the depot you will get this:

Passing Info to the Trigger

Now we are running triggers and accepting or denying check-ins, but we really don’t know much about them. Let’s try to get enough info to decide whether or not we want the file to pass validation. Let’s make another Python trigger, test_trigger.py, and query something from the Perforce server in the submit trigger. To do this we need to edit our trigger file like so:

Triggers:
	test change-submit //depot/... "python X:/projects/2010/p4/test_trigger.py %user% %changelist%"

This will pass the user and changelist number into the Python script as arguments, the same way dragging/dropping passed args to Python in my previous example. So let’s set that up: save the script from before as ‘test_trigger.py’ as shown above, and add the following:

import sys
from P4 import P4, P4Exception
 
p4 = P4()
describe = []
 
try:
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
 
except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)
 
#sys.argv[1] is the user, sys.argv[2] is the changelist number
print str(sys.argv)
#ask the server to describe the changelist
describe = p4.run('describe', sys.argv[2])
print str(describe)
 
p4.disconnect()
sys.exit(1)

So, as you can see, it has returned the user and changelist number:

However, for this changelist number to be useful, we query p4 and ask the server to describe the changelist; this returns a lot of information about it.
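
For example, building on the script above (and assuming the tagged dictionary output p4python returns, where ‘depotFile’ is a list of the files in the changelist; the exact field names may vary), you could pull out the pieces you care about like this:

#a rough sketch of digging into the describe result from the script above
changelist = p4.run('describe', sys.argv[2])[0]
print changelist['user']        #the submitting user
print changelist['desc']        #the changelist description
for depot_file in changelist.get('depotFile', []):
	print depot_file            #each file in the changelist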

Where to Go From Here

The few simple things shown here give you the tools to do much more. Here are some examples of triggers that can be created with the know-how above:

  • Deny check-ins of a certain filetype, like compiled source files/assets (see the sketch after this list)
  • Deny check-ins whose hash digest matches an existing file on the server
  • Deny/allow a certain type of file check-in from a user in a certain group
  • Email a lead any time a file in a certain folder is updated
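
As a concrete example of the first bullet, here is a rough sketch of a change-submit trigger that rejects changelists containing certain file extensions. It assumes a trigger line that passes just %changelist% (like the earlier examples), that the admin login from before is still valid, and that ‘depotFile’ is the field name in p4python’s tagged describe output; the blocked extension list is purely illustrative:

import sys
from P4 import P4, P4Exception
 
#extensions we do not want checked in (illustrative list)
BLOCKED = ('.exe', '.obj', '.pdb')
 
p4 = P4()
try:
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)
 
#describe the changelist passed in as %changelist%
changelist = p4.run('describe', sys.argv[1])[0]
p4.disconnect()
 
#fail the submit if any file in the changelist has a blocked extension
for depot_file in changelist.get('depotFile', []):
	if depot_file.lower().endswith(BLOCKED):
		print 'Check-in denied, blocked filetype: ' + depot_file
		sys.exit(1)
sys.exit(0)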

Did you find this helpful? What creative triggers have you written?

posted by admin at 12:33 AM  

Monday, June 28, 2010

Python: Simple Decorator Example

In Python, a decorator is a callable that wraps a function or class, a bit like a macro, allowing you to inject or modify code in functions or classes. I was turned onto this by my friend Matt Chapman at ILM, but never fully grasped its importance.

class myDecorator(object):
	def __init__(self, f):
		self.f = f
	def __call__(self):
		print "Entering", self.f.__name__
		self.f()
		print "Exited", self.f.__name__
 
@myDecorator
def aFunction():
	print "aFunction running"
 
aFunction()

When you run the code above you will see the following:

>>Entering aFunction
>>aFunction running
>>Exited aFunction

So when we call a decorated function, the call is routed through the decorator, which can run code before and after the original function. You can wrap any existing function; here is an example of wrapping functions for error reporting:

class catchAll:
	def __init__(self, function):
		self.function = function
 
	def __call__(self, *args):
		try:
			return self.function(*args)
		except Exception, e:
			print "Error: %s" % (e)
 
@catchAll
def unsafe(x):
  return 1 / x
 
print "unsafe(1): ", unsafe(1)
print "unsafe(0): ", unsafe(0)

So when we run this and divide by zero we get:

unsafe(1):  1
unsafe(0):  Error: integer division or modulo by zero

Using decorators you can make sweeping changes to existing code with minimal effort; with an error-reporting decorator like the one above, you could go back and just sprinkle these into older code.
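
Since the @ syntax is just shorthand for rebinding a name, you can retrofit a decorator onto older code without touching the original function definitions. A quick sketch using the catchAll class above (someExistingFunction is just a stand-in for any function already in your codebase):

#using the catchAll decorator class defined above
def someExistingFunction(x, y):
	return x / y
 
#equivalent to decorating it with @catchAll at definition time
someExistingFunction = catchAll(someExistingFunction)
 
print someExistingFunction(10, 2)
#>>5
someExistingFunction(1, 0)
#>>Error: integer division or modulo by zero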

posted by admin at 9:06 AM  

Saturday, June 26, 2010

Python: Special Class Methods

I have really been trying to learn some Python fundamentals lately, reading some books and taking an online class. So: wow. I can’t believe that I have written so many tools, some used by really competent people at large companies, without really understanding polymorphism and other basic Python concepts.

Here’s an example of my sequence method from before, rewritten as a class using special class methods (see the Python data model docs: http://docs.python.org/reference/datamodel.html#specialnames):

import glob
import os
import re
 
class imSequence:
	def __init__(self, file):
		dir = os.path.dirname(file)
		file = os.path.basename(file)
		#grab the frame number (the last run of digits in the file name)
		segNum = re.findall(r'\d+', file)[-1]
		self.numPad = len(segNum)
		self.baseName = file.split(segNum)[0]
		self.fileType = file.split('.')[-1]
		#build a glob pattern with one '?' per digit of padding
		globString = self.baseName
		for i in range(0,self.numPad): globString += '?'
		self.images = glob.glob(dir+'\\'+globString+file.split(segNum)[1])
 
	def __len__(self):
		return len(self.images)
 
	def __iter__(self):
		return iter(self.images)

Here’s an example of use:

seq = imSequence('seq\\test_00087.tga')
print len(seq)
>>94
print 'BaseName: %s  FileType: %s  Padding: %s' % (seq.baseName, seq.fileType, seq.numPad)
>>BaseName: test_  FileType: tga  Padding: 5
for image in seq: print image
>>seq\test_00000.tga
>>seq\test_00001.tga
>>seq\test_00002.tga
...

[More info and examples: Dive Into Python: Special Class Methods]

posted by admin at 10:16 PM  

Monday, May 3, 2010

Nikon D300 Stereo Rig [$30 DIY]

This is what the final product will look like. Two D300s, mounted as close as possible, with sync’d metering, focus, flash, and shutter. Rig cost: less than 30 dollars! Of course you are going to need two D300s and paired lenses, primes or zooms, with a wide rubber band spanning them if you are really hardcore. Keep in mind, the interocular is 13.5cm; this is a tad more than double normal human eye spacing, but it’s the best we can do with the D300 [horizontal].

Creating the Camera Bar

This mainly involves going to the local hardware store with your D300 and looking at the different L-brackets available. It’s really critical that you get the cameras as close as possible, so mounting one upside down is preferable. It may look weird, but heck, it already looks weird; might as well go all the way.

I usually get an extra part for the buttons, because they will need to be somewhere that you can easily reach.

Creating the Cabling

Nikon cables with Nikon 10-pin connectors aren’t cheap! The MC-22, MC-23, MC-25, and MC-30 are all over 60 dollars! I bought remote shutter cables at DealExtreme.com. I wanted to make my own switch, be able to use my GPS, and change the interocular, so the setup below describes that. If you just want to sync two identical cams, the fastest way is to buy a knock-off MC-23, which is the JJ MA-23 or JJ MR-23. I bought two JueYing RS-N1 remote shutters and cut them up. [$6 each]

I only labeled the pins most people would be interested in; for a more in-depth pin-out that covers more than AE/AF and shutter (GPS, etc.), have a look here. I decided to use Molex connectors from RC cars; they make some good ones that are sealed/water-resistant and not too expensive.

So the cables have a pretty short lead. This is so that I can connect them as single or double, with an interocular as wide or as short as any cable I make. The next thing is to wire these to a set of AF/AE and shutter buttons.

Black focuses/meters and red is the shutter release. It’s not easy to find buttons that have two press states: half press and full press. As you can see above, shutter release is the combination of AF/AE, ground, and shutter. This is before the heat shrink is set in place.

Altogether

So that should be it. Here’s my first photo with sync’d metering, focus, flash, and shutter. They can even do bursts at high speed. Next post I will try to look into the software side, and take a look at lens distortion, vignetting, and other issues.

posted by admin at 12:27 AM  