Stumbling Toward 'Awesomeness'

A Technical Art Blog

Wednesday, December 17, 2008

Kavan et al Have Done It!

Ladislav Kavan is presenting a paper entitled ‘Automatic Linearization of Nonlinear Skinning‘, on skinning arbitrary deformations, at the 2009 Symposium on Interactive 3D Graphics and Games! Run over to his site and check it out. In my opinion, this is a holy grail of sorts. You rig any way you want, and have complex deformation that can only solve at one frame an hour? No problem: bake a range of motion to pose-driven, procedurally placed, animated, and weighted joints. People, Kavan included, have presented papers on systems somewhat like this in the past, but nothing this polished and final. I have talked to him about this work before, and it’s great to see what he’s been working on; it really is all I had hoped for!

This will change things.

posted by Chris at 12:16 PM  

Wednesday, September 17, 2008

Visualizing MRI Data in 3dsMax

Many of you might remember the fluoroscopic shoulder carriage videos I posted on my site about 4 years ago. I always wanted to do a sequence of MRIs of the arm moving around. Thanks to Helena, an MRI tech I met through someone, I did just that: I was able to get ~30 minutes of idle time on the machine while on vacation.

The data that I got was basically image data: slices along an axis. I wanted to visualize this data in 3D, but the hospital did not have software to do this. I really wanted to see the muscles and bones posed in three-dimensional space as the arm went through different positions, so I decided to write some visualization tools myself in maxscript.

At left is a 512×512 MRI of my shoulder with the arm raised (image downsampled to 256, animation on 5’s). The MRI data has some ‘wrap around’ artifacts because it was a somewhat small MRI machine (3 Tesla) and I am a big guy; when things are close to the ‘wall’ they get these artifacts, and we wanted to see my arm. I am uploading the raw data for you to play with; you can download it here: [data01] [data02]

Volumetric Pixels

Above is an example of a 128×128, 10-slice reconstruction with greyscale cubes.

I wrote a simple tool called ‘mriView’. I will explain how I created it below and you can download it and follow along if you want. [mriView]

The first thing I wanted to do was create ‘volumetric pixels’ or ‘voxels’ using the data. I decided to do this by going through all the images, culling what I didn’t want, and creating grayscale cubes out of the rest. There is a great example in the maxscript docs called ‘How To … Access the Z-Depth channel’ from which I picked some pieces; it basically shows you how to efficiently read an image and generate 3D data from it.

But first we need to get the data into 3dsMax. I needed to load sequential images, and I decided the easiest way to do this was to load AVI files. Here is an example of loading an AVI file and treating it like a multi-part image (with comments):

on loadVideoBTN pressed do
     (
          --ask the user for an avi
          f = getOpenFileName caption:"Open An MRI Slice File:" filename:"c:/" types:"AVI(*.avi)|*.avi|MOV(*.mov)|*.mov|All|*.*|"
          mapLoc = f
          if f == undefined then (return undefined)
          else
          (
               map = openBitMap f
               --get the width and height of the video
               heightEDT2.text = map.height as string
               widthEDT2.text = map.width as string
               --get how many frames the video has
               vidLBL.text = (map.numFrames as string + " slices loaded.")
               loadVideoBTN.text = getfilenamefile f
               imageLBL.text = ("Full Image Yield: " + (map.height*map.width) as string + " voxels")
               slicesEDT2.text = map.numFrames as string
               threshEDT.text = "90"
          )
          updateImgProps()
     )

We now have the height in pixels, the width in pixels, and the number of slices. This is enough data to begin a simple reconstruction.

We will do so by visualizing the data with cubes: one cube per pixel that we want to display. Be careful, however; a simple 256×256 video is already potentially 65,536 cubes per slice! In the tool, you can see that I put in the original image values but allow the user to crop out a specific area.

Below we go through each slice, then row by row, checking pixel by pixel for ones with a gray value above a threshold (what we want to see); when we find one, we make a box in 3D space:

height = 0.0
updateImgProps()
 
--this loop iterates through all slices (frames of video)
for frame = (slicesEDT1.text as integer) to (slicesEDT2.text as integer) do
(
     --seek to the frame of video that corresponds to the current slice
     map.frame = frame
     --loop that traverses y, which corresponds to the image height
     for y = mapHeight1 to mapHeight2 do
     (
          voxels = #()
          currentSlicePROG.value = (100.0 * y / totalHeight)
          --read a line of pixels
          pixels = getPixels map [0,y-1] totalWidth
          --loop that traverses x, the line of pixels across the width
          for x = 1 to totalWidth do
          (
               if (greyscale pixels[x]) < threshold then
               (
                    --if you are not a color we want to store: do nothing
               )
               --if you are a color we want, we will make a cube with your color in 3d space
               else
               (
                    b = box width:1 length:1 height:1 name:(uniqueName "voxel_")
                    b.pos = [x,-y,height]
                    b.wirecolor = color (greyscale pixels[x]) (greyscale pixels[x]) (greyscale pixels[x])
                    append voxels b
               )
          )
          --garbage collection is important on large datasets
          gc()
     )
     --increment the height to bump your cubes to the next slice
     height+=1
     progLBL.text = ("Slice " + (height as integer) as string + "/" + (totalSlices as integer) as string + " completed")
     slicePROG.value = (100.0 * (height/totalSlices))
)

Things really start to choke when you are using cubes, mainly because you are generating so many entities in the world. I added the option to merge all the cubes row by row, which sped things up and helped memory, but this still was not the visual fidelity I was hoping for…
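
For reference, here is a minimal sketch of that row-merge step, assuming the per-row ‘voxels’ array from the loop above; it collapses each row of boxes into a single Editable Poly so the scene holds one node per row instead of one per pixel:

--hedged sketch: collapse one row of voxel boxes into a single Editable Poly
--'voxels' is the per-row array built in the loop above
if voxels.count > 1 then
(
     rowMesh = convertToPoly voxels[1]
     rowMesh.name = uniqueName "voxelRow_"
     --polyop.attach merges each box into the row mesh and deletes the source box
     for i = 2 to voxels.count do polyop.attach rowMesh voxels[i]
)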

Point Clouds and ‘MetaBalls’

I primarily wanted to generate meshes from the data, so the next thing I tried was making a point cloud, then using that to generate a ‘BlobMesh’ (metaball) compound geometry type. In the example above, you see the head of my humerus and the tissue connected to it. Below is the code; it is almost simpler than the boxes, it just takes some finessing of Edit Poly. I have only commented the changes.

I make a plane and then delete all the verts to give me a ‘clean canvas’ of sorts; if anyone knows a better way of doing this, let me know:

p = convertToPoly(Plane lengthsegs:1 widthsegs:1)
p.name = "VoxelPoint_dataSet"
polyop.deleteVerts $VoxelPoint_dataSet #(1,2,3,4)

And where we created a box before, we now create a point:

polyop.createVert $VoxelPoint_dataSet [x,-y,height]

This can get really time and resource intensive. As a result, I would let some of these go overnight. This was pretty frustrating, because it slowed the iteration time down a lot, and the BlobMesh itself was very slow as well.

Faking Volume with Transparent Planes


I was talking to Marco at work (a Technical Director) and showing him some of my results, and he asked me why I didn’t just try using transparent slices. I told him I had thought about it, but I really know nothing about the material system in 3dsMax, much less its maxscript exposure. He said that was a good reason to try it, and I agreed.

I started by making one material per slice. This worked well, but then I realized that 3dsMax has a limit of 24 material editor slots. Instead of fixing this, they have added ‘multi-materials’, which can have n sub-materials. So I adjusted my script to use sub-materials:

--here we set the number of sub-materials to the number of slices
meditMaterials[matNum].materialList.count = totalSlices
--you also have to properly set the materialIDList
for m=1 to meditMaterials[matNum].materialList.count do
(
     meditMaterials[matNum].materialIDList[m] = m
)

Now we iterate through, generating the planes, assigning sub-materials to them with the correct frame of video for the corresponding slice:

p = plane name:("slice_" + frame as string) pos:[0,0,frame] width:totalWidth length:totalHeight
p.lengthsegs = 1
p.widthsegs = 1
p.material = meditMaterials[matNum][frame]
p.castShadows = off
p.receiveshadows = off
meditMaterials[matNum].materialList[frame].twoSided = on
meditMaterials[matNum].materialList[frame].selfIllumAmount = 100
meditMaterials[matNum].materialList[frame].diffuseMapEnable = on
newMap = meditMaterials[matNum].materialList[frame].diffuseMap = Bitmaptexture filename:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
newmap = meditMaterials[matNum].materialList[frame].opacityMap = Bitmaptexture fileName:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
showTextureMap p.material on
mat += 1

This was very surprising: it not only runs fast, it looks great. Of course you are generating no geometry, but it is a great way to visualize the data. The example below is a 512×512 MRI of my shoulder (arm raised) rendered in realtime. The only problem I had was an alpha-test render error when viewed directly from the bottom, but this looks to be a 3dsMax issue.


I rendered the slices cycling from bottom to top. In one MRI the arm is raised, in the other, the arm lowered. The results are surprisingly decent. You can check that video out here. [shoulder_carriage_mri_xvid.avi]


You can also layer multiple sets of slices together; above, I have isolated the muscles and soft tissue from the skin, cartilage, and bones by looking for pixels in certain luminance ranges. In the image I am ‘slicing’ away the white layer halfway down the torso; below you can see a video of this in realtime as I search for the humerus. This is a really fun & interesting way to view it:

Where to Go From Here

I can now easily load up any of the MRI data I have and view it in 3D, though I would like to be able to better create meshes from specific parts of the data, in order to isolate muscles or bones. To do this I need to allow the user to ‘pick’ a color from part of the image, and then use this to isolate just those pixels and remesh just that part. I would also like to add something that allows you to slice through the planes from any axis. That shouldn’t be difficult, it will just take more time.
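
As a rough sketch of that direction (not from the original tool), the per-pixel threshold test from earlier could be swapped for a luminance window around a user-picked color; ‘pickedGrey’ and ‘tolerance’ are hypothetical values coming from whatever color-picker UI gets added:

--hedged sketch: keep only pixels whose luminance is near a picked reference value
--'pickedGrey' and 'tolerance' are hypothetical, supplied by a color-picker UI
lum = greyscale pixels[x]
if lum >= (pickedGrey - tolerance) and lum <= (pickedGrey + tolerance) then
(
     --same point creation as before, just restricted to the picked range
     polyop.createVert $VoxelPoint_dataSet [x,-y,height]
)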

posted by Chris at 3:48 PM  

Tuesday, July 29, 2008

3dsMax 2008 Node Event System: Does Not Exist

After writing a bit of code to leverage the new Node Event System, and then not being able to get it to work properly, I found a post by someone from Autodesk saying that it is not present in Max 2008, even though it is in the documentation. This is somewhat frustrating; I hope this post saves you time and frustration.

posted by Chris at 2:30 PM  

Monday, July 28, 2008

Gleaning Data from the 3dsMax ‘Reaction Manager’

This is something we had been discussing over at CGTalk: we couldn’t find a way to figure out Reaction Manager links through maxscript. It just is not exposed. Reaction Manager is like Set Driven Keys in Maya or a Relation Constraint in MotionBuilder. In order to sync rigging components between the packages, you need to be able to query these driven relationships.

I set about doing this by checking dependencies, and it turns out it is possible. It’s a headache, but it is possible!

The problem is that even though slave nodes have controllers with names like “Float_Reactor”, the master nodes have nothing that distinguishes them. I saw that if I got the dependents of a master node (its controllers, specifically the one that drives the slave), there was something called ‘ReferenceTarget:Reaction_Master‘:

refs.dependents $.position.controller
#(Controller:Position_Rotation_Scale, ReferenceTarget:Reaction_Master, Controller:Position_Reaction, ReferenceTarget:Reaction_Set, ReferenceTarget:Reaction_Manager, ReferenceTarget:ReferenceTarget, ReferenceTarget:Scene, Controller:Position_Rotation_Scale, $Box:box02 @ [58.426544,76.195091,0.000000], $Box:box01 @ [-42.007244,70.495964,0.000000], ReferenceTarget:NodeSelection, ReferenceTarget:ReferenceTarget, ReferenceTarget:ReferenceTarget)

This is actually a class, as you can see below:

exprForMAXObject (refs.dependents $.position.controller)[2]
"<<Reaction Master instance>>"
 
getclassname (refs.dependents $.position.controller)[2]
"Reaction Master"

So now we get the dependents of this ‘Reaction Master’, and it gives us the node that it is driving:

refs.dependentNodes (refs.dependents $.position.controller)[2]
#($Box:box02 @ [58.426544,76.195091,0.000000])

So here is a fn that gets Master information from a node:

fn getAllReactionMasterRefs obj =
(
	local nodeRef
	local ctrlRef
	for n = 1 to obj.numSubs do
	(
		ctrl = obj[n].controller
		if (ctrl!=undefined) then
		(
			for item in (refs.dependents ctrl) do
			(
				if item as string == "ReferenceTarget:Reaction_Master" then
				(
					nodeRef = (refs.dependentNodes item)
					ctrlRef = ctrl
				)
			)
			getAllReactionMasterRefs obj[n]
		)
	)
	return #(nodeRef, ctrlRef)
)

Running the function above on the master node returns:

getAllReactionMasterRefs $
#(#($Box:box02 @ [58.426544,76.195091,0.000000]), Controller:Position_Rotation_Scale)

The first item is an array of the referenced (slave) nodes, and the second is the controller on the master that is driving *some* aspect of those nodes.

You now loop through the slave node’s controllers looking for ‘Float_Reactor‘, ‘Point3_Reactor‘, etc., and then query them as stated in the manual (‘getReactionInfluence‘, ‘getReactionFalloff‘, etc.) to figure out the relationship.
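
The ‘reactorDumper‘ helper called below isn’t included in this post; a minimal sketch of it, assuming the reaction controller functions documented in the manual (getReactionCount, getReactionName, and friends), could look like this:

--hedged sketch of reactorDumper: prints the reaction data of a single reaction controller
fn reactorDumper ctrl =
(
	format "[%]\n" (ctrl as string)
	format "ReactionCount - %\n" (getReactionCount ctrl)
	for i = 1 to (getReactionCount ctrl) do
	(
		format "ReactionName - %\n" (getReactionName ctrl i)
		format "    ReactionFalloff - %\n" (getReactionFalloff ctrl i)
		format "    ReactionInfluence - %\n" (getReactionInfluence ctrl i)
		format "    ReactionStrength - %\n" (getReactionStrength ctrl i)
		format "    ReactionState - %\n" (getReactionState ctrl i)
		format "    ReactionValue - %\n" (getReactionValue ctrl i)
	)
)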

Here is an example function that prints out all reaction data for a slave node:

fn getAllReactionControllers obj =
(
	local list = #()
	for n = 1 to obj.numSubs do
	(
		ctrl = obj[n].controller
		if (ctrl!=undefined) then
		(
			--print (classof ctrl)
			if (classof ctrl) == Float_Reactor \
			or (classof ctrl) == Point3_Reactor \
			or (classof ctrl) == Position_Reactor \
			or (classof ctrl) == Rotation_Reactor \
			or (classof ctrl) == Scale_Reactor then
			(
				reactorDumper obj[n].controller
			)
		)
		getAllReactionControllers obj[n]
	)
)

Here is the output from ‘getAllReactionControllers $Box2‘:

[Controller:Position_Reaction]
ReactionCount - 2
ReactionName - My Reaction
    ReactionFalloff - 1.0
    ReactionInfluence - 100.0
    ReactionStrength - 1.2
    ReactionState - [51.3844,-17.2801,0]
    ReactionValue - [-40.5492,-20,0]
ReactionName - State02
    ReactionFalloff - 2.0
    ReactionInfluence - 108.665
    ReactionStrength - 1.0
    ReactionState - [65.8385,174.579,0]
    ReactionValue - [-48.2522,167.132,0]

Conclusion
So, once again, no free lunch here. You can loop through the scene looking for Masters, then derive the slave nodes, then dump their info. It shouldn’t be too difficult, as you can only have one Master, but if you have multiple reaction controllers in each node affecting the other, it could be a mess. I threw this together in a few minutes just to see if it was possible, not to hand out a polished, working implementation.
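
In that same spirit, here is a rough, hedged sketch of how the pieces could fit together: walk the scene, find Masters with the function above, and dump the reaction data of the slaves they drive:

--hedged sketch: find Reaction Masters in the scene, then dump the data of the slaves they drive
for obj in objects do
(
	masterInfo = getAllReactionMasterRefs obj
	--masterInfo[1] is an array of driven (slave) nodes, masterInfo[2] the driving controller
	if masterInfo[1] != undefined then
	(
		format "% drives %\n" obj.name masterInfo[1]
		for slave in masterInfo[1] do getAllReactionControllers slave
	)
)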

posted by Chris at 4:42 PM  

Saturday, June 21, 2008

Facial Stabilization in MotionBuilder using Python

Facial motion capture stabilization is basically where you isolate the movement of the face from the movement of the head. This sounds pretty simple, but it is actually a really difficult problem. In this post I will talk about the general process and give you an example facial stabilization python script.

Disclaimer: The script I have written here is loosely adapted from a MEL script in the book Mocap for Artists, and not something proprietary to Crytek. This is a great book for people of all experience levels, and has a chapter dedicated to facial mocap. Lastly, this script is not padded out or optimized.

To follow this you will need some facial mocap data; there is some freely downloadable at www.mocap.lt. Grab the FBX file.

Andy Serkis - Weta head stabilization halo

Stabilization markers

Get at least 3 markers on the actor that do not move when they move their face. These are called ’stabilization markers’ (STAB markers). You will use these markers to create a coordinate space for the head, so it is important that they not move. STAB markers are commonly found on the left and right temples and the nose bridge. Using a headband and creating virtual markers from multiple solid left/right markers works even better. Keep in mind that headbands move; above you see a special headrig used on Kong to create stable markers.

It is a good idea to write some tools to help you out here. At work I have written tools to parse a performance and tell me the most stable markers at any given time; if you have this data, you can also blend between them.

Load up the facial mocap file you have downloaded, it should look something like this:

In the data we have, you can delete the root and the headband markers; 1-RTMPL, 1-LTMPL, and 1-MNOSE could all be considered STAB markers.

General Pipeline

As you can see, mocap data is just a bunch of translating points. So what we want to do is create a new coordinate system that has the motion of the head, and then use this to isolate the facial movement.

This will take some processing, and also an interactive user interface. You may have seen my tutorial on Creating Interactive MotionBuilder User Interface Tools. You should familiarize yourself with that because this will build on it. Below is the basic idea:

You create a library ‘myLib’ that you load into MotionBuilder’s python environment. This is what does the heavy lifting; I say this because you don’t want to do things like send the position of every marker, every frame, to your external app via telnet. I also load pyEuclid, a great vector library, because I didn’t feel like writing my own vector class. (MBuilder has no vector class.)

Creating ‘myLib’

So we will now create our own library that sits inside MBuilder; this will essentially be a ‘toolkit’ that we communicate with from the outside. Your ‘myLib’ can be called anything, but it should be the place you store the functions that do the real processing jobs; you will feed data into them from the outside UI later. The first thing you will need inside the MB python environment is something to cast FBVector3d types into pyEuclid. This is fairly simple:

#myLib.py -- helper functions loaded inside MotionBuilder's python environment
from pyfbsdk import *
from euclid import *
 
#casts an FBVector3d (or anything indexable as [x,y,z]) to a pyEuclid Vector3
def vec3(point3):
	return Vector3(point3[0], point3[1], point3[2])
 
#casts a pyEuclid vector to FBVector3d
def fbv(point3):
	return FBVector3d(point3.x, point3.y, point3.z)

Next is something that will return a list of models when given an array of model names; this is important later, when we want to feed in model lists from our external app:

#returns an array of models when given an array of model names
#useful with external apps/telnetlib ui
def modelsFromStrings(modelNames):
	output = []
	for name in modelNames:
		output.append(FBFindModelByName(name))
	return output

Now, if you were to take these snippets and save them as a file called myLib.py in your MBuilder directory tree (MotionBuilder75 Ext2\bin\x64\python\lib), you can load them into the MBuilder environment. (You should have also placed pyEuclid here)

Casting FBVectors to pyEuclid

It’s always good to mock up code over telnet because, unlike the python console in MBuilder, it supports copy/paste, etc.

In the image above, I get the position of a model in MBuilder, which returns as an FBVector3d. I then import myLib and pyEuclid and use our function above to ‘cast’ the FBVector3d to a pyEuclid vector. It can now be added, subtracted, multiplied, and more; all things that are not possible with the default MBuilder python tools. Our other function, ‘fbv()‘, casts pyEuclid vectors back to FBVector3d so that MBuilder can read them.
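
For reference, here is a minimal sketch of that round trip, typed into the telnet session or the MB python console; the model name ‘Cube’ is just a placeholder for any model in your scene:

# hedged sketch of the cast described above; 'Cube' is a placeholder model name
from pyfbsdk import *
from euclid import *
from myLib import vec3, fbv
 
m = FBFindModelByName('Cube')
p = vec3(m.Translation)            # FBVector3d -> pyEuclid Vector3
p = p + Vector3(0.0, 10.0, 0.0)    # vector math now works
m.Translation = fbv(p)             # cast back so MBuilder can read it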

So we can now do vector math in MotionBuilder! Next we will add some code to our ‘myLib’ that stabilizes the face.

Adding Stabilization-Specific Code to ‘myLib’

One thing we will need to do a lot is generate ‘virtual markers’ from the existing markers. To do this, we need a function that returns the average position of however many vectors (marker positions) it is fed.

#returns average position of an FBModelList as FBVector3d
def avgPos(models):
	mLen = len(models)
	if mLen == 1:
		return models[0].Translation
	total = vec3(models[0].Translation)
	for i in range (1, mLen):
		total += vec3(models[i].Translation)
	avgTranslation = total/mLen
	return fbv(avgTranslation)

Here is an example of avgPos() in use:
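(A minimal sketch, assuming the example file is loaded and myLib is importable; the marker names follow the terminal output later in this post and may differ in your data.)

# hedged example: average three STAB markers from the example data
from pyfbsdk import *
from myLib import modelsFromStrings, avgPos
 
stabMarkers = modelsFromStrings(['Subject 1-RTMPL', 'Subject 1-LTMPL', 'Subject 1-MNOSE'])
print avgPos(stabMarkers)    # an FBVector3d sitting at the average of the three markers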

Now onto the stabilization code:

#stabilizes face markers; input: right, left, center and markers FBModelLists, plus leaveOrig for keeping the original markers
def stab(right,left,center,markers,leaveOrig):
 
	pMatrix = FBMatrix()
	lSystem=FBSystem()
	lScene = lSystem.Scene
	newMarkers = []
 
	def faceOrient():
		lScene.Evaluate()
 
		Rpos = vec3(avgPos(right))
		Lpos = vec3(avgPos(left))
		Cpos = vec3(avgPos(center))
 
		#build the coordinate system of the head
		faceAttach.GetMatrix(pMatrix)
		xVec = (Cpos - Rpos)
		xVec = xVec.normalize()
		zVec = ((Cpos - vec3(faceAttach.Translation)).normalize()).cross(xVec)
		zVec = zVec.normalize()
		yVec = xVec.cross(zVec)
		yVec = yVec.normalize()
		facePos = (Rpos + Lpos)/2
 
		pMatrix[0] = xVec.x
		pMatrix[1] = xVec.y
		pMatrix[2] = xVec.z
 
		pMatrix[4] = yVec.x
		pMatrix[5] = yVec.y
		pMatrix[6] = yVec.z
 
		pMatrix[8] = zVec.x
		pMatrix[9] = zVec.y
		pMatrix[10] = zVec.z
 
		pMatrix[12] = facePos.x
		pMatrix[13] = facePos.y
		pMatrix[14] = facePos.z
 
		faceAttach.SetMatrix(pMatrix,FBModelTransformationMatrix.kModelTransformation,True)
		lScene.Evaluate()
 
	#keys the translation and rotation of an animNodeList
	def keyTransRot(animNodeList):
		for lNode in animNodeList:
			if (lNode.Name == 'Lcl Translation'):
				lNode.KeyCandidate()
			if (lNode.Name == 'Lcl Rotation'):
				lNode.KeyCandidate()
 
	Rpos = vec3(avgPos(right))
	Lpos = vec3(avgPos(left))
	Cpos = vec3(avgPos(center))
 
	#create a null that will visualize the head coordsys, then position and orient it
	faceAttach = FBModelNull("faceAttach")
	faceAttach.Show = True
	faceAttach.Translation = fbv((Rpos + Lpos)/2)
	faceOrient()
 
	#create new set of stabilized nulls, non-destructive, this should be tied to 'leaveOrig' later
	for obj in markers:
		new = FBModelNull(obj.Name + '_stab')
		newTran = vec3(obj.Translation)
		new.Translation = fbv(newTran)
		new.Show = True
		new.Size = 20
		new.Parent = faceAttach
		newMarkers.append(new)
 
	lPlayerControl = FBPlayerControl()
	lPlayerControl.GotoStart()
	FStart = int(lPlayerControl.ZoomWindowStart.GetFrame(True))
	FStop = int(lPlayerControl.ZoomWindowStop.GetFrame(True))
 
	animNodes = faceAttach.AnimationNode.Nodes
 
	for frame in range(FStart,FStop):
 
		#build proper head coordsys
		faceOrient()
 
		#update stabilized markers and key them
		for m in range (0,len(newMarkers)):
			markerAnimNodes = newMarkers[m].AnimationNode.Nodes
			newMarkers[m].SetVector(markers[m].Translation.Data)
			lScene.Evaluate()
			keyTransRot(markerAnimNodes)
 
		keyTransRot(animNodes)
 
		lPlayerControl.StepForward()

We feed our ‘stab‘ function FBModelLists of the right, left, and center stabilization markers, and it creates virtual markers from these groups. Then ‘markers’ is all the markers to be stabilized. ‘leaveOrig‘ is an option I usually add that allows for non-destructive use; in this example I have just made the fn leave the originals, as I favor this, so the option does nothing, but you could add it. With the original markers left, you can immediately see if there was an error in your script (the new motion should match the original).

Creating an External UI that Uses ‘myLib’

Earlier I mentioned Creating Interactive MotionBuilder User Interface Tools, where I explain how to screenscrape/use the telnet Python Remote Server to create an interactive external UI that floats as a window in MotionBuilder itself. I also use the libraries mentioned in the above article.

The code for the facial stabilization UI I have created is here: [stab_ui.py]

I will now step through code snippets pertaining to our facial STAB tool:

def getSelection():
	selectedItems = []
	mbPipe("selectedModels = FBModelList()")
	mbPipe("FBGetSelectedModels(selectedModels,None,True)")
	for item in (mbPipe("for item in selectedModels: print item.Name")):
		selectedItems.append(item)
	return selectedItems

This returns a list of strings that are the currently selected models in MBuilder. This is the main thing that our external UI does. The user needs to interactively choose the right, left, and center markers, then all the markers that will be stabilized.

At the left here you see what the UI looks like. To add some feedback to the buttons, you can make them change to reflect that the user has selected markers. We do so by changing the button text.

Example:

def rStabClick(self,event):
	self.rStabMarkers = getSelection()
	print str(self.rStabMarkers)
	self.rStab.Label = (str(len(self.rStabMarkers)) + " Right Markers")

This also stores the markers the user has chosen in the variable ‘rStabMarkers‘. Once we have all the marker sets, we need to send them to ‘myLib‘ in MBuilder so that it can run our ‘stab‘ function on them. This happens when they click ‘Stabilize Markerset‘.

def stabilizeClick(self,event):
	mbPipe('from euclid import *')
	mbPipe('from myLib import *')
	mbPipe('rStab = modelsFromStrings(' + str(self.rStabMarkers) + ')')
	mbPipe('lStab = modelsFromStrings(' + str(self.lStabMarkers) + ')')
	mbPipe('cStab = modelsFromStrings(' + str(self.cStabMarkers) + ')')
	mbPipe('markerset = modelsFromStrings(' + str(self.mSetMarkers) + ')')
	mbPipe('stab(rStab,lStab,cStab,markerset,False)')

Above we now use ‘modelsFromStrings‘ to feed ‘myLib’ the names of selected models. When you run this on thousands of frames, it will actually hang for up to a minute or two while it does all the processing. I discuss optimizations below. Here is a video of what you should have when stabilization is complete:


Kill the keyframes on the root (faceAttach) to remove head motion

Conclusion: Debugging/Optimization

Remember: Your stabilization will only be as good as your STAB markers. It really pays off to create tools to check marker stability.

Sometimes the terminal/screen scraping runs into issues. The mbPipe function can be padded out a lot and made more robust; what is here was just an example. If you look at the external python console, you can see exactly what mbPipe is sending to MBuilder, and what it is receiving back through the terminal:

Sending>>> selectedModels = FBModelList()
Sending>>> FBGetSelectedModels(selectedModels,None,True)
Sending>>> for item in selectedModels: print item.Name
['Subject 1-RH1', 'Subject 1-RTMPL']

All of the above can be padded out and optimized. For instance, you could try to do everything without a single lPlayerControl.StepForward() or lScene.Evaluate(), but this takes a lot of MotionBuilder/programming know-how; it involves only using the keyframe data to generate your matrices, positions, etc., and never querying a model.
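
As a rough illustration of that idea (a sketch, not part of the tool above), you could pull translation keys straight off a marker’s FCurves instead of stepping the player control; the marker name below is a placeholder from the example data:

# hedged sketch: read keyed translation data straight off the FCurves
# 'Subject 1-RH1' is a placeholder marker name from the example data
from pyfbsdk import *
 
marker = FBFindModelByName('Subject 1-RH1')
transNode = marker.Translation.GetAnimationNode()
for axis, axisNode in zip('XYZ', transNode.Nodes):
	for key in axisNode.FCurve.Keys:
		print axis, key.Time.GetFrame(True), key.Value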

posted by Chris at 10:10 PM  

Tuesday, June 10, 2008

Poor Man’s Mocap

This year my friend Judd gave a talk entitled Uncharted Animation: An In-depth Look at the Character Animation Workflow and Pipeline. In the talk, he showed what they call ‘poor man’s mocap’, where the animator can load up a sequence of frames and they are sync’d with the timeline in Maya. So as someone scrubs an animation, it scrubs the frames of the video. I have duplicated this in a small maxscript available in cryTools. You can grab it in the [Tutorials/Files] section.

posted by Chris at 2:24 PM  

Wednesday, May 14, 2008

FMX 2008 Slides Up

At FMX I gave a talk with my friend/coworker Mathias entitled Crysis Management: Maxscript Tools Development at Crytek. The first part of this lecture is an introductory course on maxscript, showing many tips and tricks for beginners; the rest serves as an overview of maxscript tools development at Crytek, focusing on the animation tools. I am very thankful that we were given the ability to do this, because I feel there are no real maxscript ‘cookbooks’ out there. [Tutorials/Slides Section]

posted by Chris at 11:23 AM  

Monday, May 5, 2008

GeSHi Maxscript Syntax Highlighting

I have written a GeSHi syntax file for maxscript. This allows anything that uses GeSHi, like WP-Syntax, to properly highlight maxscript syntax. As an example, below is the maxscript version of the MBuilder code that prints out a position every frame.

fn timeSink obj =
(
	for i = animationrange.start to animationrange.end do
	(
		at time i (print obj.position)
	)
)
 
start = timeStamp()
 
timeSink (selection as array)[1]
 
end = timeStamp()
format "Processing took % seconds\n" ((end - start) / 1000.0)

You can grab my syntax file [here]

posted by Chris at 3:17 AM  
