I have been parsing through the files of other people a lot lately, and finally took the time to make a little function to give me general information about a sequence of files. It uses a regex to yank the numeric part out of a filename and figure out the padding, and glob to tell you how many files are in the sequence. Here’s the code and an example usage:
import os
import re
import glob

#returns [base name, padding, filetype, number of files, first file, last file]
def getSeqInfo(file):
    dir = os.path.dirname(file)
    file = os.path.basename(file)
    #grab the last run of digits in the filename (the frame number)
    segNum = re.findall(r'\d+', file)[-1]
    numPad = len(segNum)
    baseName = file.split(segNum)[0]
    fileType = file.split('.')[-1]
    #build a glob pattern with one '?' per digit of padding
    globString = baseName
    for i in range(0, numPad):
        globString += '?'
    theGlob = glob.glob(dir + '\\' + globString + file.split(segNum)[1])
    numFrames = len(theGlob)
    firstFrame = theGlob[0]
    lastFrame = theGlob[-1]
    return [baseName, numPad, fileType, numFrames, firstFrame, lastFrame]
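Calling it on one frame of a sequence looks something like this (the path and frame count here are made up for illustration, and the result assumes glob returns the files in alphabetical order):

print getSeqInfo('c:/renders/beauty/shot_0047.tga')

#hypothetical output if 120 frames were on disk:
#['shot_', 4, 'tga', 120, 'c:/renders/beauty\\shot_0001.tga', 'c:/renders/beauty\\shot_0120.tga']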
I know this is pretty simple, but I looked around a bit online and didn’t see anything readily available showing how to deal with differently numbered file sets. I have needed something like this for a while that will work with anything from OBJs sent by external contractors to image sequences from After Effects…
So I have always wondered how you can create something almost like a ‘droplet’, to steal the Photoshop lingo, from a python script. A while ago I came across some sites showing how to edit the shellex keys in the registry to allow files to be dropped onto any python script and fed to it as args (Windows).
It’s really simple: grab this reg file [py_drag_n_drop.reg] and install it.
Now when you drop files onto a python script, their filenames will be passed in as args. Here’s a simple script to test it:
import sys

f = open('c:\\tmp.txt', 'w')
for arg in sys.argv:
    f.write(arg + '\n')
f.close()
When you save this, and drop files onto its icon, it will create tmp.txt, which will look like this:
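(The paths below are made up for illustration; yours will obviously differ.)

C:\scripts\drop_test.py
C:\incoming\mesh_a.obj
C:\incoming\mesh_b.obj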
The script itself is the first arg, then all the files. This way you can easily create scripts that accept drops to do things like convert files, upload files, etc.
I have really been amazing myself with how much knowledge I have forgotten in the past five or six months… Most of the work I did in the past year utilized the UIC module to load UI files directly, but I can find very little information about this online. I was surprised to see that even the trusty old Rapid GUI Programming with Python and Qt book doesn’t cover loading UI files with the UIC module.
So, here is a tiny script with UI file [download] that will generate a pyqt example window that does ‘stuff’:
import sys
from PyQt4 import QtGui, QtCore, uic

class TestApp(QtGui.QMainWindow):
    def __init__(self):
        QtGui.QMainWindow.__init__(self)
        self.ui = uic.loadUi('X:/projects/2010/python/pyqt_tutorial/pyqt_tutorial.ui')
        self.ui.show()
        self.connect(self.ui.doubleSpinBox, QtCore.SIGNAL("valueChanged(double)"), spinFn)
        self.connect(self.ui.comboBox, QtCore.SIGNAL("currentIndexChanged(QString)"), comboFn)
        self.connect(self.ui.pushButton, QtCore.SIGNAL("clicked()"), buttonFn)

def spinFn(value):
    win.ui.doubleSpinBoxLabel.setText('doubleSpinBox is set to ' + str(value))

def buttonFn():
    win.ui.setWindowTitle(win.ui.lineEdit.text())

def comboFn(value):
    win.ui.comboBoxLabel.setText(str(value) + ' is selected')

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    win = TestApp()
    sys.exit(app.exec_())
Change the path to reflect where you have saved the UI file, and when you run the script you should get this:
EDIT: A few people have asked me to update this for other situations:
PySide Inside Maya:
import sys
from PySide.QtUiTools import *
from PySide.QtCore import *
from PySide.QtGui import *

class TestApp(QMainWindow):
    def __init__(self):
        QMainWindow.__init__(self)
        loader = QUiLoader()
        self.ui = loader.load('c:/pyqt_tutorial.ui')
        self.ui.show()
        self.connect(self.ui.doubleSpinBox, SIGNAL("valueChanged(double)"), spinFn)
        self.connect(self.ui.comboBox, SIGNAL("currentIndexChanged(QString)"), comboFn)
        self.connect(self.ui.pushButton, SIGNAL("clicked()"), buttonFn)

def spinFn(value):
    win.ui.doubleSpinBoxLabel.setText('doubleSpinBox is set to ' + str(value))

def buttonFn():
    win.ui.setWindowTitle(win.ui.lineEdit.text())

def comboFn(value):
    win.ui.comboBoxLabel.setText(str(value) + ' is selected')

win = TestApp()
PyQT Inside Maya:
import sys
from PyQt4 import QtGui, QtCore, uic

class TestApp(QtGui.QMainWindow):
    def __init__(self):
        QtGui.QMainWindow.__init__(self)
        self.ui = uic.loadUi('c:/pyqt_tutorial.ui')
        self.ui.show()
        self.connect(self.ui.doubleSpinBox, QtCore.SIGNAL("valueChanged(double)"), spinFn)
        self.connect(self.ui.comboBox, QtCore.SIGNAL("currentIndexChanged(QString)"), comboFn)
        self.connect(self.ui.pushButton, QtCore.SIGNAL("clicked()"), buttonFn)

def spinFn(value):
    win.ui.doubleSpinBoxLabel.setText('doubleSpinBox is set to ' + str(value))

def buttonFn():
    win.ui.setWindowTitle(win.ui.lineEdit.text())

def comboFn(value):
    win.ui.comboBoxLabel.setText(str(value) + ' is selected')

win = TestApp()
The Vatican is not very open with its art; the reason they scream ‘NO PHOTO’ when you pull a camera out in the chapel is that they sold the ability to take photos of it to a Japanese TV station (Nippon TV) for 4.2 million dollars. Because the ceiling has long been in the public domain, the only way they can sell ‘the right to photograph’ the ceiling is by screwing over us tourists who visit. If you take a photo, they have no control over that image, because they don’t own the copyright of the work.
Many of you who know me know that I am a huge fan of Michelangelo’s work, so this data was just too awesomely tempting; when I saw it posted publicly online, I really wanted to get my hands on the original assets.
Here is a python script to grab all of the image tiles that the flash app reads, and then generate the 8k faces of the cubemap. In the end you will have a 32,000 pixel cubemap.
First we copy the swatches from the website:
def getSistineCubemap(saveLoc):
    import urllib
    #define the faces of the cubemap, using their own lettering scheme
    faces = ['f', 'b', 'u', 'd', 'l', 'r']
    #location of the images
    url = 'http://www.vatican.va/various/cappelle/sistina_vr/Sistine-Chapel.tiles/l3_'
    #copy all the swatches to your local drive
    for face in faces:
        for x in range(1, 9):
            for y in range(1, 9):
                file = (face + '_' + str(y) + '_' + str(x) + '.jpg')
                urllib.urlretrieve((url + face + '_' + str(y) + '_' + str(x) + '.jpg'), (saveLoc + file))
                urllib.urlcleanup()
                print "saved " + file
Many, many people are having weird, buggy camera issues where you rotate a view and it snaps back to the pre-tumbled state (the view does not update properly). There are posts all over, and Autodesk’s official response is “Consumer gaming videocards are not supported”. Really? That’s basically saying: all consumer video cards, gaming or not, are unsupported. I have had this issue on my laptop, which is surely not a ‘gaming’ machine. Autodesk says the ‘fix’ is to upgrade to an expensive pro-level video card. But what they maybe would tell you if they weren’t partnered with nVidia is: it’s an easy fix!
Find your Maya ENV file:
C:\Documents and Settings\Administrator\My Documents\maya\2009-x64\Maya.env
And add this environment variable to it:
MAYA_GEFORCE_SKIP_OVERLAY=1
Autodesk buried this information in their Maya 2009 Late Breaking Release Notes, and it fixes the issue completely! However, even on their official forum, Autodesk employees and moderators reply to these draw errors as follows:
Maya 2009 was tested with a finite number of graphics cards from ATI and Nvidia, with drivers from each vendor that provided the best performance, with the least amount of issues. (at the time of product launch). A list of officially qualified hardware can be found here: http://www.autodesk.com/maya-hardware. Maya is not qualified/supported on consumer gaming cards. Geforce card users can expect to have issues. This is clearly stated in the official qualification charts mentioned above.
Many of you might remember the fluoroscopic shoulder carriage videos I posted on my site about 4 years ago. I always wanted to do a sequence of MRIs of the arm moving around. Thanks to Helena, an MRI tech that I met through someone, I did just that: I was able to get ~30 mins of idle time on the machine while on vacation.
The data that I got was basically image data: slices along an axis. I wanted to visualize this data in 3D, but they did not have software to do this in the hospital. I really wanted to see the muscles and bones posed in three dimensional space as the arm went through different positions, so I decided to write some visualization tools myself in maxscript.
At left is a 512×512 MRI of my shoulder, arm raised (image downsampled to 256, animation on 5’s). The MRI data has some ‘wrap around’ artifacts because it was a somewhat small MRI (3 tesla) and I am a big guy; when things are close to the ‘wall’ they get these artifacts, and we wanted to see my arm. I am uploading the raw data for you to play with; you can download it from here: [data01] [data02]
Volumetric Pixels
Above is an example of 128×128 10 slice reconstruction with greyscale cubes.
I wrote a simple tool called ‘mriView’. I will explain how I created it below and you can download it and follow along if you want. [mriView]
The first thing I wanted to do was create ‘volumetric pixels’ or ‘voxels’ from the data. I decided to do this by going through all the images, culling what I didn’t want, and creating grayscale cubes out of the rest. There was a great example in the maxscript docs called ‘How To … Access the Z-Depth channel’ which I picked some pieces from; it basically shows you how to efficiently read an image and generate 3d data from it.
But we first need to get the data into 3dsMax. I needed to load sequential images, and I decided the easiest way to do this was load AVI files. Here is an example of loading an AVI file, and treating it like a multi-part image (with comments):
on loadVideoBTN pressed do
(
--ask the user for an avi
f = getOpenFileName caption:"Open An MRI Slice File:" filename:"c:/" types:"AVI(*.avi)|*.avi|MOV(*.mov)|*.mov|All|*.*|"
mapLoc = f
if f == undefined then (return undefined)
else
(
map = openBitMap f
--get the width and height of the video
heightEDT2.text = map.height as string
widthEDT2.text = map.width as string
--get how many frames the video has
vidLBL.text = (map.numFrames as string + " slices loaded.")
loadVideoBTN.text = getfilenamefile f
imageLBL.text = ("Full Image Yeild: " + (map.height*map.width) as string + " voxels")
slicesEDT2.text = map.numFrames as string
threshEDT.text = "90"
)
updateImgProps()
)
We now have the height in pixels, the width in pixels, and the number of slices. This is enough data to begin a simple reconstruction.
We will do so by visualizing the data with cubes, one cube per pixel that we want to display. However, be careful: a simple 256×256 video is already potentially 65,536 cubes per slice! In the tool, you can see that I put in the original image values, but allow the user to crop out a specific area.
Below we go through each slice, then go row by row, looking pixel by pixel for ones that have a gray value above a threshold (what we want to see); when we find them, we make a box in 3d space:
height = 0.0
updateImgProps()
--this loop iterates through all slices (frames of video)
for frame = (slicesEDT1.text as integer) to (slicesEDT2.text as integer) do
(
--seek to the frame of video that corresponds to the current slice
map.frame = frame
--loop that traverses y, which corresponds to the image height
for y = mapHeight1 to mapHeight2 do
(
voxels = #()
currentSlicePROG.value = (100.0 * y / totalHeight)
--read a line of pixels
pixels = getPixels map [0,y-1] totalWidth
--loop that traverses x, the line of pixels across the width
for x = 1 to totalWidth do
(
if (greyscale pixels[x]) < threshold then
(
--if you are not a color we want to store: do nothing
)
--if you are a color we want, we will make a cube with your color in 3d space
else
(
b = box width:1 length:1 height:1 name:(uniqueName "voxel_")
b.pos = [x,-y,height]
b.wirecolor = color (greyscale pixels[x]) (greyscale pixels[x]) (greyscale pixels[x])
append voxels b
)
)
--garbage collection is important on large datasets
gc()
)
--increment the height to bump your cubes to the next slice
height+=1
progLBL.text = ("Slice " + (height as integer) as string + "/" + (totalSlices as integer) as string + " completed")
slicePROG.value = (100.0 * (height/totalSlices))
)
Things really start to choke when you are using cubes, mainly because you are generating so many entities in the world. I added the option to merge all the cubes row by row, which sped things up, and helped memory, but this was still not really the visual fidelity I was hoping for…
Point Clouds and ‘MetaBalls’
I primarily wanted to generate meshes from the data, so the next thing I tried was making a point cloud, then using that to generate a ‘BlobMesh’ (metaball) compound geometry type. In the example above, you see the head of my humerus and the tissue connected to it. Below is the code; it is almost simpler than the boxes, it just takes some finessing of Edit Poly, and I have only commented the changes:
I make a plane and then delete all the verts to give me a ‘clean canvas’ of sorts; if anyone knows a better way of doing this, let me know:
This can get really time and resource intensive. As a result, I would let some of these go overnight. This was pretty frustrating, because it slowed the iteration time down a lot. And the blobMesh modifier was very slow as well.
Faking Volume with Transparent Planes
I was talking to Marco at work (a Technical Director) and showing him some of my results, and he asked me why I didn’t just try using transparent slices. I told him I had thought about it, but I really know nothing about the material system in 3dsMax, much less its maxscript exposure. He said that was a good reason to try it, and I agreed.
I started by making one material per slice; this worked well, but then I realized that 3dsMax has a limit of 24 materials. Instead of fixing this, they have added ‘multi-materials’, which can have n sub-materials. So I adjusted my script to use sub-materials:
--here we set the number of sub-materials to the number of slices
meditMaterials[matNum].materialList.count = totalSlices
--you also have to properly set the materialIDList
for m=1 to meditMaterials[matNum].materialList.count do
(
meditMaterials[matNum].materialIDList[m] = m
)
Now we iterate through, generating the planes, assigning sub-materials to them with the correct frame of video for the corresponding slice:
p = plane name:("slice_" + frame as string) pos:[0,0,frame] width:totalWidth length:totalHeight
p.lengthsegs = 1
p.widthsegs = 1
p.material = meditMaterials[matNum][frame]
p.castShadows = off
p.receiveshadows = off
meditMaterials[matNum].materialList[frame].twoSided = on
meditMaterials[matNum].materialList[frame].selfIllumAmount = 100
meditMaterials[matNum].materialList[frame].diffuseMapEnable = on
newMap = meditMaterials[matNum].materialList[frame].diffuseMap = Bitmaptexture filename:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
newmap = meditMaterials[matNum].materialList[frame].opacityMap = Bitmaptexture fileName:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
showTextureMap p.material on
mat += 1
This was very surprising: it not only runs fast, it looks great. Of course you are generating no geometry, but it is a great way to visualize the data. The below example is a 512×512 MRI of my shoulder (arm raised) rendered in realtime. The only problem I had was an alpha-test render error when viewed directly from the bottom, but this looks to be a 3dsMax issue.
I rendered the slices cycling from bottom to top. In one MRI the arm is raised, in the other, the arm lowered. The results are surprisingly decent. You can check that video out here. [shoulder_carriage_mri_xvid.avi]
You can also layer multiple slices together; above I have isolated the muscles and soft tissue from the skin, cartilage, and bones. I did this by looking for pixels in certain luminance ranges. In the image above I am ‘slicing’ away the white layer halfway down the torso; below you can see a video of this in realtime as I search for the humerus. This is a really fun and interesting way to view it:
I can now easily load up any of the MRI data I have and view it in 3d, though I would like to be able to better create meshes from specific parts of the data, in order to isolate muscles or bones. To do this I need to allow the user to ‘pick’ a color from part of the image, and then use this to isolate just those pixels and remesh just that part. I would also like to add something that allows you to slice through the planes from any axis. That shouldn’t be difficult, just will take more time.
This is something we had been discussing over at CGTalk; we couldn’t find a way to figure out Reaction Manager links through maxscript. It just is not exposed. Reaction Manager is like Set Driven Key in Maya or a Relation Constraint in MotionBuilder. In order to sync rigging components between the packages, you need to be able to query these driven relationships.
I set about doing this by checking dependencies, and it turns out it is possible. It’s a headache, but it is possible!
The problem is that even though slave nodes have controllers with names like “Float_Reactor”, the master nodes have nothing that distinguishes them. I saw that if I got dependents on a master node (its controllers, specifically the one that drives the slave), there was something called ‘ReferenceTarget:Reaction_Master’:
So here is a fn that gets Master information from a node:
fn getAllReactionMasterRefs obj =
(
local nodeRef
local ctrlRef
for n = 1 to obj.numSubs do
(
ctrl = obj[n].controller
if (ctrl!=undefined) then
(
for item in (refs.dependents ctrl) do
(
if item as string == "ReferenceTarget:Reaction_Master" then
(
nodeRef = (refs.dependentNodes item)
ctrlRef = ctrl
)
)
getAllReactionMasterRefs obj[n]
)
)
return #(nodeRef, ctrlRef)
)
The first item is an array of the referenced node, and the second is the controller that is driving *some* aspect of that node.
You now loop through this node looking for ‘Float_Reactor‘, ‘Point3_Reactor‘, etc, and then query them as stated in the manual (‘getReactionInfluence‘, ‘getReactionFalloff‘, etc) to figure out the relationship.
Here is an example function that prints out all reaction data for a slave node:
fn getAllReactionControllers obj =
(
local list = #()
for n = 1 to obj.numSubs do
(
ctrl = obj[n].controller
if (ctrl!=undefined) then
(
--print (classof ctrl)
if (classof ctrl) == Float_Reactor \
or (classof ctrl) == Point3_Reactor \
or (classof ctrl) == Position_Reactor \
or (classof ctrl) == Rotation_Reactor \
or (classof ctrl) == Scale_Reactor then
(
reactorDumper obj[n].controller data
)
)
getAllReactionControllers obj[n]
)
)
Here is the output from ‘getAllReactionControllers $Box2‘:
Conclusion
So, once again, no free lunch here. You can loop through the scene looking for Masters, then derive the slave nodes, then dump their info. It shouldn’t be too difficult as you can only have one Master, but if you have multiple reaction controllers in each node affecting the other, it could be a mess. I threw this together in a few minutes just to see if it was possible, not to hand out a polished, working implementation.
Over the past few years I have noticed that Photoshop often, usually after it has been left idling for a few hours or days, no longer imports the Windows clipboard.
Here is a fix if you don’t mind getting your hands dirty in the registry:
The above is for Photoshop CS2; depending on your version you will have to look in a different registry location. There is also a problem when you hit a ‘size limit’ for an incoming clipboard image and Photoshop dumps it. This can also be circumvented by editing the registry:
This is a simple proof-of-concept showing how to implement a perforce animation browser via python for MotionBuilder. Clicking an FBX animation syncs it and loads it.
The script can be found here: [p4ui.py], it requires the [wx] and [p4] libraries.
Clicking directories goes down into them; clicking FBX files syncs them and loads them in MotionBuilder. This is just a test; the ‘[..]’ doesn’t even go up directories. Opening an animation does not check it out. There is good documentation for the p4 python lib, so you can start there; it’s pretty straightforward and easy, and sure beats screen-scraping p4 terminal output.
You will see the following; you should replace it with the p4 location of your animations, as this will act as the starting directory.
path1 = 'PUT YOUR PERFORCE ANIMATION PATH HERE (EXAMPLE: //DEPOT/ANIMATION)'
info = p4i.run("info")
print info[0]['clientRoot']
That should about do it; there are plenty of P4 tutorials out there, and my code is pretty straightforward. The only problem I had was where I instanced it: be sure to instance it with something other than ‘p4’. I did that and it did not work; using ‘p4i’ it worked without incident:
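For reference, here is a minimal sketch of that setup, assuming the standard P4Python library and a made-up depot path (the real browser does this inside the wx UI in [p4ui.py]):

from P4 import P4

p4i = P4()    #instance it as 'p4i', not 'p4'
p4i.connect()

#where your local workspace lives
info = p4i.run("info")
print info[0]['clientRoot']

#list what is under the animation path, then sync one file
print p4i.run("dirs", "//depot/animation/*")
p4i.run("sync", "//depot/animation/walk.fbx")

p4i.disconnect()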
This is a tip that a coworker (Tetsuji) showed me a year or so ago. I was pretty damn sure my ATI drivers were bluescreening my system, but I wanted to hunt down proof. So: you have just had a bluescreen and your PC rebooted. Here’s how to hunt down what happened.
First thing you should see when you log back in is this:
It’s really important that you not do anything right now; especially don’t click one of those buttons. Click the ‘click here’ text and then you will see this window.
Ok, so this doesn’t tell us much at all. We want to get the ‘technical information’, so click the link for that and you will see something like this:
Here is why we did not click those buttons before: when you click them, these files get deleted. So copy this path and go to this folder, copy the contents elsewhere, and close all those windows. You now have these three files:
The ‘dmp’ file (dump file) will tell us what bluescreened the machine, but we need some tools to read it. Head on over to the Microsoft site and download ‘Debugging Tools for Windows’ (x32, x64). Once installed, run ‘WinDbg’. Select File->Open Crash Dump… and point it at your DMP file. It will open; scroll down and look for something like this:
In this example the culprit was ‘pgfilter.sys’, something installed by ‘Peer Guardian’, a hacky privacy-protection tool I use at home. There is a better way to cut through a dump file: type in ‘!analyze -v’, which will generate something like this:
In this example above you see that it’s an ATI driver issue, which I fixed by replacing the card with an nvidia and tossing the ATI into our IT parts box (junkbox).
Facial motion capture stabilization is basically where you isolate the movement of the face from the movement of the head. This sounds pretty simple, but it is actually a really difficult problem. In this post I will talk about the general process and give you an example facial stabilization python script.
Disclaimer: The script I have written here is loosely adapted from a MEL script in the book Mocap for Artists, and not something proprietary to Crytek. This is a great book for people of all experience levels, and has a chapter dedicated to facial mocap. Lastly, this script is not padded out or optimized.
To follow this you will need some facial mocap data; there is some freely downloadable at www.mocap.lt. Grab the FBX file.
Stabilization markers
Get at least 3 markers on the actor that do not move when they move their face. These are called ‘stabilization markers’ (STAB markers). You will use these markers to create a coordinate space for the head, so it is important that they not move. STAB markers are commonly found on the left and right temple and on the nose bridge. Using a headband and creating virtual markers from multiple solid left/right markers works even better. Headbands move; it’s good to keep this in mind. Above you see a special head rig used on Kong to create stable markers.
It is a good idea to write some tools to help you out here. At work I have written tools to parse a performance and tell me the most stable markers at any given time; if you have this data, you can also blend between them.
Load up the facial mocap file you have downloaded, it should look something like this:
In the data we have, you can delete the root and the headband markers; 1-RTMPL, 1-LTMPL, and 1-MNOSE could all be considered STAB markers.
General Pipeline
As you can see, mocap data is just a bunch of translating points. So what we want to do is create a new coordinate system that has the motion of the head, and then use this to isolate the facial movement.
This will take some processing, and also an interactive user interface. You may have seen my tutorial on Creating Interactive MotionBuilder User Interface Tools. You should familiarize yourself with that because this will build on it. Below is the basic idea:
You create a library ‘myLib’ that you load into motionbuilder’s python environment. This is what does the heavy lifting; I say this because you don’t want to do things like send the position of every marker, every frame, to your external app via telnet. I also load pyEuclid, a great vector library, because I didn’t feel like writing my own vector class. (MBuilder has no vector class.)
Creating ‘myLib’
So we will now create our own library that sits inside MBuilder; this will essentially be a ‘toolkit’ that we communicate with from the outside. Your ‘myLib’ can be called anything, but this should be the place you store functions that do the real processing jobs; you will feed into them from the outside UI later. The first thing you will need inside the MB python environment is something to cast FBVector3d types into pyEuclid. This is fairly simple:
#casts point3 strings to pyEuclid vectors
def vec3(point3):
    return Vector3(point3[0], point3[1], point3[2])

#casts a pyEuclid vector to FBVector3d
def fbv(point3):
    return FBVector3d(point3.x, point3.y, point3.z)
Next is something that will return an FBModelList of models from an array of names, this is important later when we want to feed in model lists from our external app:
#returns an array of models when given an array of model names
#useful with external apps/telnetlib ui
def modelsFromStrings(modelNames):
    output = []
    for name in modelNames:
        output.append(FBFindModelByName(name))
    return output
Now, if you take these snippets and save them as a file called myLib.py in your MBuilder directory tree (MotionBuilder75 Ext2\bin\x64\python\lib), you can load them into the MBuilder environment. (You should also place pyEuclid here.)
It’s always good to mock up code in telnet because, unlike the python console in MBuilder, it supports copy/paste, etc.
In the image above, I get the position of a model in MBuilder; it comes back as an FBVector3d. I then import myLib and pyEuclid and use our function above to ‘cast’ the FBVector3d to a pyEuclid vector. It can now be added, subtracted, multiplied, and more: all things that are not possible with the default MBuilder python tools. Our other function ‘fbv()’ casts pyEuclid vectors back to FBVector3d, so that MBuilder can read them.
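Roughly, that session boils down to this (the marker name comes from the sample data, the offset is arbitrary, and I am assuming pyEuclid’s module is simply called euclid):

from pyfbsdk import *
from euclid import *
import myLib

m = FBFindModelByName('Subject 1-RTMPL')
v = myLib.vec3(m.Translation)       #FBVector3d -> pyEuclid Vector3
v = v + Vector3(0.0, 10.0, 0.0)     #vector math that MBuilder alone cannot do
m.Translation = myLib.fbv(v)        #pyEuclid Vector3 -> FBVector3d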
So we can now do vector math in motionbuilder! Next we will add some code to our ‘myLib’ that stabilizes the face.
Adding Stabilization-Specific Code to ‘myLib’
One thing we will need to do a lot is generate ‘virtual markers’ from the existing markers. To do this, we need a function that returns the average position of however many vectors (marker positions) it is fed.
#returns average position of an FBModelList as FBVector3d
def avgPos(models):
    mLen = len(models)
    if mLen == 1:
        return models[0].Translation
    total = vec3(models[0].Translation)
    for i in range(1, mLen):
        total += vec3(models[i].Translation)
    avgTranslation = total/mLen
    return fbv(avgTranslation)
Here is an example of avgPos() in use:
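Something along these lines, using the two temple markers from the sample data to place a virtual marker between them (‘temple_mid’ is just a made-up name):

import myLib

temples = myLib.modelsFromStrings(['Subject 1-RTMPL', 'Subject 1-LTMPL'])
mid = FBModelNull('temple_mid')
mid.Show = True
mid.Translation = myLib.avgPos(temples)   #average of the two temple positions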
Now onto the stabilization code:
#stabilizes face markers, input 4 FBModelList arrays, leaveOrig for leaving original markers
def stab(right,left,center,markers,leaveOrig):
    pMatrix = FBMatrix()
    lSystem = FBSystem()
    lScene = lSystem.Scene
    newMarkers = []

    def faceOrient():
        lScene.Evaluate()
        Rpos = vec3(avgPos(right))
        Lpos = vec3(avgPos(left))
        Cpos = vec3(avgPos(center))
        #build the coordinate system of the head
        faceAttach.GetMatrix(pMatrix)
        xVec = (Cpos - Rpos)
        xVec = xVec.normalize()
        zVec = ((Cpos - vec3(faceAttach.Translation)).normalize()).cross(xVec)
        zVec = zVec.normalize()
        yVec = xVec.cross(zVec)
        yVec = yVec.normalize()
        facePos = (Rpos + Lpos)/2
        pMatrix[0] = xVec.x
        pMatrix[1] = xVec.y
        pMatrix[2] = xVec.z
        pMatrix[4] = yVec.x
        pMatrix[5] = yVec.y
        pMatrix[6] = yVec.z
        pMatrix[8] = zVec.x
        pMatrix[9] = zVec.y
        pMatrix[10] = zVec.z
        pMatrix[12] = facePos.x
        pMatrix[13] = facePos.y
        pMatrix[14] = facePos.z
        faceAttach.SetMatrix(pMatrix,FBModelTransformationMatrix.kModelTransformation,True)
        lScene.Evaluate()

    #keys the translation and rotation of an animNodeList
    def keyTransRot(animNodeList):
        for lNode in animNodeList:
            if (lNode.Name == 'Lcl Translation'):
                lNode.KeyCandidate()
            if (lNode.Name == 'Lcl Rotation'):
                lNode.KeyCandidate()

    Rpos = vec3(avgPos(right))
    Lpos = vec3(avgPos(left))
    Cpos = vec3(avgPos(center))
    #create a null that will visualize the head coordsys, then position and orient it
    faceAttach = FBModelNull("faceAttach")
    faceAttach.Show = True
    faceAttach.Translation = fbv((Rpos + Lpos)/2)
    faceOrient()
    #create new set of stabilized nulls, non-destructive, this should be tied to 'leaveOrig' later
    for obj in markers:
        new = FBModelNull(obj.Name + '_stab')
        newTran = vec3(obj.Translation)
        new.Translation = fbv(newTran)
        new.Show = True
        new.Size = 20
        new.Parent = faceAttach
        newMarkers.append(new)
    lPlayerControl = FBPlayerControl()
    lPlayerControl.GotoStart()
    FStart = int(lPlayerControl.ZoomWindowStart.GetFrame(True))
    FStop = int(lPlayerControl.ZoomWindowStop.GetFrame(True))
    animNodes = faceAttach.AnimationNode.Nodes
    for frame in range(FStart,FStop):
        #build proper head coordsys
        faceOrient()
        #update stabilized markers and key them
        for m in range(0,len(newMarkers)):
            markerAnimNodes = newMarkers[m].AnimationNode.Nodes
            newMarkers[m].SetVector(markers[m].Translation.Data)
            lScene.Evaluate()
            keyTransRot(markerAnimNodes)
        keyTransRot(animNodes)
        lPlayerControl.StepForward()
We feed our ‘stab’ function FBModelLists of right, left, and center stabilization markers, and it creates virtual markers from these groups. ‘markers’ is all the markers to be stabilized. ‘leaveOrig’ is an option I usually add to allow for non-destructive use; in this example I have just made the function always leave the originals (which I favor), so the option does nothing, but you could wire it up. With the original markers left, you can immediately see if there was an error in your script (the new motion should match the original).
Creating an External UI that Uses ‘myLib’
Earlier I mentioned Creating Interactive MotionBuilder User Interface Tools, where I explain how to screenscrape/use the telnet Python Remote Server to create an interactive external UI that floats as a window in MotionBuilder itself. I also use the libraries mentioned in the above article.
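For context, the mbPipe function used in the snippets below is just a thin telnet wrapper around that Remote Server. A minimal sketch, assuming the server is on its default port, might look like this (the prompt handling is simplified; the version in the article above is more robust):

import telnetlib

HOST = '127.0.0.1'
PORT = 4242    #default port of the MotionBuilder Python Remote Server

def mbPipe(command):
    #send one line of python to MotionBuilder and return whatever it prints, line by line
    tn = telnetlib.Telnet(HOST, PORT)
    tn.read_until('>>>')
    tn.write(command + '\n')
    raw = tn.read_until('>>>')
    tn.close()
    #drop the echoed command and the trailing prompt, keep the printed output
    return [line.strip('\r') for line in raw.split('\n')[1:-1]]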
The code for the facial stabilization UI I have created is here: [stab_ui.py]
I will now step through code snippets pertaining to our facial STAB tool:
def getSelection():
    selectedItems = []
    mbPipe("selectedModels = FBModelList()")
    mbPipe("FBGetSelectedModels(selectedModels,None,True)")
    for item in (mbPipe("for item in selectedModels: print item.Name")):
        selectedItems.append(item)
    return selectedItems
This returns a list of strings that are the currently selected models in MBuilder. This is the main thing that our external UI does. The person needs to interactively choose the right, left, and center markers, then all the markers that will be stabilized.
At the left here you see what the UI looks like. To add some feedback to the buttons, you can make them change to reflect that the user has selected markers. We do so by changing the button text.
Example:
def rStabClick(self,event):
    self.rStabMarkers = getSelection()
    print str(self.rStabMarkers)
    self.rStab.Label = (str(len(self.rStabMarkers)) + " Right Markers")
This also stores all the markers the user has chosen into the variable ‘rStabMarkers’. Once we have all the markers the user has chosen, we need to send them to ‘myLib’ in MBuilder so that it can run our ‘stab’ function on them. This happens when they click ‘Stabilize Markerset’, which is sketched below.
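A rough sketch of what that handler might look like (the attribute names holding the four selection lists are hypothetical; see [stab_ui.py] for the real code):

def stabClick(self,event):
    #push the four stored marker-name lists into MotionBuilder and run myLib.stab() on them
    mbPipe("import myLib")
    mbPipe("right = myLib.modelsFromStrings(" + str(self.rStabMarkers) + ")")
    mbPipe("left = myLib.modelsFromStrings(" + str(self.lStabMarkers) + ")")
    mbPipe("center = myLib.modelsFromStrings(" + str(self.cStabMarkers) + ")")
    mbPipe("markers = myLib.modelsFromStrings(" + str(self.markerList) + ")")
    mbPipe("myLib.stab(right,left,center,markers,True)")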
Above we now use ‘modelsFromStrings‘ to feed ‘myLib’ the names of selected models. When you run this on thousands of frames, it will actually hang for up to a minute or two while it does all the processing. I discuss optimizations below. Here is a video of what you should have when stabilization is complete:
Kill the keyframes on the root (faceAttach) to remove head motion
Conclusion: Debugging/Optimization
Remember: Your stabilization will only be as good as your STAB markers. It really pays off to create tools to check marker stability.
Sometimes the terminal/screen scraping runs into issues. The mbPipe function can be padded out a lot and made more robust; what is here is just an example. If you look at the external python console, you can see exactly what mbPipe is sending to MBuilder, and what it is receiving back through the terminal:
Sending>>> selectedModels = FBModelList()
Sending>>> FBGetSelectedModels(selectedModels,None,True)
Sending>>> for item in selectedModels: print item.Name
['Subject 1-RH1', 'Subject 1-RTMPL']
All of the above can be padded out and optimized. For instance, you could try to do everything without a single lPlayerControl.StepForward() or lScene.Evaluate(), but this takes a lot of MotionBuilder/programming know-how; it involves only using the keyframe data to generate your matrices, positions, etc., and never querying a model.