Stumbling Toward 'Awesomeness'

A Technical Art Blog

Sunday, April 11, 2010

MPO to JPS and PNS

I got some good feedback from the last post and updated the script to export JPEG Stereo (JPS) and PNG Stereo (PNS, really). This way you can convert your images into a single lossless image that you can pop into Photoshop to adjust HSV, levels, etc.

import mpo
mpo.makePNS('DSCF9463.MPO')
#>>Saving image: DSCF9463.PNS
#>>Save complete.

This is a super simple Python script with no error handling. Also, keep in mind that coming from most modern camera rigs you are saving a 20-40 megapixel compressed PNG here; wait until it says it is done saving, as it may take a few seconds.
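Under the hood, the conversion boils down to something like this (a minimal, hypothetical sketch using PIL; the real mpo module extracts the pair from the MPO first). Note that JPS/PNS files conventionally store the right-eye view on the left half (cross-eyed order):

from PIL import Image
 
def makeStereoPair(leftFile, rightFile, outFile):
	left = Image.open(leftFile)
	right = Image.open(rightFile)
	#canvas twice as wide as a single eye's image
	pair = Image.new('RGB', (left.size[0] * 2, left.size[1]))
	pair.paste(right, (0, 0))           #right eye on the left half
	pair.paste(left, (left.size[0], 0)) #left eye on the right half
	pair.save(outFile, 'PNG')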

posted by admin at 10:31 PM  

Sunday, April 11, 2010

Splitting MPO Files with ExifTool and Python

Many stereo cameras are using the new MPO format to store multiple images in a single file. Unfortunately, almost nothing works with these files (other than Stereo Photo Maker). Here is a simple Python wrapper around ExifTool that will extract the right and left images and return the EXIF data as a dict. I think this is probably easier than explaining how to use ExifTool, and you can see how it works by looking at the simple wrapper code.

import mpo
 
#Name of MPO file, name of output, whether or not you want all EXIF in a txt log
mpo.extractImagePair('DSCF9463.MPO', 'DSCF9463', True)
#>>Created DSCF9463_R.jpg
#>>Created DSCF9463_L.jpg
#>>Writing EXIF data

The above leaves you with two images and a text file that has all the EXIF data, even attributes that xnView and other apps do not read:

exif = mpo.getExif('DSCF9463.MPO')
print exif["Convergence Angle"]
#>>0
print exif["Field Of View"]
#>>53.7 deg
print exif["Focal Length"]
#>>6.3 mm (35 mm equivalent: 35.6 mm)
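The wrapper is just a thin layer over the ExifTool command line; a getExif-like function can be sketched in a few lines (a minimal, hypothetical version, assuming exiftool is on your PATH):

import subprocess
 
def getExif(mpoFile):
	#run exiftool and parse its 'Tag Name : value' lines into a dict
	out = subprocess.Popen(['exiftool', mpoFile], stdout=subprocess.PIPE).communicate()[0]
	exif = {}
	for line in out.splitlines():
		if ':' in line:
			tag, value = line.split(':', 1)
			exif[tag.strip()] = value.strip()
	return exif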
posted by admin at 2:58 AM  

Wednesday, April 7, 2010

PyQt4 UIC Module Example

I have really been amazing myself with how much knowledge I have forgotten in the past five or six months… Most of the work I did in the past year utilized the uic module to load UI files directly, but I can find very little information about this online. I was surprised to see that even the trusty old Rapid GUI Programming with Python and Qt book doesn’t cover loading UI files with the uic module.

So, here is a tiny script with a UI file [download] that will generate a PyQt example window that does ‘stuff’:

import sys
from PyQt4 import QtGui, QtCore, uic
 
class TestApp(QtGui.QMainWindow):
	def __init__(self):
		QtGui.QMainWindow.__init__(self)
 
		#load the Designer .ui file at runtime (no pyuic compile step needed)
		self.ui = uic.loadUi('X:/projects/2010/python/pyqt_tutorial/pyqt_tutorial.ui')
		self.ui.show()
 
		#wire the widget signals to the module-level handlers below
		self.connect(self.ui.doubleSpinBox, QtCore.SIGNAL("valueChanged(double)"), spinFn)
		self.connect(self.ui.comboBox, QtCore.SIGNAL("currentIndexChanged(QString)"), comboFn)
		self.connect(self.ui.pushButton, QtCore.SIGNAL("clicked()"), buttonFn)
 
def spinFn(value):
	win.ui.doubleSpinBoxLabel.setText('doubleSpinBox is set to ' + str(value))
def buttonFn():
	win.ui.setWindowTitle(win.ui.lineEdit.text())
def comboFn(value):
	win.ui.comboBoxLabel.setText(str(value) + ' is selected')
 
if __name__ == "__main__":
	app = QtGui.QApplication(sys.argv)
	win = TestApp()
	sys.exit(app.exec_())
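A side note: PyQt4 (4.5 and later) also supports the new-style signal syntax, so the three connect() calls above could equally be written as:

		self.ui.doubleSpinBox.valueChanged.connect(spinFn)
		self.ui.comboBox.currentIndexChanged[str].connect(comboFn)
		self.ui.pushButton.clicked.connect(buttonFn)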

Change the path to reflect where you have saved the UI file, and when you run the script you should get this:

EDIT: A few people have asked me to update this for other situations:

PySide Inside Maya:

import sys
from PySide.QtUiTools import *
from PySide.QtCore import *
from PySide.QtGui import *
 
class TestApp(QMainWindow):
	def __init__(self):
		QMainWindow.__init__(self)
 
		loader = QUiLoader()
		self.ui = loader.load('c:/pyqt_tutorial.ui')
		self.ui.show()
 
		self.connect(self.ui.doubleSpinBox, SIGNAL("valueChanged(double)"), spinFn)
		self.connect(self.ui.comboBox, SIGNAL("currentIndexChanged(QString)"), comboFn)
		self.connect(self.ui.pushButton, SIGNAL("clicked()"), buttonFn)
 
def spinFn(value):
	win.ui.doubleSpinBoxLabel.setText('doubleSpinBox is set to ' + str(value))
def buttonFn():
	win.ui.setWindowTitle(win.ui.lineEdit.text())
def comboFn(value):
	win.ui.comboBoxLabel.setText(str(value) + ' is selected')
 
win = TestApp()

PyQt4 Inside Maya:

import sys
from PyQt4 import QtGui, QtCore, uic
 
class TestApp(QtGui.QMainWindow):
	def __init__(self):
		QtGui.QMainWindow.__init__(self)
 
		self.ui = uic.loadUi('c:/pyqt_tutorial.ui')
		self.ui.show()
 
		self.connect(self.ui.doubleSpinBox, QtCore.SIGNAL("valueChanged(double)"), spinFn)
		self.connect(self.ui.comboBox, QtCore.SIGNAL("currentIndexChanged(QString)"), comboFn)
		self.connect(self.ui.pushButton, QtCore.SIGNAL("clicked()"), buttonFn)
 
def spinFn(value):
	win.ui.doubleSpinBoxLabel.setText('doubleSpinBox is set to ' + str(value))
def buttonFn():
	win.ui.setWindowTitle(win.ui.lineEdit.text())
def comboFn(value):
	win.ui.comboBoxLabel.setText(str(value) + ' is selected')
 
win = TestApp()
posted by admin at 11:54 PM  

Wednesday, April 7, 2010

RigPorn: Uncharted 2

My friends Judd and Rich gave a talk on some of the Character Tech behind Uncharted 2. Here are the slides.

posted by admin at 8:32 PM  

Wednesday, April 7, 2010

PyQt4 in wSciTE

I have gotten back into some PyQt in my spare time, just because it’s what I used on a daily basis at the last place I worked. However, I had trouble getting it to run in my text editor of choice (SciTE).

I couldn’t find a solution even with 45 minutes of googling. When trying to import PyQt4 it would give me a DLL error, but I could paste the same code into IDLE and it would execute fine. I found a solution by editing the Python preferences of SciTE. I noticed that it wasn’t running Python scripts the way IDLE was, but byte-compiling them. I edited the last line to just run the script, and voilà! It worked.

Find this line (usually the last):

command.1.*.py=python -c "import py_compile; py_compile.compile(r'$(FilePath)')"

And change it to:

command.1.*.py=python "$(FilePath)"

I don’t really know if this messes anything else up, but it does allow the PyQt4 libs to load and do their thing.

posted by admin at 8:04 PM  

Tuesday, March 30, 2010

32K Sistine Chapel CubeMap [Python How-To]

The Vatican recently put up an interactive Sistine Chapel Flash application. You can pan around the entire room and zoom in and out in great detail.

The Vatican is not very open with its art; the reason they scream ‘NO PHOTO’ when you pull a camera out in the chapel is that they sold the ability to take photos of it to a Japanese TV station (Nippon TV) for 4.2 million dollars. Because the ceiling has long been in the public domain, the only way they can sell ‘the right to photograph’ it is by screwing over us tourists who visit. If you take a photo, they have no control over that image, because they don’t own the copyright of the work.

Many of you who know me know I am a huge fan of Michelangelo’s work. This data was just too awesomely tempting, and when I saw it posted publicly online I really wanted to get my hands on the original assets.

Here is a Python script to grab all of the image tiles that the Flash app reads, and then generate the 8k faces of the cubemap. In the end you will have a 32,000-pixel cubemap.

First we copy the swatches from the website:

def getSistineCubemap(saveLoc):
	import urllib
	#define the faces of the cubemap, using their own lettering scheme
	faces = ['f','b','u','d','l','r']
	#location of the images
	url = 'http://www.vatican.va/various/cappelle/sistina_vr/Sistine-Chapel.tiles/l3_'
	#copy all the swatches to your local drive
	for face in faces:
		for x in range(1,9):
			for y in range(1,9):
				file = (face + '_' + str(y) + '_' + str(x) + '.jpg')
				urllib.urlretrieve(url + file, saveLoc + file)
				urllib.urlcleanup()
				print "saved " + file

Next we use PIL to stitch them together:

def stitchCubeMapFace(theImage, x, y, show):
	from PIL import Image
	from os import path
 
	file = theImage.split('/')[-1]
	fileSplit = file.split('_')
	im = Image.open(theImage)
	#create an 8k face from the first swatch
	im = im.resize((8000, 8000), Image.NEAREST)
	thePath = path.split(theImage)[0]
 
	xPixel = 0
	yPixel = 0
	#loop through the swatches, stitching them together
	for y_ in range(1, x+1):
		for x_ in range(1,y+1):
			if yPixel == 8000:
				yPixel = 0
			nextImage = (thePath + '/' + fileSplit[0] + '_' + str(x_) + '_' + str(y_) + '.jpg')
			print ('Merging ' + nextImage + ' @' + str(xPixel) + ',' + str(yPixel))
			loadImage = Image.open(nextImage)
			im.paste(loadImage, (xPixel, yPixel))
			yPixel += 1000
		xPixel += 1000
	saveImageFile = (thePath + '/' + fileSplit[0] + '_face.jpg')
	print ('Saving face: ' + saveImageFile)
	#save the image
	im.save(saveImageFile, 'JPEG')
	#load the image in default image viewer for checking
	if show == True:
		import webbrowser
		webbrowser.open(saveImageFile)

Here is an example of the input params:

getSistineCubemap('D:/sistineCubeMap/')
stitchCubeMapFace('D:/sistineCubeMap/r_1_1.jpg', 8, 8, True)
posted by admin at 7:42 PM  

Thursday, December 31, 2009

Avatar: Aspect Ratio Note

Size Matters.

Theaters presenting Avatar in 2D and RealD 3D show a cropped 2.35:1 version, while IMAX 3D shows the original work at 1.85:1. You might not think this matters, but the crop throws away roughly a fifth of the image height (1.85/2.35 ≈ 0.79). If you want to see it as the artists/director intended, it looks like IMAX 3D is your only option.

posted by admin at 7:41 PM  

Sunday, December 27, 2009

Update

I haven’t posted in a while; lots of changes going on. I left ILM after Avatar and have moved back to Germany, where my girlfriend is finishing medical school. I promise a good tech art post soon (my pick for tech art game of the year!). I look to be rejoining Crytek next year, working on Crysis 2.

posted by admin at 2:34 AM  

Sunday, December 27, 2009

Decode the Hype: HP DreamScreen 130 Review

FAIL.

Being digital artists, photo frames might look like an attractive way to showcase art and content, and these devices are being pushed more and more. I got HP’s ‘flagship’ model as a present; it retails for $300! I was so excited, but not for long. Unable to find much info online, especially reviews, I thought I would post some here.

Let’s first get some things out of the way: before I talk about the quality of what the device DOES do, let’s talk about what it does not do, yet claims to.

Downright Lies

The following quotes are from the HP site itself:

The HP DreamScreen is a gateway to the Internet using your wireless network to access
weather info, Snapfish and your favorite web destinations.

This is just untrue. There is no integrated web browser. It has three web ‘apps’ on it: Snapfish, Pandora, and Facebook. That’s it. It does not read RSS feeds or do much of anything you probably want it to do, like displaying news or recipes.

Stay current with social network sites like Facebook

‘Like’ Facebook? There is only Facebook: nothing else.

Be organized with a built-in alarm clock & calendar.

This is laughable. Wondering how to sync the calendar with Outlook or Google, or maybe even just add appointments, I finally consulted their online documentation. Here, seriously, is the entire feature list for the calendar ‘app’:

View the current month, press right or left to view the next or previous month.

BWAH HA HA HA… *sigh*

Easy wireless access to your digital entertainment

It shows an icon for video, but it doesn’t actually stream video; it plays some videos, only at specific resolutions and in specific codecs, off physical media.

Touch-enabled controls—Get fast, easy access to information and entertainment with simple touch controls embedded in the display

This refers to some buttons around the bezel of the screen and is so untrue they would have to change the marketing campaign in Europe or get sued. It does, however, remind me of the old In Living Color sketch where the handicapped superhero always says he is ‘not handicapped, but HANDY-CAPABLE!’

Videos—Watch home movies and video clips in full screen – Its simple!

It’s as simple as taking your video, recompressing it to a supported video codec, resizing it to a specific resolution, and then physically transferring it to the device. So simple grandma could do it! (with Gordian Knot, VirtualDub, CCCP, and all those other video tools she has)

Decode the Hype: The Screen

Resolution

The thing is a frickin’ 300 dollar photo frame, but its resolution is 800×480; this equates to 0.38 megapixels. At the time the frame came out, the average cheap point-and-shoot was 9 to 10 megapixels: well over twenty times the resolution of the screen!

Because of this, it can take 10 full seconds to load a photo and downsample it to 800 pixels from its original resolution. This makes browsing photos a pain, and loading photos from your camera cards nearly useless. Power users will use Photoshop or xnView to batch all their content to 800 pixels.

There is aliasing galore, as 800 pixels is the resolution of many phones and handheld devices, not 13″ photo frames!

UPDATE: I have talked to HP and done some hunting, uncovering something that is just ridiculous: the DreamScreen 130 has a 1280×800 panel. However, HP’s software only works at 800×480, the resolution of the cheapest model (the 100). To get around this, they upscale to 1280 pixels. This means they downsample your image to 800 pixels, then upscale it with a software upscaler, so your pictures will ALWAYS LOOK JAGGY AND SOFT, NO MATTER WHAT YOU DO. This is a joke; HP should be ashamed of themselves.

Notice the terrible artifacts in the 1280 image, which was downscaled by the frame software, then upscaled to fit the panel.

Color Reproduction

It is a cheap TN panel; the gamma of your images fluctuates widely depending on the angle from which they are viewed. I would be OK with a low resolution if they had used a nice IPS, S-IPS, or OLED panel, but this is just unremarkable. The black point is a dark shade of grey; in all seriousness, the panel quality seems on par with the panels they use in the dashboard of a Prius, or other industrial UI readouts.

Pretty terrible banding

Pretty bad black point

Pretty bad white point

Pretty mediocre contrast

Decode the Hype: Misc Tech Tidbits

Streaming / Network

Streaming requires lots of Microsoft Windows Media software and services running on an always-on PC server in your house; they relied on this instead of doing the footwork themselves. If you were under the impression from their marketing that it could read files off Samba shares or work with a Macintosh, you would be wrong.

Software / User Interface

The software is pretty terrible. It is very clunky and unresponsive. Many times it does not recognize that physical media has been inserted and must be rebooted. The UI graphics themselves show terrible compression artifacts.

When you bring up the on-screen keyboard to type in, say, the name of the device, it clearly shows buttons like [HTTP://], [www.], and [.com] to make it easier to browse the web; however, there is no web browser! There are other places in the print ads and the UI itself that refer to features the device just does not have!

“Touch Screen”

The device claims to have a ‘touch sensitive screen’, and IT DOES! A small area around the bezel of the screen has buttons that can be pressed/touched! This product is in NO WAY a touch screen device and has no touch sensitivity other than the buttons on the bezel; the marketing is a lie.

Open Source?

On the CD that ships with it, they have a ton of readme files showing they used a lot of GPL’d code; however, the source installer did not work on my Windows 7 x64 machine.

Conclusion

Pros:

  • They used Linux and GPL’d code, so they will have to release theirs soon; hopefully it will be taken under the wing of the open source community and all these issues can be fixed by hard-working college students and kids in their spare time.
  • The packaging/box is very high quality with a great look and feel

Cons:

  • The screen is low res and low quality
  • The device is way overpriced for the quality of its screen and software
  • The docs and UI refer to features that just do not exist
  • No battery, it must always remain plugged into the wall
  • Super-glossy; all you may be seeing is the reflection of your windows!
  • Software-wise, the average cellphone is vastly superior in extensibility and quality (browsing photos, playing mp3s, videos…)
  • The UI looks like a rip-off of cell phone UIs, but only in pictures… there are no smooth animated transitions, nothing in common with the user interfaces they seemed to want to copy. To an experienced person, the UI feels like something HP outsourced to Asia, sending along a poor art bible of the end product they were expecting…
  • The device seems unfinished
posted by admin at 2:33 AM  

Monday, August 17, 2009

See 25 Minutes of Avatar this Friday, Free!

http://www.avatarmovie.com/

posted by admin at 7:02 PM  

Wednesday, July 8, 2009

Buggy Camera Issues In Maya on x64

Many, many people are having weird, buggy camera issues where you rotate a view and it snaps back to the pre-tumbled state (the view does not update properly). There are posts all over, and Autodesk’s official response is “consumer gaming video cards are not supported”. Really? That’s basically saying all consumer video cards, gaming or not, are unsupported. I have had this issue on my laptop, which is surely not a ‘gaming’ machine. Autodesk says the ‘fix’ is to upgrade to an expensive pro-level video card. But what they maybe would tell you if they weren’t partnered with nVidia is: it’s an easy fix!

Find your Maya ENV file:

C:\Documents and Settings\Administrator\My Documents\maya\2009-x64\Maya.env

And add this environment variable to it:

MAYA_GEFORCE_SKIP_OVERLAY=1
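To confirm it took effect, you can query the variable from a running Maya session (a quick sketch, run from the Script Editor):

import maya.mel as mel
#prints 1 once the Maya.env setting has been picked up at startup
print mel.eval('getenv "MAYA_GEFORCE_SKIP_OVERLAY"')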

Autodesk buried this information in their Maya 2009 Late Breaking Release Notes, and it fixes the issue completely! However, even on their official forum, Autodesk employees and moderators reply to these draw errors as follows:

Maya 2009 was tested with a finite number of graphics cards from ATI and Nvidia, with drivers from each vendor that provided the best performance, with the least amount of issues (at the time of product launch). A list of officially qualified hardware can be found here: http://www.autodesk.com/maya-hardware. Maya is not qualified/supported on consumer gaming cards. Geforce card users can expect to have issues. This is clearly stated in the official qualification charts mentioned above.

posted by admin at 10:43 AM  

Tuesday, June 30, 2009

Critical Analysis

One of the Year’s Worst Films

Transformers 2 was rated by critics at around 18%, as shown on RottenTomatoes.com. This is possibly one of the lowest ratings for a hugely expensive summer blockbuster that I can remember. It makes the movie less well reviewed than Species III, Rambo IV, or even Rush Hour III.

But it has now had the second largest opening weekend of all time, raking in over 200 million dollars domestically and 390 million worldwide in its first 5 days. This is within 1% of the current reigning champion, The Dark Knight. Paramount’s national exit polling revealed that more than 90% of those surveyed said the new movie was as good as or better than the first film. About 67% of moviegoers polled said the film was “excellent,” an even better score than that generated by Paramount’s “Star Trek,” one of the year’s best-reviewed movies.

The critics unanimously told their readers this film was trash, and word of mouth brought the film to within one percent of The Dark Knight. Hell, Transformers 2 was shown on fewer screens and even grossed more dollars per screen than The Dark Knight.

So how did a movie that so many flocked to see, nearly toppling the current reigning all-time champ, get reviewed so viciously?

As reviews started to roll in, I saw an interesting thing happen. Some reviews were posted before people had seen the film, trashing Michael Bay and not really referencing anything from the film itself. (These were not logged as ‘top critics’ on the site.) But it initiated a torrent of others jumping on the hatewagon, beating their chests and scampering in competition to come up with better, more scurrilous, insulting, and defamatory witticisms trashing the director and his film. It became what I termed a giant ‘snoodBall’: each critic seemed to feel that in order to stand out above the rest, he had to give an even worse, more scathing review. This led to professional critics actually printing things I just find ridiculous:

“I hated every one of the 149 minutes. This is so bad it’s immoral. Michael Bay is a time-sucking vampire who will feast off your lost time.”
– Victoria Alexander

“Michael Bay has once again transformed garbage into something resembling a film..”
– Jeffrey M. Anderson

“Transformers: The Revenge of The Fallen is beyond bad, it carves out its own category of godawfulness.”
– Peter Travers (Rolling Stone)

Who can say they actively *hated* every minute of a movie? I was so surprised. I had seen an advance screening of the film here at ILM, and I knew it was no Citizen Kane, but it surely isn’t an 18%! It seems the reviewers are disconnected from the public they serve. Apparently there comes a certain time when you simply cannot write a decent review for a movie that all your peers said was garbage; at that point you are just adding to this gigantic hate machine and not really reviewing anything.

If the film had been reviewed even a little more realistically (I mean, come on, Terminator IV even has a 33%!), it would easily have had the 1% more needed to topple The Dark Knight, possibly becoming the worst-reviewed #1 box office hit of all time.

posted by Chris at 5:00 AM  

Wednesday, March 4, 2009

Common Character Issues: Attachments

I love this picture. It illustrates a few large problems with video games, one of which I have wanted to talk about for a while: attachments, of course. I am talking about the sword (yes, there is a sword; follow her arm to the left side of the image).

Attaching props to a character that has to be seen dynamically from every angle through thousands of animations can be difficult. So difficult that people often give up. This was a promotional image for an upcoming Soul Calibur title, which goes to show how difficult the issue is. Or maybe no one even noticed she was holding a sword. So let’s look at a promotional image from another game:

Why does it happen?

Well, props are often interchangeable: many different props are supposed to attach to the same location throughout the game. This is generally done by marking up the prop and the skeleton with two attachment points that snap to one another.
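To make that concrete, here is a minimal sketch of snap-to-marker logic in Maya Python (names are hypothetical; it assumes the prop and the skeleton each contain a marker transform):

import maya.cmds as cmds
 
def attachProp(propRoot, propMarker, skelMarker):
	#align a temp null to the prop's marker, and carry the prop with it
	tmp = cmds.group(empty=True)
	cmds.delete(cmds.parentConstraint(propMarker, tmp))
	cmds.parent(propRoot, tmp)
	#snap the null (and the prop with it) so the two markers coincide
	cmds.delete(cmds.parentConstraint(skelMarker, tmp))
	cmds.parent(propRoot, world=True)
	cmds.delete(tmp)
	#keep it glued to the skeleton through animation
	cmds.parentConstraint(skelMarker, propRoot, maintainOffset=True)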

In this case you often have one guy modeling the prop, one guy placing the skeleton, and one guy creating the animation. All these people have to work together.

How can we avoid these problems?

This problem is most noticeable at the end of the line: you would really only see it in the game. But this is one of the few times you will hear me say that checking it ‘in the engine’ is a bad idea. It’s hard enough to get animators to check their animation, much less test all the props in a ‘prop test level’ of sorts.

I feel problems like this mainly show up in magazines and final games because you are leaving it up to QA and other people who don’t know what to look for. There was a saying I developed at Crytek when trying to impart some things to new tech art hires: “Does QA know what your alien should deform like? And should they?” The answer is no, and it also goes for the things above. Who knows how Robotnik grips his bow? You do, the guy rigging the character.

So in this case I am all for systems that allow animators to instantly load any common weapons and props from the game directly onto the character in the DCC app. You need a system that allows animators to attach any commonly used prop at any time during any animation (especially movement anims).

Order of operations

Generally I would say:

  1. The animator picks a pivot point on the character. They will be animating/pivoting around this.
  2. The tech artist ‘marks up’ the skeleton with the appropriate offset transform.
  3. The modeler ‘marks up’ his prop and tests it (iteratively) on one character.
  4. The tech artist adds the marked-up prop (or a low-res version) to a special file that is streamlined for automagically merging in items, then adds a UI element that lets the animator select the prop from a drop-down and see it imported and attached to the character (see the sketch after this list).
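A rough sketch of that last step, building on the attachProp() helper above (the prop library path and marker naming are hypothetical):

import os
import maya.cmds as cmds
 
PROP_DIR = 'X:/project/props' #hypothetical library of marked-up prop files
 
def loadProp(propName, skelMarker='weapon_attach_R'):
	#merge the prop into the scene, namespaced to avoid name clashes
	cmds.file(os.path.join(PROP_DIR, propName + '.ma'), i=True, namespace=propName)
	attachProp(propName + ':root', propName + ':attach_point', skelMarker)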

Complications

I can remember many heated discussions about problems like this. The more people really care about the final product, and the more detailed or realistic games and characters get, the more things like this will be scrutinized.

This is more of a simple problem that just takes care and diligence, whereas things like multiple hand positions and hand poses are a little more difficult, as are attachments that attach via a physics constraint in the engine. There are also other, much more difficult issues in this realm, like the exact positioning of AI characters for interacting with each other and the environment, which is another tough ‘snap me into the right place’ problem dealing with marking up a character and an item in the world to interact with.

posted by Chris at 11:25 AM  

Monday, March 2, 2009

Make a 3D Game for the Right Reasons! (My SF4 post)

I ran out and got Street Fighter 4 (SF4) just like everyone else. Street Fighter was ‘the game’ you had to beat all the kids in the neighborhood at for an entire generation (sadly replaced by PES), and I have very fond memories of playing it.

SF4 is the first 3D game in the series created by Capcom itself; in the past, Street Fighter EX was developed by Arika, a company formed by one of the creators of the original game along with many other Capcom employees. Even though porting the franchise to 3D was largely considered a complete and utter failure, they decided to give it another go, this time ‘officially’.

Strengths and Weaknesses

As artistic mediums, 2D and 3D are very different. 3D art is perspective-correct; it is clean, sterile, and perfect. It is much simpler to do rotations and transformations of rigid objects in 3D, which is why Disney started doing vehicles as cel-shaded 3D objects in their later films. However, it is very difficult to add character to 3D geometry. As an example, think of Cruella de Vil’s car from 101 Dalmatians: it has character (when it’s not overly rotoscoped from a real-life model).

2D lends itself to organic shapes, which can be gestural and are ‘rendered’ by a human being, so they’re never perfect. 3D is great for vehicles and spaceships, anything that generally lacks character. 3D is also the only way you are going to get a photoreal gaming experience. For instance, when we were making Crysis, we knew this was the only way; there was never a question of which medium to use.

When I go on my ‘2D/3D rant’, I usually hearken back to something I love, so let’s take a look at the transition of an older game from 2D to 3D: the Monkey Island Series.

Many years ago developers felt that in order to compete, they had to ship games with the latest 3D technology. This is really unfortunate, and it leads to them sometimes choosing to develop an ‘ok’ 3D game over a ‘beautiful’ 2D game. I believe that in Curse of Monkey Island (the last 2D title in the series, so far), the options menu had an option to “Enable 3D acceleration”; upon clicking it, the words “We were only kidding” and other phrases pop up next to the radio button. The developers were already feeling the pressure to release a 3D game.

2D games are still profitable; just look at Xbox Live, where 2D games like Castle Crashers have been some of the top selling titles this year.

Lastly, let’s not forget that 3D games are actually cheaper, or have been, historically. However, maybe not with some current-gen titles, where garbage cans have 4k texture maps and take two weeks to sculpt in ZBrush. But animation is definitely easier than it ever was. Of course, the other side of that argument is that you can now have 6,000 animations in a game.

Street Fighter 4 Is A Three Dimensional ‘2D’ Game

Before going on, it’s important to note that in SF4, the characters still move on a 2D plane as they always have. It’s actually nearly identical to all the other games in the series as far as design.

As always, you are pitting your guy against someone else, and both of your characters are the focal point; they are the only interactive things in a game which centers around them. This is a series that has always been about character, and has always been 2D with great hand-drawn art. Remember: Capcom offered fans a 3D game and they did not want it.

So, SF4 is a game that takes place in 2D space and focuses on only two characters at any given time. This is great news; it means you can really focus on the characters, more so than in almost any other game genre.

The Constraints of a 3D Character Art Style

3D characters are driven by ‘joints’ or ‘bones’. Each joint has some 3D points rigidly ‘glued’ to it; because of this, 3D characters, especially in games, look rigid, like action figures. In my opinion SF4 characters feel like lifeless marionettes. In a 2D game, you can quickly and easily draw any form you want. The more you want to alter the ‘form’ of a 3D character, the more joints it takes, and the more complex the ‘rig’ that drives the character becomes. Also, on consoles, the number of joints you can use is limited. This is easily distinguished when comparing 2D and 3D art:

Notice how the 3D characters look lifeless? They don’t have to; it’s just more difficult. Whereas before, adding a cool facial expression meant simply drawing it by hand, now it means sculpting a 3D shape: by hand. It’s tedious and difficult. Also, notice how in 3D Chun-Li’s cloth is ‘clipping’ into her leg, and Cammy’s wrist guard is ‘clipping’ into her bicep. 3D is much more difficult to get right, because you are messing with sculptures, not drawings. You could also say the foreshortening on Chun-Li’s arm in 2D looks weird; there are trade-offs, but in a 2D pipeline it is also much easier to alter character proportions and fix things.

There are entire web pages dedicated to the weird faces of SF4 characters. It seems one of the easiest ways to make a character look ‘in pain’ was to translate the eyeballs out of the head; it looks ridiculous when compared to the hand-drawn hit reactions:

Whereas before you had one guy drawing pictures of a character in motion (maybe with someone to color them), now it takes a team to do the same job. You often have a modeler, a technical artist, and an animator, then hopefully a graphics engineer for rendering. That’s a lot of people to do something one person used to handle, and it introduces not only bureaucracy, but a complicated set of choreographed events that culminate in the final product.

This is a Capcom press image of Chun-Li, and it highlights my point exactly. It is harder and much more complicated to sculpt a form than to draw it, not to mention sculpt it over time, using complicated mathematical tools to manipulate geometry. However, it’s not an impossible task, and to think that this is ‘ok’ enough to release as a press image for an upcoming AAA game is crazy.

It’s not just deformation and posing, but animation in general. There is a lot of laziness associated with 3D animation. Let me be more precise: it is easier to ‘create’ animation because the computer interpolates between poses for you. As an animator, you have to work much harder not to fall into this ‘gap’ of letting the machine do the work for you. Playing SF4 you will see sweeps, hurricane kicks, and various other animations that just rotate the entire character as if on a pin. They also share and recycle the same animations across different characters; this was not possible in 2D.

One thing I find interesting is that, though the new game is 3D, it really has no motion blur. The 2D games had Chuck-Jones-esque motion blur drawn into the frames to add quickness and ‘snap’, but it also added an organic quality that is lacking in SF4.

EDIT: Having now logged a lot more time playing, there is indeed a weird kind of motion blur; it’s barely noticeable and looks almost hand-painted/added.

Another odd thing: I can spot mocap when I see it, and I think the technique was used on some of the background characters, like the children playing under the bridge. Their motion is so stellar that it puts the main characters to shame. That’s kind of sad. Though all the new characters introduced on the console seem to have much better animation, so maybe this is something Capcom has worked on more.

So Why Make A 3D Street Fighter?

If you aren’t going to make a game where characters can move through 3D space (no Z depth), why use a 3D art style, especially when it is harder to create expressive characters?

I will offer some reasons to ‘reboot’ the Street Fighter franchise as a 3D fighter:

  • Finally use collision detection so characters do not clip into one another as they always have
  • Use physics to blend ragdoll into hit reactions, also for hit detection and impulse generation; maybe allow a punch to actually connect with the opponent (gasp)
  • Use jiggly bones for something other than breasts/fat: things like muscles and flesh to add a sense of weight
  • Employ a cloth solver; c’mon, this is a character showcase. If NBA games can solve cloth for all on-court characters, you can surely do nice cloth for two.
  • Mark up the skeletons to allow for ‘grab points’ so that throw hand positions vary with character size and are unique
  • Attach proxies to the feet and have them interact with trash/grass on the ground in levels
  • Use IK in a meaningful way to always look at your opponent, dynamically grab him mid-animation, always keep feet planted on slightly uneven ground, or hit different-size opponents (or parameterize the anims to do these)
  • Play different animations on different body parts at different times; you are not locked into the same full-body pose on a frame like in 2D. For instance, use ‘offset animations’ blended into the main animation set to dynamically convey the health of the character, or, heck, change the facial animation to make them look more tired/hurt.
  • Shaders! In 3D you can use many complex shaders to render photorealistic or non-photorealistic images (like cartoons)
  • You can also write shaders to do things like calculate/add motion blur!

Unfortunately, Capcom did none of these. Sure, a few of the above would have been somewhat revolutionary for the franchise, but as it stands, the 3D characters add nothing to SF4; I believe they actually degrade the quality of the visuals.

EDIT: After playing more, I have noticed that they are using IK (look IK) on just the head bone; shorter characters look up when a larger character jumps in front of them.

posted by Chris at 12:15 PM  

Monday, February 2, 2009

First Transformers 2 Teaser Trailer!

It’s exciting that you can see some of our work already! Check out the teaser trailer, and be sure to click [watch in high quality].

posted by Chris at 10:11 AM  

Wednesday, December 17, 2008

Kavan et al Have Done It!

Ladislav Kavan is presenting a paper entitled ‘Automatic Linearization of Nonlinear Skinning’ at the 2009 Symposium on Interactive 3D Graphics and Games, on skinning arbitrary deformations! Run over to his site and check it out. In my opinion, this is the holy grail of sorts. Rig any way you want, with complex deformation that can only solve at one frame an hour? No problem: bake a range of motion to pose-driven, procedurally placed, animated, and weighted joints. People, Kavan included, have presented papers on systems somewhat like this in the past, but nothing this polished and final. I have talked to him about this stuff before, and it’s great to see what he’s been working on, and that it really is all I had hoped for!

This will change things.

posted by Chris at 12:16 PM  

Saturday, December 13, 2008

Quantic Dreams

This is what it looks like on the other side of the uncanny valley.

No longer working for Crytek, maybe I can comment on some industry-related things without worrying that my opinions could be misconstrued as those of my former employer.

EuroGamer visited Quantic Dream this week, the studio working on the game ‘Heavy Rain’, whose founder, de Fondaumière, arrogantly proclaimed that there was ‘no longer an uncanny valley’ and that there are ‘very, very few’ real artists in the video game industry. (A real class act, no?)

So their article starts with “We can’t tell you how Heavy Rain looks, sounds or plays…”, which I find kind of ridiculous, seeing as the studio’s only real claim to fame right now is the hype of its co-founder, who casually claims they have accomplished one of the most amazing visual feats in the history of computer graphics (in real time, no less!).

Across the world there are thousands of outstanding artists chasing this same dream, from Final Fantasy to The Polar Express and Beowulf; people have tried to cross the ‘uncanny valley’ for years, and are getting closer every day. At Christmas you will be treated to what is probably one of the closest attempts yet (Digital Domain’s work on Benjamin Button).

Not really having any videos to back up the hyperbole, they gave the EuroGamer staff a laundry list of statistics about their production.

I have yet to see anything stunning to back up the talk; 8 months after making his statement about crossing the uncanny valley, they released this video, which was just not even close, to be frank.

It looks like they aren’t using performance capture. Without markers on the face, this means they have to solve the facial animation from elsewhere, usually a seated actress who pretends to be saying lines that were said in the other, full-body capture session. There’s a reason why studios like Imageworks don’t do this: it’s hard to sync the two performances together. If they have accomplished what others have not, with much less hardware/technology, it means they have some of the best artists/animators out there, and I say hats off to them.

But with every image they do release, and every arrogant statement, they are digging the hole deeper. The sad thing is they could release one of the greatest interactive experiences yet, but their main claim is the most realistic CG humans yet seen, and if they fail at this, it will overshadow everything.

At least they know how their fellow PS3 devs over at Guerrilla must have been feeling for a few years now.

posted by Chris at 6:53 AM  

Sunday, November 30, 2008

24″+ Monitor Panels in One Easy Table (TN/PVA/IPS)

The market is flooded with cheap ‘TN’ TFT panels. TN (twisted nematic) panels are terrible when it comes to reproducing color and have a very limited viewing angle. I used to have one and if I just slouched in my chair (or sat up too straight) the black level would change drastically. These panels are much cheaper to manufacture, so vendors have been flocking to them for years.

As artists, we need at least _decent_ color, even on our home machines. Because it can sometimes be difficult to determine the actual panel used in a display, and because I care, I have compiled a list of 24″+ monitors and their panel types. I really would have liked to have seen this list last week.

Product              Panel Type  Size  HDMI  Price (USD)
SAMSUNG 2433BW       TN          24″   -     -
SAMSUNG T240HD       TN          24″   -     -
SAMSUNG 2443BWX      TN          24″   -     -
SAMSUNG 2443BW       TN          24″   -     -
SAMSUNG 2493HM       TN          24″   -     -
SAMSUNG 245BW        TN          24″   -     -
SAMSUNG T260HD       TN          26″   -     -
SAMSUNG 2693HM       TN          26″   -     -
SAMSUNG 305T         PVA         30″   NO    1100
SAMSUNG XL30         PVA / LED   30″   NO    3000
SAMSUNG SM2693HM     TN          26″   -     -
NEC LCD2690WUXi      IPS         26″   NO    1200
NEC LCD3090WQXi      IPS         30″   NO    2200
NEC S2409W           TN          24″   -     -
NEC 24WMGX3          TN          24″   -     -
DELL 2407WFP         PVA         24″   YES   -
DELL 2408WFP         PVA         24″   YES   517
DELL 3008WFP         IPS         30″   YES   2000
DELL 3007WFP-HC      IPS         30″   NO    1400
DELL 2709W           PVA         27″   YES   900
DELL S2409W          TN          24″   -     -
EIZO SX3031W         PVA         30″   NO    3000
EIZO SX2761W         PVA         27″   NO    2000
GATEWAY XHD3000      PVA         30″   YES   1000
HP W2408C            TN          24″   -     -
HP W2558HC           TN          26″   -     -
HP LP3065            IPS         30″   NO    1260
LG W2600H-PF         TN          26″   -     -
LG W3000H-Bn         IPS         30″   NO    1240
LG W2452T            TN          24″   -     -
VIEWSONIC VP2650WB   TN          26″   -     -
VIEWSONIC VA2626WM   TN          26″   -     -
VIEWSONIC VX2835WM   TN          28″   -     -
posted by Chris at 7:25 AM  

Saturday, November 29, 2008

Best Buy? I think not!

I really needed a stick of 800MHz DDR2. There’s a Best Buy somewhat close to here, so I went over. When I got there, I saw they had one stick of Kingston Value RAM; however, it was 145 DOLLARS. Thinking this was clearly a typo, I headed to the ‘Geek Squad’ guy at the register, who scanned it and told me:

‘Nope, that’s how much this kind of RAM costs, it’s really a special kind.’ (yeah, ‘value’)

I replied that it certainly was not, that it should be under 50 bucks ‘at any store’. He then laughed and told me that they match prices, but not ‘online-only stores’, to which I replied: ‘name a store, any store, and that’s the price I will use’. He said Fry’s (a popular brick-and-mortar store in Palo Alto) and we pulled up the website. The same RAM was 33 DOLLARS! Not on sale; nothing.

He called the manager, who came and said they couldn’t price match with a difference that large. I leveled with them… ‘Guys, look, it’s one stick of ‘value’ RAM. My PC is broken. I rode my bike here. Fry’s is in Palo Alto. I would pay double what it is at Fry’s; I am not trying to rip you off, but I will not, on principle, bend over and take it like this. Five times the normal retail price is ridiculous!’

The manager, seeing people behind me, started to talk down to me: ‘We aren’t ripping you off, you are trying to price match to another store’s Black Friday ad! We only price match to real, non-sale prices!’

I said, ‘Look, it’s not a sale item; your own guy brought it up. Name any store: where will you price match to?’ He thought for a minute: ‘Central Computers, on Howard…’ (they are not a chain, and would probably be more expensive). ‘Ok, pull that up!’ We pulled the site up and the RAM was 34 DOLLARS!

He turned to me quietly: ‘50 is as low as we can go.’ ‘Sold!’

I used to think Best Buy was decent: when you needed a component, if they had it, why go anywhere else? They are such a large chain that they can really discount items because they purchase in bulk. Like I said, this is what I used to think… While I have been in Germany the past 4 years, apparently things have changed.

Have any of you seen anything this bad? Charging $145 for something all other retailers have for under $35 is just wrong. It irks me that they pay these ‘geeks’ in their ‘squad’ to tell people lies from behind a knowledgeable facade.

posted by Chris at 7:50 AM  

Sunday, November 16, 2008

Change of Venue

I am now living in San Francisco! My last day at Crytek was October 31st, and leaving was pretty difficult, as it is one of the best companies I have ever worked for. I have so much respect for all the guys who helped constantly push the envelope and make Crytek the renowned world player it is today.

I started last week as a Creature TD at Industrial Light + Magic, about the only thing that could wrench me away from Frankfurt. I have always been so interested in creatures and anatomy and, from a young age, considered ILM the best of the best when it came to these. I feel very lucky to be able to join another great team of people, and not only that, but to learn so much from them on a daily basis.

I don’t know what effect this will have on the blog. I can continue to comment on games stuff, but being a large company, ILM is a lot more restrictive in what I can do (even in my spare time!) compared to Crytek. Not to mention I will be very, very busy the next few months.

posted by Chris at 9:19 AM  

Sunday, October 26, 2008

Weekend Python Snippet- IsItThere.py (Pt. 2)

So, before, we looked at just outputting a list of the files that were on device1 and not device2; now I will copy those files to a folder on the main device.

The tricky thing about this is that I want the directory structure kept intact. After looking at os.path and pywin32, I didn’t see anything like ‘mkdir’ that would make all the nested folders needed to recreate the branch a file was in. I did, however, find a function online:

def mkdir(newdir):
    #recursively create nested directories (the standard library's
    #os.makedirs does much the same thing)
    if os.path.isdir(newdir):
        pass
    elif os.path.isfile(newdir):
        raise OSError("a file with the same name as the desired " \
                      "dir, '%s', already exists." % newdir)
    else:
        head, tail = os.path.split(newdir)
        if head and not os.path.isdir(head):
            mkdir(head)
        if tail:
            os.mkdir(newdir)

To copy the files and create the directories, I altered the previous script a bit:

import win32file
 
for (path, dirs, files) in os.walk(path2):
	for file in files:
		if os.path.basename(file) not in filenames:
			newPath = os.path.abspath(os.path.join(path,file)).replace(path2,(path1 + 'isItThere//'))
			fileFull = os.path.abspath(os.path.join(path,file))
			print fileFull + " not found in " + path1 + " file cloud"
			print "Copying " + fileFull + " >>> " + newPath
			if not os.path.isdir(os.path.dirname(newPath)):
				mkdir(os.path.dirname(newPath))
			win32file.CopyFile(fileFull, newPath, 0)

The printed results should look like the output below; the files should have been copied accordingly and the directories created.

U:\photos\Crystal\Orlando - Lauras wedding\P0003270.jpg not found in D:\photos\Crystal\ file cloud
Copying U:\photos\Crystal\Orlando - Lauras wedding\P0003270.jpg >>> D:\photos\Crystal\isItThere\Orlando - Lauras wedding\P0003270.jpg
U:\photos\Crystal\Orlando - Lauras wedding\P0003271.jpg not found in D:\photos\Crystal\ file cloud
Copying U:\photos\Crystal\Orlando - Lauras wedding\P0003271.jpg >>> D:\photos\Crystal\isItThere\Orlando - Lauras wedding\P0003271.jpg
U:\photos\Crystal\Orlando - Lauras wedding\P0003272.jpg not found in D:\photos\Crystal\ file cloud

If I had time, or perhaps when I have time, I’ll add MD5 checks.
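For reference, an MD5 check would only be a few more lines (an untested sketch using the standard library's hashlib):

import hashlib
 
def md5(filePath, blockSize=65536):
	m = hashlib.md5()
	f = open(filePath, 'rb')
	block = f.read(blockSize)
	while block:
		m.update(block)
		block = f.read(blockSize)
	f.close()
	return m.hexdigest()
 
#after copying, compare hashes before trusting the copy
if md5(fileFull) != md5(newPath):
	print "Hash mismatch: " + fileFull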

posted by Chris at 2:54 PM  

Saturday, October 25, 2008

Weekend Python Snippet- IsItThere.py (Pt. 1)

I am anal-retentive about data retention. There, I said it. There are many times when I find myself with two storage devices that may or may not have duplicate files. I then want to erase one, but do I have all those files backed up?

I use two existing programs to aid me in my anal-retentivity: TeraCopy and WinMerge. TeraCopy replaces the Windows default copy with something much better (it can hash-check files when they are copied, etc.). With WinMerge, I can right-click a folder and say ‘Compare To…’, then right-click another and say ‘Compare’. This tells me any differences between the two file/folder trees.

However, here’s an example I have not yet found a good solution for:

I want to erase a camera card. I am pretty certain I copied the images off, but how can I be sure? I took those images and sorted them into folders by location or date taken.

So I wrote a small, and I am sure inefficient, Python script to help:

import os
 
filenames = []
 
path1 = 'D://photos//south america//'
path2 = 'N://DCIM//100ND300//'
 
if os.path.isdir(path1):
	if os.path.isdir(path2):
		print "creating index.."
 
for (path, dirs, files) in os.walk(path1):
	for file in files:
		filenames.append(os.path.basename(file))
 
for (path, dirs, files) in os.walk(path2):
	for file in files:
		if os.path.basename(file) not in filenames:
			print os.path.abspath(os.path.join(path,file)) + ' not found in ' + path1 + ' file cloud'

This will print something like this:

N:/DCIM/100ND300/image.NEF not found in D:/photos/south america/ file cloud

I don’t use python that often at all; please lemme know if there’s a better way to be doing this.
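One obvious tweak (a small sketch): build the index as a set instead of a list, so the ‘not in’ test is constant time rather than a scan of the whole list:

filenames = set()
for (path, dirs, files) in os.walk(path1):
	for f in files:
		filenames.add(os.path.basename(f))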

posted by Chris at 6:01 PM  

Friday, October 24, 2008

Autodesk Acquires Softimage for 35 Million

Really? Wow. I mean, this isn’t as surprising as when they bought Alias 3 years ago (182 million), but still. And 35 million? That’s the price of a single movie or a three-year videogame production these days. I thought the ‘desk bought Maya to kill it, but it’s still around… wonder how long XSI will be around now.

http://usa.autodesk.com/adsk/servlet/item?id=12022457&siteID=123112

Maybe they will merge all three teams into one highly experienced, ‘all-star’ development team to make a new 3d app to end all 3d apps.

Aren’t there laws about these kinds of monopolies? Looks like the Lightwave and C4D guys are your only alternatives…

One less booth at SIGGRAPH..

posted by Chris at 12:50 PM  

Sunday, October 19, 2008

Epic Pipeline Presentation

I saw this presentation about a year ago, talking about the pipeline Epic uses on their games. Maybe there is some interesting stuff for others here. The images are large; you can right-click -> view image to see a larger version.

45 days or more to create a single character… wow.

They don’t use PolyCruncher to generate LODs; they do this by hand. They just use it to import the mesh into Max in a usable form from Mudbox/ZBrush.

They don’t care so much about intersecting meshes when making the high-res model, as it’s just used to derive the normal map, not to RP a statue or anything.

They said they only use DeepUV for its ‘relax’ feature. They make extensive use of 3ds Max’s ‘render to texture’ as well.

Their UT07 characters are highly customizable. Individual armor parts can be added, removed, or even modded. Their UV maps are broken down into set sections that can be generated on the fly, so there are still 2×2048 maps, but the maps can be very different. This is something I have also seen in WoW and other games.

They mentioned many times how they use COLLADA heavily to go between DCC apps.

They share a lot of common components across characters.

posted by Chris at 4:44 PM  

Wednesday, September 17, 2008

Making of the Image Metrics ‘Emily’ Tech Demo

I have seen some of the other material in the SIGGRAPH Image Metrics presskit posted online [Emily Tech Demo] [‘How To’ video], but not the video that shows the making of the Emily tech demo. So here’s that as well:

At the end, there’s a quote from Peter Plantec about how Image Metrics has finally ‘crossed the uncanny valley’. But seriously, am I the only one who thinks the shading is a bit off? And besides that, what’s the point of laying a duplicate of a face directly on top of the original in a video? Shouldn’t they have shown her talking in a different setting? Maybe shown how they can remap the animation to a different face? There is no reason not to just use the original plate in this example.

posted by Chris at 4:44 PM  