Stumbling Toward 'Awesomeness'

A Technical Art Blog

Tuesday, June 6, 2017

Crysis Technologies

In 2006, the team at Crytek was hard at work trying to come up with ways to ship Crysis; we had definitely bitten off more than we could chew. While cleaning out my HD I found some videos that are now a decade old, and they're interesting to see, that's for sure!

Facial Editor

We didn't have a clue how we would animate all the lines of dialog that were required for the game. 3D Studio Max, which we used to animate at the time, had no means of animating faces, and now, a decade later, Max and Maya still have zero offering to help with facial rigging or animation. So we decided to write our own. Stephen Bender (Animation Lead) and I worked with Timur Davidenko and Michael Smith (Programmers) on this tool. Marco Corbetta wrote the 2.5D head/facial tracker. Here's a video:

 

The user would feed the system a text file, an audio file, and a webcam video of themselves. It would generate the mouth phonemes from the text/audio, and the upper two-thirds of the face from the video. The system generated this animation on the same interface the animators used to animate, so it was easily editable. It shipped with the MS speech DLL, but you could swap that for Annosoft if you licensed it. Crysis shipped with all characters having 98 blendshapes, driven by Facial Editor curves/animation using non-linear expressions. Imagine shipping a game today without having animators touch a face in a DCC app!

SequencePane

click to enlarge

PhotoBump

Many people know that Crytek released the first commercially available normal map generator, PolyBump, but rarely has anyone heard of its companion: PhotoBump. This was created by Marco Corbetta around the same time, but released only to CryEngine licensees in 2005. It was probably one of the first commercial photogrammetry apps, and definitely one of the first uses of photogrammetry in games. Much of the rocky terrain in Crysis was created with the help of PhotoBump! Marco also stamped/derived high frequency details from the diffuse, which I hadn't seen others do until sometime after.

SIGGRAPH Best Realtime Graphics 2007

Here’s the SIGGRAPH ET reel from the year we released Crysis. I still can’t believe some of this stuff, like the guy pathfinding across the bridge of constrained boards and pieces of rope! I actually cut and edited this video myself back then, rendering it all out from the engine as well!

posted by Chris at 10:25 PM  

Monday, November 21, 2016

The Eyes

ct_eyes

Windows to the Soul

In CG, the eyes are unforgiving. For thousands of generations we have had to decipher the true intent of other human beings from their eyes; because of this, we have evolved to notice even a one-millimeter shift in eye shape. If there is one part of a digital character that is among the most difficult to simulate and render, it is this. Let's talk about it.

Talking About the Eyes

anat

First, I am going to go over some eye terminology. There’s a lot out there, but I am only going to go over what is required to give meaningful feedback in dailies. :D  You don’t need to be an ophthalmologist to discuss why a character’s eyes don’t look right.

The Iris – The round, colored circle in the eye; it surrounds the pupil, but it is not the pupil.
The Pupil – The black dot, or rather the hole, in the iris.
The Sclera – The 'whites of your eyes'.
The Cornea – The transparent bulge over the iris.

Vergence – The eyes converge or turn inward to aim at the object a person is gazing at. They diverge, or turn outward when tracking something that is receding, if they diverge more than parallel, you can say the person is ‘walleyed’ (see: strabismus).

KEY TAKEAWAY – When viewing and reviewing eyes, it’s important to notice how the lids break across the iris and the shape of the negative space the lids and the iris create in the sclera. When we look at someone we are mainly seeing this shape. For more advanced readers, I picked the image above because you can really see the characteristics of the wetness meniscus, and eyelash refraction, but we’ll talk about that later.

Eye Placement

Initial eye placement is very important. Eyes that are too large or placed improperly will not rotate accurately when set in the face. There's a lot you can learn just by looking at yourself in a mirror, looking at a friend, or scouring YouTube. There's even more you can learn from reading forensic facial reconstruction textbooks!

There are a few books for forensic artists, and most have information about placing eyes for facial reconstruction. In my previous post about the jaw, I talked about forensic facial reconstruction a bit; these are my go-to books when it comes to eye (and teeth) placement:

books
Face It: A Visual Reference for Multi-ethnic Facial Modeling
Forensic Art and Illustration
Forensic Analysis of the Skull
Facial Geometry

I use a Mary Kay Travel Mirror and have bought them for all riggers/animators on my teams; at six bucks you can't go wrong.

If you were to draw a line from the upper and lower orbits of the eye socket, it would be in line with the back of the cornea. I don’t usually like to point to skeletal reference, but in this case the orbits are relatively bony parts of the face that can be seen in surface anatomy.

placement01

Here are some good anatomical images for centering the eye in the socket (click to enlarge):

tmp696744_thumb    tg-7-57c-modified    f2

And in practice, here's eye placement from Marius, the hero character in the video game Ryse. Notice that the eyeball doesn't even cover the entire ocular opening when viewed front-on in wireframe, just as with the anatomical images above.

Marius, Ryse 2011, Crytek

Abdenour Bachir, Ryse 2011, Crytek (click to enlarge)

eye_blog

Lastly, here is a GIF I made from a video tutorial called ‘How to Paint the Human Eye‘ by Cat Reyto. It shows eye placement rather well.

So this has all been about eye placement in the socket, but what about eye socket placement in the face? This is where proportions come into play. The books above, "Face It: A Visual Reference for Multi-ethnic Facial Modeling" and "Facial Geometry", are very good at discussing facial proportions and showing you how those proportions can change based on ethnicity. Here is a page from "Facial Geometry" that shows the basic facial proportions, most importantly pupil and eye placement relative to the rest of the face:

face_prop

Here's a page from the other book, which actually discusses 3D modeling. This is the chapter on eye placement; anyone interested in facial modeling should pick this up, it's a great full-color reference:

book_multiejpg

In my post about the jaw, I discussed jaw placement relative to the eyes and pupils; you can also do the inverse and check the eye placement relative to teeth that you feel happy with. Notice that there's a correlation between the molars, or the width of the upper teeth, and the pupils.

face05face01

Sometimes art direction would like 'larger' eyes. This is sometimes attempted by making the eyeball larger, but then it can feel weird when the eye is too large for the socket, and it causes issues with the rotations of the eye. For reference, take a look at people who have been handed down a specific neanderthal gene for large eyes (or Axenfeld-Rieger syndrome), like the Ukrainian model Masha Tyelna. She has large eyeballs, but also the facial physiology to accept them:

eyes_Masha_Tyelna4000000110309-masha_tyelna-fiteyes_Masha_Tyelna3

Eye Movement

eye_rot

Range of Movement

Making digital humans, often from fiction, I find that these numbers are all relative, but the book Three Dimensional Rotations of the Eye has some good information, as does this chapter of another book: Physiology of the Ocular Movements. From the default pose I usually find that each eye can rotate about 40 degrees in each direction (right and left) on the horizontal plane, and about 30 degrees in each direction (up and down) on the vertical plane.
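
As a trivial illustration of those limits, here is a small Python sketch that clamps a requested gaze rotation to the ranges above; the exact numbers are per-character assumptions you would tune, not hard anatomical constants.

```python
# Clamp an eye rotation (in degrees) to the rough limits mentioned above:
# ~40 degrees left/right on the horizontal plane, ~30 degrees up/down on the vertical.
EYE_LIMITS_DEG = {"horizontal": 40.0, "vertical": 30.0}

def clamp_eye_rotation(yaw_deg, pitch_deg, limits=EYE_LIMITS_DEG):
    clamped_yaw = max(-limits["horizontal"], min(limits["horizontal"], yaw_deg))
    clamped_pitch = max(-limits["vertical"], min(limits["vertical"], pitch_deg))
    return clamped_yaw, clamped_pitch

print(clamp_eye_rotation(55.0, -12.0))  # -> (40.0, -12.0)
```
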

Eye ‘Accommodation’ and Convergence
For the purpose of rigging, I treat the eyes as being at maximum divergence (effectively parallel) at a gaze distance of about 1.5m or 6 feet. I have not come across a definitive figure for the gaze distance that results in maximum divergence, so if you happen to have that information, let me know!

The eye changes focus to fixate on a near object; this is known as accommodation (the pupils also constrict as part of this near response). A standard young person's eye can gaze/focus on objects from infinity down to about 6.5cm from the eyes. In action, eyes cannot converge/diverge faster than about 25 degrees per second.
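
To get a feel for how quickly convergence falls off with gaze distance, here is a small sketch that computes the per-eye convergence angle for a target straight ahead; the 63mm interpupillary distance is an assumed average, not a number from this post.

```python
import math

def convergence_angle_deg(gaze_distance_m, ipd_m=0.063):
    """Per-eye rotation (degrees) toward the midline needed to fixate a point
    straight ahead at gaze_distance_m, for an assumed interpupillary distance."""
    return math.degrees(math.atan((ipd_m * 0.5) / gaze_distance_m))

for d in (0.065, 0.25, 1.5, 6.0):
    print(f"{d:>5} m -> {convergence_angle_deg(d):.2f} degrees per eye")
# At the ~6.5cm near point each eye turns in about 26 degrees; by 1.5m it is already
# under 1.5 degrees, which is why treating the eyes as parallel there is a fair rigging approximation.
```
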

“Why Does My Character Look Cross-Eyed?”

So, take a look at the MRI at the top of this page. Do you notice that the eyes are not looking forward, or parallel? That's because the eyes aren't parallel even when gazing at an infinite distance. Human eyes bow out a few degrees; the amount varies between roughly 4 and 6 degrees per eye. The angle by which an eyeball is bowed outward is called the 'kappa angle'; check the diagram below:

getimage

Let’s go over some more terminology:

Pupillary Axis – A line drawn straight out of the center of the pupil.
Visual Axis – A line from the fovea to the fixation target, or the item being gazed upon.
Angle Kappa – The angle between the pupillary and visual axes.

The Hirschberg Test

The way ophthalmologists determine if eyes are converging properly is with the Hirschberg test. It's a simple test where they ask a child to look at a teddy bear while they shine a pen light into their eye; they look at how this light source reflects off the cornea, and from that they can tell if the child has a lazy eye or an issue converging their eyes on a point.

hberg

middleeastafrjophthalmol_2015_22_3_265_159691_u6

Doing a 'CG Hirschberg' test requires a renderer with accurate reflections. You place a point light directly behind the camera and have the rig fixate/gaze directly onto the camera.
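
Here is a minimal Maya Python sketch of that setup; 'renderCam' and 'eye_gaze_target' are hypothetical node names standing in for your render camera and your rig's gaze control, not part of any real rig.

```python
# Minimal sketch of a 'CG Hirschberg' setup in Maya: a point light parented directly
# behind the render camera, with the rig's gaze control snapped onto the camera.
import maya.cmds as cmds

def setup_cg_hirschberg(render_cam="renderCam", gaze_ctrl="eye_gaze_target"):
    light_shape = cmds.pointLight()
    light_xform = cmds.listRelatives(light_shape, parent=True)[0]
    # Parent the light under the camera and push it slightly behind the lens
    # (+Z in Maya camera space is behind the camera).
    light_xform = cmds.parent(light_xform, render_cam)[0]
    cmds.xform(light_xform, translation=(0.0, 0.0, 1.0), objectSpace=True)
    # Snap the gaze control onto the camera so both eyes fixate on it, then render
    # and compare where the specular hit lands relative to the center of each pupil.
    cmds.delete(cmds.pointConstraint(render_cam, gaze_ctrl))
```
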

For more information on Angle Kappa, as well as data on the average angle across groups of people, check out Pablo Artal’s blog posts.

KEY TAKEAWAY

Even though the eyes bow out a little bit, for all intents and purposes, as you see in the Hirschberg image, the pupil still seems like it is looking at the person. So yes, the pupillary axis is off by a few degrees, but with the refraction it's not too noticeable (<1mm reflection offset from the center of the cornea). An Angle Kappa offset should be built into your rigs: you should not have the pupillary axis converging on the fixation target, as this is what causes CG characters to look cross-eyed when viewed with scrutiny.
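
Here is a small top-down Python sketch of that idea: aim each eye so its visual axis hits the fixation target, which leaves the eyeball (pupillary axis) rotated a few degrees outward. The 5-degree kappa, the 63mm interpupillary distance, and the sign convention are all assumptions for the example, not rig data.

```python
import math

def eye_yaw_with_kappa(eye_pos, target_pos, kappa_deg=5.0, outward_sign=1.0):
    """Yaw (degrees) for one eyeball so that its *visual* axis hits the target,
    leaving the pupillary axis rotated kappa_deg outward. 2D top-down sketch:
    positions are (x, z) pairs; outward_sign is +1 or -1 depending on the eye
    and on which way your rig's X axis points."""
    dx = target_pos[0] - eye_pos[0]
    dz = target_pos[1] - eye_pos[1]
    yaw_to_target = math.degrees(math.atan2(dx, dz))  # the naive 'converge the pupils' aim
    return yaw_to_target + outward_sign * kappa_deg    # toe the ball out by kappa instead

# Both eyes fixating a point 1m straight ahead, 63mm apart:
left = eye_yaw_with_kappa((+0.0315, 0.0), (0.0, 1.0), outward_sign=+1.0)
right = eye_yaw_with_kappa((-0.0315, 0.0), (0.0, 1.0), outward_sign=-1.0)
print(round(left, 2), round(right, 2))  # 3.2 -3.2: the balls sit slightly toed out, yet the gaze converges
```
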

“Why Don’t My Eyes Feel Real?”

untitled-8

Hanno Hagedorn, Crysis 2005, Crytek

A difficult issue with eyes in computer graphics is making them feel like they are really set in the face.

The eye has a very interesting soft transition where the sclera meets the lids.  Some of this is ambient occlusion and shadows from the brow and lids, and some of it is from reflection of the upper eyelashes.

In 2005, Hanno Hagedorn and I were working on Crysis, and we were having an issue with eyes that I called 'game eye': it's where the sclera has a sharp contrast with the skin. At the time, there was no obvious way to solve this in realtime.

We (I still credit Hanno) solved this in a very pragmatic way; the following is from one of our slides at GDC 2007:

overlay1

We created an ‘eye overlay’, a thin film that sat on the eye and deformed with the fleshy eye deformation. A lot of games today, including Paragon, my current project at Epic, use this technique. Many years later, a famous VFX company actually tried to patent the technique.

Here are those same meshes, six years later on Ryse: Son of Rome:

eye_meshes

Wetness Meniscus

One thing added since Crysis is the tearline or wetness meniscus. This is very important: its purpose is to kick up small spec highlights and fake the area where the lid meets the sclera. You can see this line in the photo of the female eye where I outlined the initial anatomical terms.

overlay_wetness_lashes

Eye Rendering

All of these eye parts are not easy to shade correctly. Nicolas Schulz describes how Crytek implemented a forward pass in their deferred renderer to deal with eyes in his 2014 paper The Rendering Technology of Ryse. Nicolas has two slides dedicated to eye shading:

eye_rendering

The eye shader used on Ryse is documented in depth here [docs.cryengine.com], and the modified HLSL CFX shader code is here [github]. Some important features of the shader include:

  • Cornea refraction and scattering – this is important, it simulates the refraction of the liquid in the cornea (a toy illustration of the refraction follows this list)
  • Iris color, depth, self shadowing, and SSS – Since there is no physical iris, we need to fake that there is a physical form there using a displacement map (POM)
  • Eye occlusion overlay depth bias – this is very important, it allows you to push/pull the depth of the overlay film, so that the eye doesn’t penetrate through it
  • Sclera SSS – not to be overlooked, or else the eye looks like a golfball.
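
To see why the refraction term matters, here is a toy 2D Python sketch, not the Ryse shader: it refracts a view ray at the cornea surface with Snell's law and measures how far the iris sample point shifts compared to ignoring refraction. The aqueous-humor index of 1.336 and the 2.7mm iris depth are textbook approximations I'm assuming, not values from the CryENGINE code.

```python
import math

N_AIR, N_AQUEOUS = 1.0, 1.336   # assumed indices of refraction
IRIS_DEPTH_MM = 2.7             # assumed distance from the cornea surface to the iris plane

def iris_sample_shift_mm(incidence_deg):
    """Lateral offset (mm) on the iris plane between a refracted and an unrefracted
    view ray hitting a flat cornea patch at the given incidence angle."""
    theta_i = math.radians(incidence_deg)
    theta_t = math.asin(math.sin(theta_i) * N_AIR / N_AQUEOUS)  # Snell's law
    return IRIS_DEPTH_MM * (math.tan(theta_i) - math.tan(theta_t))

for angle in (15, 30, 45, 60):
    print(f"{angle} deg -> iris sample shifts ~{iris_sample_shift_mm(angle):.2f} mm")
# At oblique angles the shift is over a millimeter, which is huge on a ~12mm iris;
# skipping the refraction is a big part of why CG irises can look like flat stickers.
```
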

eye_ball_textures

These were the images that fed into the shader features above, and below are the overlay spec and AO masks, and the spec texture for the wetness/tearline.

eye_occlusioneye_water_spec

This is an example of the iris displacement map in-game, seen with a debug render view (Xbox One, 2011).

Abdenour Bachir, Ryse, Crytek 2011

Does the eye really need a corneal bulge?

On Ryse and Crysis, we had spherical eyes, and we faked the corneal bulge with a shader and fleshy eye deformation. A corneal bulge means that you really have to have your shit together when it comes to the eye deformation: all those layers mentioned above have to deform with the bulge, and it can be a lot to manage across all the deformation contexts of the eye (blink directionals, squint directionals, etc.). On Paragon, we have non-spherical eyes, and it's very challenging to deform the eyes properly in the context of a 60Hz e-sports title with only one joint per lid. It's actually pretty much impossible.

Altogether

Below you see the eyes in Ryse: Son of Rome; they feel set in the face, and consistent with the world and hyper-real style of the game. (in-game mesh and rig, click to enlarge)

Abdenour Bachir, Ryse 2012, Crytek

Abdenour Bachir, Ryse 2012, Crytek

Scanning/Acquiring Eyes

capture

When scanning a character’s head it’s important to get their eyes fixed at a gaze distance you have recorded. I also use the scan to see how the iris breaks across the lid and infer information about how the eye will be set in the face and skull.

  • If you want to take your own eye texture reference, shoot through a ring light or attach a light co-axial with the camera lens; try to get the highlight in the pupil, as you will discard that part of the image anyway.
  • Because the eye is shiny and its characteristics change as you move around it, photogrammetry often falls flat.
  • Because the eye is a complex translucent lens, cutting spec with cross-polarization also often falls flat: light loses its polarization when bouncing around in there.

When it comes to scanning the eyes themselves, Disney Research has published an interesting paper on the High Quality Capture of Eyes.

How You Review Eyes

First off you should check eyeball depth and placement, as shown above.

  • You should be reviewing eyes with at least the focal length of a portrait lens: 80mm, or about a 25 degree FOV. When getting 'all up in there' I often use a 10 degree FOV.
  • Try to review them at the distance you will see them in your shipping product.
  • If you are embodying the person that is the fixation point of the digital human, you really need runtime look IK (for the eyes, but preferably feathered torso>head>neck>eyes). From some distance, you can tell if someone is looking at your eyes or your ear. Think about that. The slightest anim compression or issue in any joint from root to eyes can cause the gaze to be off by a few degrees, and that's all it takes.
  • If you're doing a lot of work, build a debug view into your software that draws the pupillary axis and visual axis all the way to the fixation/gaze point (a minimal sketch follows).
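
Here is a rough Python sketch of the data such a debug view could generate per eye; the drawing itself is left out because it depends entirely on your engine or DCC, and none of these names come from a real API.

```python
import math

def gaze_debug_lines(eye_pos, eye_forward, fixation_point, length=100.0):
    """Line segments a debug view could draw for one eye: the pupillary axis
    (straight out along the eyeball's forward vector) and the visual axis
    (eye center to the fixation/gaze point)."""
    norm = math.sqrt(sum(f * f for f in eye_forward))
    pupillary_end = tuple(e + f / norm * length for e, f in zip(eye_pos, eye_forward))
    return {
        "pupillary_axis": (tuple(eye_pos), pupillary_end),
        "visual_axis": (tuple(eye_pos), tuple(fixation_point)),
    }

# If the two lines overlap exactly for both eyes, the rig has no kappa offset and
# will read cross-eyed under scrutiny (see the Angle Kappa section above).
lines = gaze_debug_lines((3.15, 160.0, 8.0), (0.0, 0.0, 1.0), (0.0, 160.0, 100.0))
print(lines["pupillary_axis"])
print(lines["visual_axis"])
```
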
posted by Chris at 2:01 AM  

Tuesday, August 26, 2014

Multi-Resolution Facial Rigging

At SIGGRAPH we discussed a part of our facial pipeline that we haven't talked about before: namely, facial LODs and multi-platform facial rigging.

I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.

run_on_brian

Lowest Common Denominator

As the 'next generation' seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next-generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline that does this. Most next-gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.

When you wonder why your awesome next gen game doesn’t seem to have character models and animation like next-gen only titles, this is why.

It's very easy to increase texture resolution by having a pipeline where you author high and bake to lower targets. It's more complicated to author meshes high and publish to lower targets; we did this on Crysis 1 and 2, where high-end PC saw higher mesh resolution characters than Xbox 360. I would say it's hardest of all to make rigs, deformers, and animations for a high-spec hardware target and create a process to publish lower fidelity versions. No one wants to have different character skeletons on each hardware platform.

facial_complexity

You Deserve an Explanation

When we released the specs of our faces, people understandably were a bit taken aback. Why on earth would you need 250 blendshapes if you have 260 joints? The image above is actually a slide from our asset creation SIGGRAPH course that resonated very well with the audience.

Let’s take a look at some goals:

  1. Cut-scene fidelity in gameplay at any time- no cut-scene rigs
  2. Up to 70 characters on screen
  3. Able to run on multiple hardware specs

The only way to achieve the first two is through a very aggressive and granular level of detail (LOD) scheme. With that LOD system in place, the third item comes for free, as it did on our previous titles. However, while we had always LODed meshes and materials, we had never LODed rigs.

On a feature film, we wouldn't use joints; we would have a largely blendshape-only face.

But this doesn't LOD well; we need to be able to strip out facial complexity in the distance and on other platforms.

Facial Level of Detail

So to achieve these goals, we must aggressively LOD our character faces.

Let’s generate some new goals:

  • Improve LOD system to allow the swapping or culling of skinned meshes per-mesh, each at hand-tailored distances per-character instance
  • Not only swap meshes, but skinning algorithms, materials, cull blendshapes, etc..
  • One skeleton – all levels of detail stored in one nested hierarchy, disable/reveal joints at different LOD levels, as I mention above, no one wants multiple skeletons
  • One animation set – drives all layers of detail because it’s one hierarchy, only the enabled joints receive animation
  • All facial animations shareable between characters
  • Faces snapped onto bodies at runtime – “Cry parent constraint” of sorts snaps head, neck, spine4, clavs, and upper arms of facial rig to body, allowing dynamic LODing of face irrespective of body.

LOD_hierarchy

One Hierarchy to Rule them All

Before going into the meshes, skinning algorithms, culling, etc., it's important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints: 9 at LOD0, 3 at LOD1, and 1 at LOD2.

To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following (a minimal sketch follows the list):

  • Make sure you have three layers that can drive different facial LODs; we had something like 260/70/15 on Ryse.
  • Each layer must be driven, and able to deform that LOD alone. Meaning, when creating rig logic, you must start from the highest LOD and move down the chain. The LOD0 joints above would only be responsible for skinning the details of the face at LOD0; their gross movement comes from their parents in LOD1.
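
Here is a minimal Python sketch of the 'one nested hierarchy' idea: each joint carries the coarsest LOD layer it still participates in, detail joints are disabled as the LOD level rises, and only enabled joints get skinned and animated. The joint names and counts are illustrative, not the Ryse skeleton.

```python
from dataclasses import dataclass, field

@dataclass
class FaceJoint:
    name: str
    lod_layer: int                      # 0 = finest-detail-only joint, 2 = survives to the coarsest LOD
    children: list = field(default_factory=list)

# An LOD2 jaw joint drives gross motion; its LOD1 child refines it; LOD0 leaves add micro detail.
jaw = FaceJoint("jaw", 2, [
    FaceJoint("jaw_mid", 1, [FaceJoint("chin_skin", 0), FaceJoint("lowerLip_detail", 0)]),
])

def active_joints(root, current_lod):
    """Names of the joints that stay enabled (skinned and animated) at this LOD level."""
    result, stack = [], [root]
    while stack:
        joint = stack.pop()
        if joint.lod_layer >= current_lod:
            result.append(joint.name)
        stack.extend(joint.children)
    return result

print(active_joints(jaw, 0))  # all four joints
print(active_joints(jaw, 1))  # ['jaw', 'jaw_mid'] - the detail leaves are culled
print(active_joints(jaw, 2))  # ['jaw'] only
```
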

Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.

jaw_drop_skel

 

Why blendshapes? Aren't 260 joints enough?

The facial hierarchy and rig are consistent between all characters. The rig logic that drives those joints is changed and tweaked, the skinning is tweaked, but no two faces are identical. The blendshapes serve two main purposes:

1) Get the joint rig back onto the scan. Whatever the delta is between the joint rig and the scan data associated with that solved pose from the headcam data, bridge it. This means the fat around Nero's neck, the bags under his eyes, his eyebrow region, etc.

2) Add volume where it's lost due to joint skinning. Areas like the lips and the cheeks, and rig functions like lips-together and sticky lips, require blendshapes.

nero_corectives

Look at the image above: there just aren't enough joints in the brow to catch that micro-expression on Nero's face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.

A Look Under the Hood: Ryse Facial LODing

Thanks to the hard work of graphics engineer Jerome Charles we were able to granularly LOD our faces. The values below are from your buddy Vitallion; as a hero character he could be LODed a bit less aggressively. Many of the barbarians you fight en masse blew through all their blendshapes at 2m, not 4m. (A small sketch of distance-based selection follows the table.)

Assets / Technologies (LOD) | Distance
CPU skinning, 8 inf, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes | 0-4m
CPU skinning, 8 inf, 260 joints, 3-5k tris across multiple meshes with small face parts culled | 4-7m
GPU skinning, 4 inf, 70 joints, 2k mesh with integrated eyes | 7-10m
GPU skinning, 4 inf, <10 joints, <1k mesh | 10m+
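
As a tiny sketch of how distance bands like these could drive selection at runtime, here is a lookup using the thresholds from the table above; the bands are per-character tuning data (Vitallion's here), so the shape of the function is the point, not the numbers.

```python
# Pick a facial LOD tier from camera distance using the bands in the table above.
FACIAL_LOD_BANDS = [
    (4.0,  "LOD0: CPU skin, 8 inf, 260 joints, 230 blendshapes, tangent update"),
    (7.0,  "LOD1: CPU skin, 8 inf, 260 joints, small face parts culled"),
    (10.0, "LOD2: GPU skin, 4 inf, 70 joints, integrated eyes"),
]
FACIAL_LOD_FALLBACK = "LOD3: GPU skin, 4 inf, <10 joints, <1k mesh"

def facial_lod_for_distance(distance_m):
    for max_distance, lod in FACIAL_LOD_BANDS:
        if distance_m < max_distance:
            return lod
    return FACIAL_LOD_FALLBACK

print(facial_lod_for_distance(2.5))   # LOD0 ...
print(facial_lod_for_distance(8.0))   # LOD2 ...
```
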

 

Here’s a different table showing the face mesh parts that we culled and when:

Distance | Face parts
4m | Eyebrow meshes replaced, baked into facial texture
3m | Eyelash geometry culled
3m | Eye AO 'overlay' layer culled
4m | Eyeballs removed, replaced with baked-in eyes in the head mesh
2m | Eye 'water' meniscus culled
3m | Eye tearduct culled
3m | Teeth swapped for built-in mesh
3m | Tongue swapped for built-in mesh

Why isn’t this standard?

Because it's very difficult and very complicated, and there aren't many people out there who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after 4 months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!

But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon..  ¬.¬

I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here are some good moments where the performances really come through; these are all the in-game meshes and rigs:

DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.

posted by Chris at 4:40 AM  

Sunday, August 10, 2014

RYSE AT SIGGRAPH 2014

ryse_sigg

Crytek has won the SIGGRAPH 2014 award for ‘Best Real-Time Graphics’ with Ryse: Son of Rome, check it out in the Electronic Theater or Computer Animation Festival this week at SIGGRAPH.

We are also giving multiple talks:

I will be speaking in the asset production talk, along with Sascha Herfort and Lars Martinsson. It's also the first course we have done at Crytek where the entire course is devoted to one of our projects, and we have 50+ pages of course notes going into the ACM Digital Library.

posted by Chris at 12:54 AM  

Saturday, August 24, 2013

Ryse at the Anaheim Autodesk User Event

I have been working on Ryse for almost two years now, and it's one of the most amazing projects I have had the chance to work on. The team we have assembled is just amazing, and it's great to be in the position to show people what games can look like on next-gen hardware. Autodesk asked us to come out to Anaheim and talk about some of the pipeline work we have been doing, and it's great to finally be able to share some of this stuff.

A lot of people have been asking about the fidelity, like 'where are all those polygons?' If you look at the video, you will see that the regular Romans actually have leather ties modeled that deform with the movement of the plates, and something that might never be noticed: deforming leather straps underneath the plates, modeled and rigged, holding together every piece of Lorica Segmentata armor, and underneath that, a red tunic! Ryse is a labor of love!

We're all working pretty hard right now, but it's the kind of 'pixel fucking' that makes great art; we're really polishing, and having a blast. We hope the characters and world we have created knock your socks off in November.

posted by Chris at 11:16 PM  

Friday, January 18, 2013

Moving to ‘Physically-Based’ Shading

damo_engine

At the SIGGRAPH Autodesk User Group we spoke a lot about our character technology and our switch to Maya. One area that we haven't spoken so much about is the next-gen updates to our shading and material pipeline; however, Nicolas and I have an interview out in Making Games where we talk about that in detail publicly for the first time, so I can mention it here. One of the reasons we have really focused on character technology is that it touches so many departments and is a very difficult issue to crack; at Crytek we have a strong history of lighting and rendering.

What is ‘Physically-Based’ Shading?

The first time I ever encountered a physically-based pipeline was when working at ILM. The guys had gotten tired of having to create different light setups and materials per shot or per sequence. Moving to a more physically-based shading model meant that we would not waste so much time re-lighting and tweaking materials, but also get a more natural, better initial result, quicker. [Ben Snow's 2010 PBR SIGGRAPH Course Slides]

WHAT IS MEANT BY ‘PHYSICAL’


image credit: http://myphysicswebschool.blogspot.de/

A physically-based shading model reacts much more like a real-world light simulation. One of the biggest differences is that the amount of reflected light can never be more than the incoming amount that hit the surface; older lighting models tended to have overly bright and overly broad specular highlights. With the Lambert/Blinn-Phong model it was possible to have many situations where a material emitted more light than it received. An interesting caveat of physically-based shading is that the user no longer has control over the specular response (more under 'Difficult Transition' below). Because the way light behaves is much more realistic and natural, materials authored for this shading model work equally well in all lighting environments.

Geek Stuff: 'Energy conservation' is a term that you might often hear used in conjunction with physically-based lighting. Here's a quote from the SIGGRAPH '96 course notes that I always thought was a perfect explanation of reflected diffuse and specular energy:

“When light hits an object, the energy is reflected as one of two components; the specular component (the shiny highlight) and the diffuse (the color of the object). The relationship of these two components is what defines what kind of material the object is. These two kinds of energy make up the 100% of light reflected off an object. If 95% of it is diffuse energy, then the remaining 5% is specular energy. When the specularity increases, the diffuse component drops, and vice versa. A ping pong ball is considered to be a very diffuse object, with very little specularity and lots of diffuse, and a mirror is thought of as having a very high specularity, and almost no diffuse.”
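
That 95/5 split is really all 'energy conservation' means at this level; here it is as a trivial sketch.

```python
# Energy conservation at its simplest: the diffuse and specular components split
# the reflected light between them and can never sum to more than what came in.
def reflected_energy(incoming, diffuse_fraction):
    specular_fraction = 1.0 - diffuse_fraction     # whatever isn't diffuse is specular
    return incoming * diffuse_fraction, incoming * specular_fraction

print(reflected_energy(1.0, 0.95))  # ping pong ball: (0.95, 0.05)
print(reflected_energy(1.0, 0.05))  # mirror-like:    (0.05, 0.95)
```
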

PHYSICALLY-PLAUSIBLE

It's important to understand that everything is a hack; whether it's V-Ray or a game engine, we are just talking about different levels of hackery. Game engines often take the cake for approximations and hacks; as one of my guys once said, 'Some people just remove spec maps from their pipeline and all of a sudden they're 'physically-based''. It's not just the way our renderers simulate light that is an approximation; it's important to remember that we feed the shading model with physically plausible data as well. When you make an asset, you are making a material that is trying to mimic certain physical characteristics.

DIFFICULT TRANSITION

Once physics get involved, you can cheat much less, and in film we cheeeeeaaat. Big time. Ben Snow, the VFX Supe who ushered in the change to a physically-based pipeline at ILM, was quoted in VFXPro as saying: "The move to the new [pipeline] did spark somewhat of a holy war at ILM." I mentioned before that the artist loses control of the specular response; in general, artists don't like losing control, or adopting new ways of doing things.

WHY IT IS IMPORTANT FOR GAMES AND REAL-TIME RENDERING

Aside from the more natural lighting and rendering, in an environment where the player determines the camera, and often the lighting, it's important that materials work under all possible lighting scenarios. As the Product Manager of Cinebox, I was constantly having our renderer compared to Mental Ray, PRMan and others; the team added BRDF support and paved the way for physically-based rendering, which we hope to ship in 2013 with Ryse.

microcompare05

General Overview for Artists

At Crytek, we have always added great rendering features, but never really took a hard focus on consistency in shading and lighting. Like ILM in my example above, we often tweaked materials for the lighting environment they were to be placed in.

GENERAL RULES / MATERIAL TYPES

Before we start talking about the different maps and material properties, you should know that in a physically-based pipeline you will have two slightly different workflows, one for metals, and one for non-metals. This is more about creating materials that have physically plausible values.

Metals:

  • The specular color for metal should always be above sRGB 180
  • Metal can have colored specular highlights (for gold and copper for example)
  • Metal has a black or very dark diffuse color; because metals absorb all light that enters underneath the surface, they have no 'diffuse reflection'

Non-Metals:

  • Non-metal has monochrome/gray specular color. Never use colored specular for anything except certain metals
  • The sRGB color range for most non-metal materials is usually between 40 and 60. It should never be higher than 80/80/80
  • A good clean diffuse map is required

GLOSS

gloss_chart

At Crytek, we call the map that determines the roughness the 'gloss map'; it's actually the inverse of roughness, but we found this easier to author. This is by far one of the most important maps, as it determines the size and intensity of specular highlights, but also the contrast of the cube map reflection, as you see above. A good detail normal map can make a surface feel like it has a certain 'roughness', but you should start thinking of the gloss map as adding a 'microscale roughness'. Look above at how, as the roughness increases, so does the breadth of the specular highlight. Here is an example from our CryENGINE documentation that was written for Ryse:
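
Since 'gloss is just inverted roughness' trips people up, here is the relationship as a one-line sketch; how the value then maps to an actual specular lobe is engine-specific and intentionally not shown.

```python
# Gloss as authored here is simply inverted roughness (normalized 0..1 texture values).
def gloss_to_roughness(gloss):
    return 1.0 - gloss

print(gloss_to_roughness(0.9))  # very glossy -> low roughness, tight bright highlight
print(gloss_to_roughness(0.2))  # matte -> high roughness, broad dim highlight
```
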

click to enlarge

DIFFUSE COLOR

Your diffuse map should be a texture with no lighting information at all. Think a light with a value of ‘100’ shining directly onto a polygon with your texture. There should be no shadow or AO information in your diffuse map. As stated above, a metal should have a completely black diffuse color.

Geek Stuff: Diffuse can also be referred to as 'albedo'; the albedo is the measure of diffuse reflectivity. This term is primarily used to scare artists.

SPECULAR COLOR

As previously discussed, non-metals should only have monochrome/gray-scale specular color maps. Specular color is a real-world physical value, and your map should be basically flat color; you should use existing values and not introduce noise or variation. The spec color map is not a place to be artistic, it stores real-world values. You can find many tables online that have plausible values for specular color; here is an example (a small sRGB-to-linear conversion sketch follows the table):

Material | sRGB Color | Linear (Blend Layer)
Water | 38 38 38 | 0.02
Skin | 51 51 51 | 0.03
Hair | 65 65 65 | 0.05
Plastic / Glass (Low) | 53 53 53 | 0.03
Plastic (High) | 61 61 61 | 0.05
Glass (High) / Ruby | 79 79 79 | 0.08
Diamond | 115 115 115 | 0.17
Iron | 196 199 199 | 0.57
Copper | 250 209 194 | N/A
Gold | 255 219 145 | N/A
Aluminum | 245 245 247 | 0.91
Silver | 250 247 242 | N/A

If a non-metal material is not in the list, use a value between 45 and 65.
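
The 'Linear (Blend Layer)' column is just the sRGB value converted to linear reflectance. Here is a small sketch that reproduces those numbers so you can extend the table yourself; the formula is the standard sRGB transfer function, not anything Crytek-specific.

```python
# Convert an 8-bit sRGB specular value to linear reflectance; this reproduces the
# 'Linear (Blend Layer)' column above (water 38 -> ~0.02, skin 51 -> ~0.03, etc.).
def srgb_to_linear(value_8bit):
    c = value_8bit / 255.0
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

for name, srgb in [("Water", 38), ("Skin", 51), ("Hair", 65), ("Diamond", 115), ("Aluminum", 245)]:
    print(f"{name:<9} sRGB {srgb:>3} -> linear {srgb_to_linear(srgb):.2f}")
```
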

Geek Stuff: SPECULAR IS EVERYWHERE: In 2010, John Hable did a great post showing the specular characteristics of a cotton t-shirt and other materials that you wouldn’t usually consider having specular.

EXAMPLE ASSET:

Here you can see the maps that generate this worn, oxidized lion sculpture.

rust

click to enlarge

rust2

EXAMPLES IN AN ENVIRONMENT

640x

See above how there is no variation in the specular color maps? They are solid colors. And see how the copper items on the left have a black diffuse texture?

SETTING UP PHOTOSHOP

color_settings

In order to create assets properly, we need to set up our content creation software properly, in this case Photoshop. Go to Edit > Color Settings… and set the dialog like the above. It's important that you author textures in sRGB.

Geek Stuff: We author in sRGB because it gives us more precision in darker colors and reduces banding artifacts. The eye has 4.5 million cones that can perceive color, but 90 million rods that perceive luminance changes. Humans are much more sensitive to contrast changes than to color changes!

Taking the Leap: Tips for Leads and Directors

New technologies that require paradigm shifts in how people work, or how they think about reaching an end artistic result, can be difficult to integrate into a pipeline. At Crytek I am the Lead/Director in charge of the team that is making that initial shift to physically-based lighting; I also led the reference trip and managed the hardware requests to get key artists on calibrated wide-gamut display devices. I am just saying this to put the next items in some kind of context.

QUICK FEEDBACK AND ITERATION

It’s very important that your team be able to test their assets in multiple lighting conditions. The easiest route is to make a test level where you can cycle lighting conditions from many different game levels, or sampled lighting from multiple points in the game. The default light in this level should be broad daylight IMO, as it’s the hardest to get right.

USE EXAMPLE ASSETS

I created one of the first example assets for the physically-based pipeline. It was a glass inlay table that I had at home, which had wooden, concrete (grout), metal, and multi-colored glass inlay. This asset served as a reference asset for the art team. Try to find an asset that can properly show the guys how to use gloss maps; IMO, understanding how roughness affects your asset's surface characteristics is maybe the biggest challenge when moving to a physically-based pipeline.

TRAIN KEY PERSONNEL

As with rolling out any new feature, you should train a few technically-inclined artists to help their peers along. It’s also good to have the artists give feedback to the graphics team as they begin really cutting their teeth on the system. On Ryse, we are doing the above, but also dedicating a single technical artist to helping with environment art-related technology and profiling.

CHEAT SHEET

It's very important to have a 'cheat sheet'. This is a sheet we created on the Ryse team to allow an artist to use the color picker to sample common 'plausible' values.

SPEC_Range_new.bmp

click to enlarge

HELP PEOPLE HELP THEMSELVES

We created a debug view that highlights assets whose specular color is not in a physically-plausible range. We are very much in favor of making tools that help people be responsible, and that validate or highlight work that is not. We also allowed people to set solid specular values in the shader to limit memory consumption on simple assets.
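
Here is a hedged sketch of the kind of per-texel check such a debug view might run, using the plausible ranges from this post (roughly gray 40-80 sRGB for non-metals, bright values for metals); the real implementation lived in the renderer, and the tolerances here are assumptions.

```python
# Flag specular-color values that fall outside the plausible ranges discussed above.
# The gray tolerance and thresholds are illustrative assumptions, not the Ryse code.
def classify_spec_color(r, g, b):
    max_c, min_c = max(r, g, b), min(r, g, b)
    is_grayish = (max_c - min_c) <= 10
    if max_c >= 180:
        return "plausible metal"
    if is_grayish and 40 <= min_c and max_c <= 80:
        return "plausible non-metal"
    return "suspicious - highlight it in the debug view"

print(classify_spec_color(51, 51, 51))     # skin-like gray -> plausible non-metal
print(classify_spec_color(255, 219, 145))  # gold -> plausible metal
print(classify_spec_color(100, 100, 100))  # too bright for a non-metal, too dark for metal -> suspicious
```
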

CALIBRATION AND REFERENCE ACQUISITION

calibrate

Above are two things that I actually carry with me everywhere I go: the X-Rite ColorChecker Passport and the Pantone Huey Pro monitor calibration toolset. Both are very small and can be carried in a laptop bag. I will go into reference data acquisition in another post. On Ryse we significantly upgraded our reference acquisition pipeline and scanned a lot of objects/surfaces in the field.

 

TECHNICAL IMPROVEMENTS BASED ON PRODUCTION USE

Nicolas Schulz presented many improvements made based on production use at GDC 2014. His slides are here. He details things like the importance of specular filtering to preserve highlights as objects recede into the distance, and why we decided to couple normals and roughness.

UPDATE: We've now shipped Ryse, and I have tried to update the post a little. I was an invited speaker at HPG 2014, where I touched on this topic a bit, and can now update this post with some details and images (see 'Tips for Leads and Directors'). Nicolas also spoke at GDC 2014, and I have linked to his slides above. Though this post focuses on environments, in the end, with the amount of armor on the characters, the PBR pipeline was really showcased everywhere. Here's an image of multiple passes of Marius' final armor:

marus_breackUp

click to enlarge

posted by Chris at 7:26 PM  

Monday, July 16, 2012

CINEBOX SIGGRAPH Talk and Studio Workshops

CRYENGINE CINEBOX

I am giving a talk at SIGGRAPH 2012 entitled ‘Film/Game Convergence: What’s Taking So Long?‘ where I discuss the inherent differences between games and film and go over a few case studies of projects that attempted to use a game engine for film previs. I also talk a bit about the development of our CINEBOX application, the decisions we had to make, and how we dealt with many of the issues previous attempts have run into.

STUDIO WORKSHOPS

I will be giving two more Studio Workshops this year, the first is a followup to last year’s Introduction to Python, entitled ‘Python Scripting in Maya‘. The other workshop is ‘Building a Game Level‘, which is the same basic workshop I gave last year where I show people how to make a playable game level in CryEngine in an hour. Studio Workshops are hands-on sessions where each attendee has a computer and follows along with the instructor. It’s a great chance for people of all ages to learn new things.

posted by admin at 8:06 PM  

Tuesday, July 19, 2011

SIGGRAPH 2011

I am volunteering again in the Studio, giving three small talks at SIGGRAPH; drop me a line if you will be in Vancouver.

Rigging Characters for CryENGINE

How to rig, skin, and export a character for CryENGINE 3. Topics include physics setup, building characters from many skinned meshes, and creating Character Definitions and Character Parameter files. These rigging basics are applicable to most run-time game engines.

Introduction to Python Scripting

In this introduction to Python, a powerful scripting language used by many 3D applications, attendees learn the basics and explore small example scenarios gleaned from actual game and film productions. The sessions are taught in a way that should empower attendees to immediately begin creating time-saving python scripts and applications.

World Creation in CryENGINE

Have you ever wanted to make a videogame? This session shows how to build a small level in the freely available CryENGINE 3 SDK. Topics include: world building and tools (FlowGraph, CryENGINE’s visual scripting language, and Trackview, the camera sequencing and directing tools). In less than an hour, attendees create their own playable video games.

posted by admin at 9:32 AM  

Powered by WordPress