Face Rendering in GLSL - GPU Programming - Sep 2008
Sangyoon Lee (sjames @ evl.uic.edu)
Electronic Visualization Laboratory, University of Illinois at Chicago
I started from the face model given for this project. Since the model is a human head, I thought it would be nice to do something related to the human face. At the very beginning I envisioned something far too complex, but it converged to human face animation in the end. The project requires two vertex shaders and three fragment shaders. I wrote many tiny shaders, but as time went by they all merged into a couple of larger ones.
In this project, I implemented human face animation using one of the classic approaches: morph targets (also known as blend shapes or relative vertex animation). All animation features run entirely on the GPU except for some data pre-processing, which I will explain later.
The application was developed and tested under Mac OS X 10.5 (Intel). Source code and binary are available below.
GLSL_HEAD: glsl_head.tar.gz (to run, just type ./glsl_head)
- Keyboard control
Shader mode: '1' default texture mapped head model, '2' morphing without texture, '3' morphing with texture, '4' compositor
Mouse control: move mouse to rotate head
Mode specific control
a. '1' default mode: textured simple head rendering. only mouse control to rotate head
b. '2' morph w/o texture: 'a' BigAhh morph, 's' smile, 'f' fear, 'c' change from female to male or male to female, 'd' morph color mode
c. '3' morph w/ texture:
same as the no-texture case, except there is no color mode key
uv unwrap morph: 'u' linear interpolation, 'i' delayed morphing, 'o' delayed morphing with a sine factor applied, 'p' similar to 'o' but with a moving head
d. '4' compositor:
tile mode: 't'
arrow up - increase threshold (tile border thickness), arrow down - decrease threshold
arrow right - increase # of tiles, arrow left - decrease # of tiles
grid mode: 'y'
arrow right - increase # of grid, arrow left - decrease # of grid
reset setting: 'r' will set variables to default values
For the facial animation, I used a model set from the Singular Inversions FaceGen software (www.facegen.com). This software can export various facial expression models as obj files. I selected six models in total, including the neutral shape (female neutral, smile, phoneme 'a', fear, and a male type).
- Pre-Processing
Once the six obj models are ready, we only need the one neutral shape model and 5 morph targets (deltas of vertex position and normal). I made a small modification to Bob's obj library code (http://www.evl.uic.edu/rlk/obj/obj.html) so that I could dump all the necessary data into a flat file. This file contains nothing but vector data (triples of floats). Then I only need to load the neutral obj model and import the processed data into the main application, so that the massively parallel GPU processing units (the vertex shader) can morph the face. The following shows part of the processed data (it really is just numbers).
morph data (5 target) ver. 0.1
0.00000000 0.00000000 0.00000000
0.00000000 0.00000000 0.00000000
-0.20932007 0.24936986 -0.45115995
0.13677612 -0.53169322 0.14158303
0.00000000 0.00000000 0.00000000
0.00000000 0.00000000 0.00000000
-0.19096994 0.30954003 -0.48410988
0.11925307 -0.44979388 0.09264976
0.00000000 0.00000000 0.00000000
0.00000000 0.00000000 0.00000000
-0.19599009 0.31831002 -0.53158998
0.11557612 -0.38192818 0.15122503
- Morph data loader
Again, this is a small variation of Bob's code that embeds the morph target data into the interleaved vertex buffer object. More details follow in the next section.
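A minimal sketch of such a loader in plain C, assuming the flat-file format shown above (one header line followed by whitespace-separated floats); `load_morph_data` is a hypothetical name, not the actual function in the app, which writes straight into the VBO instead:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Read the flat morph-data file: skip the one-line header, then read
 * float values until EOF.  Returns a malloc'd array and stores the
 * number of floats read in *count, or NULL if the file cannot be opened. */
static float *load_morph_data(const char *path, int *count)
{
    FILE *fp = fopen(path, "r");
    if (!fp)
        return NULL;

    int c;
    while ((c = fgetc(fp)) != EOF && c != '\n')  /* skip header line */
        ;

    int cap = 64, n = 0;
    float *data = malloc(cap * sizeof *data);
    float v;
    while (data && fscanf(fp, "%f", &v) == 1) {
        if (n == cap) {
            cap *= 2;
            data = realloc(data, cap * sizeof *data);
        }
        data[n++] = v;
    }
    fclose(fp);
    *count = n;
    return data;
}
```

Each consecutive triple is one delta vector; an all-zero triple simply means that vertex does not move for that target.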
- VBO
A VBO is a big chunk of data that holds all the necessary vertex data for the graphics card. The neat thing about it is that it can increase GPU performance enormously by streaming everything at once. The data sits in video memory, so there is no bus bottleneck to worry about as long as the VBOs fit within video memory capacity. The obj library uses this very nicely. Here is the storage struct for the vertex data used in GPU morph animation.
struct morph_vert
{
float n[3]; // vertex normal
float t[2]; // vertex texture coordinate
float v[3]; // vertex position
float mv0[3]; // morph target 0 vertex
float mn0[3]; // morph target 0 normal
float mv1[3]; // morph target 1 vertex
float mn1[3]; // morph target 1 normal
float mv2[3]; // morph target 2 vertex
float mn2[3]; // morph target 2 normal
float mv3[3]; // morph target 3 vertex
float mn3[3]; // morph target 3 normal
float mv4[3]; // morph target 4 vertex
float mn4[3]; // morph target 4 normal
};
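Since the struct is nothing but tightly packed floats (4 bytes each, so no padding), the byte offsets passed to the glVertexAttribPointer calls in the next section follow directly from this layout: 38 floats, 152 bytes per vertex. A quick offsetof sketch confirms them:

```c
#include <assert.h>
#include <stddef.h>

/* Interleaved vertex layout for the morph VBO: per-vertex normal,
 * texcoord, and position, followed by five morph targets, each a
 * delta position (mv*) and a delta normal (mn*). */
struct morph_vert
{
    float n[3];                  /* vertex normal,   byte offset   0 */
    float t[2];                  /* texture coord,   byte offset  12 */
    float v[3];                  /* vertex position, byte offset  20 */
    float mv0[3]; float mn0[3];  /* morph target 0,  offsets 32 / 44 */
    float mv1[3]; float mn1[3];  /* morph target 1,  offsets 56 / 68 */
    float mv2[3]; float mn2[3];  /* morph target 2,  offsets 80 / 92 */
    float mv3[3]; float mn3[3];  /* morph target 3, offsets 104/116 */
    float mv4[3]; float mn4[3];  /* morph target 4, offsets 128/140 */
};
```

These are exactly the offsets (12, 20, 32, 44, ..., 128, 140) that show up in the pointer setup, with sizeof(struct morph_vert) as the stride.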
- Per Vertex Attribute
There are several ways to manipulate vertices on the GPU using a shader. A per-vertex attribute is one of the simplest solutions: it packs user-defined data alongside the common vertex data. One example we studied in class sends the attribute at the same time each vertex is sent to the GPU. The improvement in my work is to pack all of these numbers into the VBO, so they can be streamed in one shot instead of sent one by one by the CPU.
The following code snippet shows how to prepare this VBO. (OFFSET(i) is the usual VBO byte-offset macro, #define OFFSET(i) ((char *)NULL + (i)), and s is the stride, sizeof (struct morph_vert).)
// enable VBO Array
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableVertexAttribArray(vLoc[0]);
glEnableVertexAttribArray(nLoc[0]);
...
glEnableVertexAttribArray(vLoc[4]);
glEnableVertexAttribArray(nLoc[4]);
// Bind VBO
glBindBuffer(GL_ARRAY_BUFFER_ARB, gMorpher->vbo);
// assign address of each components
glNormalPointer ( GL_FLOAT, s, OFFSET(0));
glTexCoordPointer (2, GL_FLOAT, s, OFFSET(12));
glVertexPointer (3, GL_FLOAT, s, OFFSET(20));
glVertexAttribPointer(vLoc[0], 3, GL_FLOAT, 0, s, OFFSET(32));
glVertexAttribPointer(nLoc[0], 3, GL_FLOAT, 0, s, OFFSET(44));
...
glVertexAttribPointer(vLoc[4], 3, GL_FLOAT, 0, s, OFFSET(128));
glVertexAttribPointer(nLoc[4], 3, GL_FLOAT, 0, s, OFFSET(140));
Shader programming for the morph animation itself is really simple. The concept first appeared at SIGGRAPH 2003 and was also published as part of GPU Gems. The original implementation is in HLSL/NVIDIA Cg; its source code is not available, but it is straightforward enough to implement in GLSL from scratch.
Basically, the morphing algorithm accumulates the difference in vertex position from the neutral shape to each target shape, scaled by that target's weight. The normal is interpolated the same way. The following shows the part of the vertex shader that computes position and normal.
// compute normal vector
vec3 n;
n = gl_Normal + morphWeight0 * normalMorph0
              + morphWeight1 * normalMorph1
              + morphWeight2 * normalMorph2
              + morphWeight3 * normalMorph3
              + morphWeight4 * normalMorph4;
normal = normalize(gl_NormalMatrix * n);
// compute morph delta position
vec4 p;
p.xyz = morphWeight0 * coordMorph0
+ morphWeight1 * coordMorph1
+ morphWeight2 * coordMorph2
+ morphWeight3 * coordMorph3
+ morphWeight4 * coordMorph4;
// compute final position value
p.xyz += gl_Vertex.xyz; p.w = 1.0;
gl_Position = gl_ModelViewProjectionMatrix * p;
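For reference, the weighted sum can be mirrored on the CPU for a single component to sanity-check the shader math; a minimal sketch (`morph_component` is a hypothetical helper, shown per float where the shader works per vec3):

```c
#include <assert.h>

#define NUM_TARGETS 5

/* CPU mirror of the shader's morph accumulation for one component:
 * result = neutral + sum_i weight[i] * delta[i].
 * The vertex shader does exactly this for both the position deltas
 * (coordMorph*) and the normal deltas (normalMorph*). */
static float morph_component(float neutral,
                             const float delta[NUM_TARGETS],
                             const float weight[NUM_TARGETS])
{
    float r = neutral;
    for (int i = 0; i < NUM_TARGETS; i++)
        r += weight[i] * delta[i];
    return r;
}
```

With all weights at zero the neutral shape falls out unchanged, which is why the many zero triples in the morph data file cost nothing visually.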
In the fragment shader, per-pixel lighting and the material are applied.
Another interesting morph uses the texture uv coordinates to illustrate the idea of texture unwrap. The two-dimensional texture image wraps around the 3D head mesh; since I know what all the numbers mean here, the coordinate values can be remapped in the vertex shader. I tested some variations by changing the morph speed per vertex: linear interpolation, delayed start (which looks a bit like a clipping plane sweep), and delayed start with a sine speed (to make it a bit more dynamic). Here is the vertex shader code for this.
// adjusted weight value
float progress = 0.0;
if (unWrapMode == 0)
{
    // linear interpolation
    progress = unWrapWeight;
}
else
{
    // check whether this vertex should move at this moment:
    // first, find the current morphing plane in the z direction
    float cp = (modelDepth1 - modelDepth0) * unWrapWeight + modelDepth0;
    if (p.z < cp)
    {
        // once the clipping-plane line passes, morphing is linear
        float ratio = (p.z - modelDepth0) / (modelDepth1 - modelDepth0);
        progress = (unWrapWeight - ratio) / (1.0 - ratio);
        // add some variation with a sine wave form
        if (unWrapMode == 2)
            progress = sin(radians(progress * 90.0));
    }
}
// normal
n = (1.0 - progress) * n + progress * vec3(0.0,0.0,1.0);
normal = normalize(gl_NormalMatrix * n);
// position
// position: blend toward the flattened uv-plane position
vec4 p1;
p1.x = gl_TexCoord[0].s * 48.0 - 12.0;
p1.y = gl_TexCoord[0].t * 24.0 - 10.5;
p1.z = 9.265;
p1.w = 1.0; // keep w well-defined through the blend below
p1 = progress * p1 + (1.0 - progress) * p;
p = p1;
- More shaders merged at the end...
As described earlier, during implementation all the separate shaders converged into one grand piece: multi-texturing for gender switching (female to male and back), morph coloring (visualizing the amount of morphing per vertex as a color spectrum), and tiling (the compositor, a post-processing pass over the render buffer).
- Female model (this is the neutral shape, untextured and textured)
- Phoneme Big Ahh & Smile I
- Smile II and beginning of Fear
- Fear with full weight and Male Face
- Coloring of morphing
- Texture UV UnWrap Morphing I (front view)
- Texture UV UnWrap Morphing II (side view)
- Render To Texture (frame buffer post-processing, mosaic tiling effect. Reference: the OGRE compositor written in NVIDIA Cg)
- Render To Texture (frame buffer post-processing, grid-like effect)
- A trivial bug takes a long, long time...
In many cases, examples and tutorials only provide code fragments, but real applications need much more. Sometimes a really small piece causes painful debugging. One example I encountered is the difference in texture indexing between OpenGL and GLSL. I remember this was mentioned in class, but I did not pay much attention to it since I was used to OpenGL, and it took quite a while to fix the bug in my app.
- A tiny piece of code can do many things
Definitely the most amazing thing about GPU programming is that we can make something really cool with simple code. This is so attractive. Even though it requires a lot of mathematical consideration, the reward is much greater than the effort. Looking forward to CUDAing!
- OpenGL Shading Language, Second Edition, Addison-Wesley
- GPU Gems, Addison-Wesley, 2004
- GPU Gems 3, Addison-Wesley, 2008
- OpenGL SDK Document, http://www.opengl.org/sdk/docs/
- OpenGL Vertex Buffer Object, http://www.opengl.org/wiki/index.php/GL_ARB_vertex_buffer_object