

The Human Head site celebrated its first anniversary (last month) languishing, unused and aging, on my hard drive. My personal thanks to Wade Acuff and Patrick Miller for making it possible to bring it back online after its five-month hiatus. Unfortunately, I haven't had time to update the site beyond the mini-bio on the overview page and the temporary alternate navigation at the bottom of this page to support linux/mozilla users. The Human Head is now hosted by the Department of Art at Mississippi State University.

theory overview modeling theory approaches process setup NURBS polys/subds texturing animation resources features close-up reference/links gallery pics and movies copyright Andrew Camenisch | 2001


What and Who


The human head is unquestionably one of the most difficult objects to model, both because of the complexity in the shape of its features and because of its familiarity to the viewer and the resulting low tolerance for error. In addition, the face has a range and subtlety of expression that, to our human perception, is unmatched in the universe. Clearly, the status of the photoreal human head as the "holy grail" for CG artists is deserved.

Though the majority of the most powerful and engaging characters in animation history have been little more than circles and squares, all characters are simply an abstraction of reality. Abstracting someone else's abstraction is the root of cliche. An understanding of reality, then, is the foundation for new ideas, fresh perspectives and creative character design. In short, study from life! That is the approach this tutorial takes.

The "how" of modeling, however, is best shown with timelapse movies or picture tutorials. I have neither the time nor the interest at this time to create a step-by-step, "click here next" tutorial for modeling the head; neither would it contribute to the scores already available on the web and in books. More valuable, I think, is a discussion of theory and process with emphasis on the question "why."

Incidentally, a large portion of the information on this site is in text form, with illustrations to clarify when needed. As a constantly expanding resource, this site would benefit greatly from user feedback, since the meaning of words and phrases is easily confused and, from the author's perspective, difficult to control. This site is not currently anywhere near what it should be to merit its presumptuous / ambitious title; however, for now, I hope it serves well as a starting point for some users and an opportunity for further exploration and discovery for others.

A word about myself: I built this site while pursuing a Master of Fine Arts at Mississippi State University. The site represented a culmination (of sorts) of my personal research in organic modeling undertaken to support my thesis in portraiture and characterization. I graduated in December 2001 and currently live and work in New Zealand under contract with Weta Digital.



I invite all comments and criticism.


copyright Andrew Camenisch | 2001 Please respect my ownership and do not copy text or illustrations from this site without written consent. Thanks.

Parameterization: the Where and the How


"The most commonly overlooked aspect of a model is its topology."

There is more to modeling a head than capturing its exact appearance. The perfect shape can be invalidated by an imperfect structure. In my opinion, the single most important, but most commonly overlooked, aspect of modeling a 3D CG head (or any complex model built for animation) is its topology. Topology and geometry are different aspects of a model's parameterization. Whereas geometry designates the placement of vertices, topology refers to the structure of a model; it is concerned less with where vertices are than with how they are connected together. Topology affects the economy and the capability of the model. Economy covers the efficiency and effectiveness of the model: are there more polygons than absolutely necessary? The capability of the model involves the quality and range of movement it is able to perform. Ultimately, two factors should determine the flow of a model's parameterization: 1) the direction of the topographical (not to be confused with topological) details, and 2) the direction of the surface's motion when animated.
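The geometry/topology distinction can be made concrete with a small sketch (plain Python, independent of any 3D package; the data is invented for illustration): the same vertex positions wired together two different ways.

```python
# Geometry: where the vertices are. Topology: how they are connected.
# The same vertex positions can be wired into different surfaces.

vertices = [
    (0.0, 0.0, 0.0),  # 0
    (1.0, 0.0, 0.0),  # 1
    (1.0, 1.0, 0.0),  # 2
    (0.0, 1.0, 0.0),  # 3
]

# Topology A: one quad face -- a single flat patch.
faces_a = [(0, 1, 2, 3)]

# Topology B: two triangles covering the same region. Identical geometry,
# different topology -- and a different structure for deformation.
faces_b = [(0, 1, 2), (0, 2, 3)]

def edges(faces):
    """Collect the undirected edge set implied by a face list."""
    e = set()
    for f in faces:
        for i in range(len(f)):
            a, b = f[i], f[(i + 1) % len(f)]
            e.add((min(a, b), max(a, b)))
    return e

print(len(edges(faces_a)))  # 4 edges
print(len(edges(faces_b)))  # 5 edges: a diagonal edge 0-2 appears
```

Same geometry, different edge sets: the diagonal in topology B is exactly the kind of structural difference that changes how the surface creases when it deforms.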

Let's look at the mouth for example:

The characteristic shape of this region of the face is defined by the lips, which meet at the corners of the mouth to form a complete circle.

The parameterization therefore should follow the direction of this topography emanating from the center of the mouth as concentric circles.

Furthermore, from kissing to shouting to smiling, the direction of the motion of the mouth radiates from the center of the mouth, mandating a topology that flows out and away from the center of the mouth.
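That radial layout can be sketched in a few lines (plain Python; the ring counts and spacing are arbitrary assumptions, not values from this site):

```python
import math

def mouth_rings(n_rings=4, n_spokes=12, spacing=0.25):
    """Generate a radial point grid: concentric circles around a center,
    the layout recommended above for the mouth region."""
    rings = []
    for r in range(1, n_rings + 1):
        radius = r * spacing
        ring = [(radius * math.cos(2 * math.pi * s / n_spokes),
                 radius * math.sin(2 * math.pi * s / n_spokes))
                for s in range(n_spokes)]
        rings.append(ring)
    return rings

grid = mouth_rings()
print(len(grid), len(grid[0]))  # 4 rings of 12 points each
```

Connecting each point to its neighbors along the ring and to the corresponding points on adjacent rings yields all-quad faces that radiate from the center, matching the direction of both the topography and the motion.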

The resulting topology would look something like this:

Let's compare this to an improper topology that ignores the direction of surface shapes and the direction of motion.

Note that though this surface has a higher resolution than the first, it represents the mouth shape inadequately and awkwardly, particularly in its handling of the corners of the mouth.

Now take a look at the primary muscles of the human face.

You'll notice that the flow of their fibers corresponds roughly to the flow of the parameterization in the mouth region. This is an important observation. As you attempt to build more and more subtlety of expression and shape into your character's model you will want to conform your parameterization more and more to the underlying muscles of the face.



Parameterization: the Where and the How


Natural Lines

The previous section mentioned the correlation between underlying muscles and the proper topology of a model. This section introduces two anatomical "phenomena" of the human body that help visualize the influence of the muscles on the skin: wrinkles and Langer's lines.

Wrinkles
Wrinkles on skin are essentially the same as wrinkles on cloth: a bunching of fabric. The fabric draping the head, however, is typically firm and elastic, conforming tightly to the underlying fat, muscle and bone. Nevertheless, a large part of the expression and personality a particular head communicates is dependent on a network of temporary and permanent wrinkles.

Temporary wrinkles are those that come and go with particular facial movements. Some of these wrinkles are key to identifying facial expressions. The parameterization of a head model should plan for wrinkles that will be modeled into the blend (morph) shapes when setting up the head for animation.

For Example
The most prominent of wrinkles, the smile/shout wrinkle around the mouth, is commonly overlooked in the modeling of youthful heads, because the crease is typically not visible in the expressionless base shape and the artist simply overlooks the fact that it will be needed later when modeling the target shapes (check out most head modeling tutorials on the web). Thoughtlessly extending the concentric-circle parameterization of the mouth region also contributes to an improper structure for defining the mouth wrinkle. (I've been there and done that...)


Permanent wrinkles represent the history of a head's facial action, forming on the human face after a lifetime of facial movement (and time in the sun). As age increases, the elasticity of skin decreases. The skin becomes less and less able to spring back to its original position, increasing the surface area of the skin and eventually sagging and bunching on the face (more info). The lines formed by these wrinkles illustrate a nearly ideal topology for a CG head model.

I'm not sure who owns the copyright on this picture. If anyone happens to know, please email me.

Self-Portrait with Grey Felt Hat

Van Gogh's intuitive understanding of the natural flow of lines on the face possibly developed from observing the face's wrinkles of expression and age. The compatibility of Van Gogh's brushwork with Langer's Lines also leads one to wonder whether he had been influenced by illustrations of these so-called lines of tension.

Langer's Lines
A stab wound inflicted with an ice pick or a similar weapon with a conical blade will leave a slit in the skin, not a round puncture as might be expected (a bit more info on stab wounds and Langer's Lines). The direction of the slit varies between different areas of the body but remains fairly consistent from person to person. Langer's lines map the direction of slits across the body and are used in surgery to guide incisions: cuts along Langer's Lines heal better, leaving less scarring. Named after the Austrian anatomist Carl Ritter von E. Langer (1819-1887), Langer's Lines represent lines of tension within the skin. These lines tend to be oriented parallel to the direction the skin is pulled and are dependent on the direction of collagenous bundles (elastic connective tissue) in the reticular layer of the skin.

Though certainly not the final word on topology, Langer's lines provide several interesting ideas for the head and the rest of the body, as well as offer insights regarding the tension placed on skin in different regions.

Illustrations taken from Henry Gray's Anatomy of the Human Body. In public domain.



Parameterization: the Where and the How


Identifying and resolving problem areas

Since the factors that influence topology (the shape and the motion of the surface) are independent of each other, there are certain regions on the human head where they demand conflicting structures. One such region is the promontory of the cheekbone.

Bone primarily determines the surface shape of this region. The cheekbone creates a straight ridge that begins in the base of the skull behind the ear and runs seamlessly into the eye bone, which circumnavigates the eye. Presumably, the parameterization to define this shape should flow along the same line.

However, the motion of the skin in this region is manipulated directly and indirectly by several different muscles, which push and pull the skin in a variety of directions -- directions that don't necessarily flow along the topographical (again, not to be confused with topological) lines. Smiling pushes the skin up and back; kissing pulls it down toward the mouth; squinting pulls it up and in toward the eye. In the illustration, you can see that the movements just described run along lines that are diagonal to the cheekbone.

In summary, the surface's shape requires a horizontal/vertical grid, while the surface's motion demands a diagonal configuration. When you place a diagonal grid over a horizontal grid, you end up with something that is incompatible with NURBS and undesirable for Maya subd surfaces: 3- and 5-sided faces. Resolving conflicting parameterization usually entails beginning with the structure demanded by the motion and adjusting it as much as possible to conform to topographical concerns.
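A face list can be audited for such problem faces mechanically. A plain-Python sketch (the face data is invented for illustration, not a valid head mesh):

```python
from collections import Counter

def face_side_census(faces):
    """Count faces by number of sides; anything that isn't 4 is a
    candidate problem spot for NURBS conversion or Maya subds."""
    return Counter(len(f) for f in faces)

# Illustrative face list: mostly quads, plus the triangle and pentagon
# that tend to appear where a diagonal motion grid meets a
# horizontal/vertical shape grid.
faces = [
    (0, 1, 5, 4),
    (1, 2, 6, 5),
    (2, 3, 6),          # 3-sided face
    (4, 5, 9, 8),
    (5, 6, 10, 9, 8),   # 5-sided face
]

census = face_side_census(faces)
print(dict(census))           # {4: 3, 3: 1, 5: 1}
non_quads = [f for f in faces if len(f) != 4]
print(len(non_quads))         # 2 problem faces
```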

Examine the solution for the cheekbone topology in this wireframe. Also notice the solution for a similar problem on the nose, where the direction of surface shapes and the direction of the motion conflict. (Sneering pushes the skin above the wings of the nose up and forward, creating very characteristic wrinkles on the nose. This motion runs diagonal to the direction of the most prominent surface shape in that region: the nose itself.)


Suggested workflow:
Make several copies of a drawing/photograph of your character. On one picture, draw lines indicating the direction of the major shapes of the surface. On another picture, draw lines indicating the direction of the movement of the surface, using the facial muscles as reference. Copy the lines from both pictures onto a third picture and begin connecting the lines that seem to flow into each other to form what Bay Raitt refers to as Edge Loops. All lines must end up roughly parallel or perpendicular to the lines closest to them so that a grid of sorts can be constructed. When surface lines and motion lines don't overlap, aren't parallel and aren't perpendicular -- you've officially found a problem spot. Decide what is most important and resolve it the best you can (remember, with Maya subds, avoid creating faces with more or fewer sides than 4; but if necessary, almost always give preference to a five-sided face over a three-sided one. Refer to the Subdivision Modeling Resource Page maintained by Tams Varga for dealing with odd-numbered faces).
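The "roughly parallel or perpendicular" test above can be sketched as simple angle math (plain Python; the 20-degree tolerance is an arbitrary assumption, not a rule from this site):

```python
import math

def relationship(v1, v2, tol_deg=20.0):
    """Classify two direction lines as 'parallel', 'perpendicular', or
    'problem' based on the angle between them. Lines are unsigned
    directions, so the angle is folded into 0..90 degrees."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, abs(dot) / (n1 * n2)))))
    if ang <= tol_deg:
        return "parallel"
    if ang >= 90.0 - tol_deg:
        return "perpendicular"
    return "problem"

print(relationship((1, 0), (-1, 0.1)))  # parallel
print(relationship((1, 0), (0, 1)))     # perpendicular
print(relationship((1, 0), (1, 1)))     # problem (45 degrees)
```

A pair of surface and motion lines that lands in the "problem" category is exactly the problem spot the workflow asks you to hunt down.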


Modeling Paradigms Compared


Common approaches to modeling the head

There are several common approaches to modeling the human head. Following is a list of these methods with a short description of each:

NURBS single mesh: The head being spherical in shape, the NURBS sphere is a popular place to start. The sphere can be oriented along any axis: x) poles are located in the ears (I've only seen one head built this way, and I'm not sure where...); y) poles are placed on the top of the skull and around the neck (see Jeremy Birn's tutorial for an example); or z) poles are in the mouth and on the back of the skull, or in the mouth and the opening at the neck. [see example; note that the eyes are separate geometry] One big disadvantage of the single-mesh NURBS head is that the entire head has to share the same resolution, meaning the model might have a zillion isoparms running around it just to be able to define a decent nose or eye. It is a fast and easy way to model a head, however, and works fairly well for low-detail, highly abstracted characters.

single NURBS mesh in x, y and z orientation

NURBS patch model: Modeling the head by stitching together a number of NURBS patches allows for both more complexity in the shape and more regularity and efficiency in the parameterization. This system is fairly complex, however, and involves a lot of tedious tweaking at the seams of each patch to ensure smooth continuity with adjacent patches. The NURBS patched head is discussed in more detail in the process>NURBS chapter of this website.
multiple NURBS patches

Polygonal mesh: Polygonal head models are usually single meshes and can be built from scratch (beginning with a sphere or cube primitive) or constructed by converting NURBS surfaces. Polygons offer a much more left-brain approach to modeling than NURBS, being much more flexible in their parameterization. Polygonal head models are discussed more fully in the process>polys/subds chapter of this website.

single polygon mesh

Subdivision Surface: Subdivision surfaces are generated by converting a polygon mesh, offering the polygon head further refinement and infinite smoothness.

single subdivision surface
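The refinement subdivision provides can be sketched in miniature (plain Python; a naive midpoint split for illustration, not the smoothing scheme Maya actually uses):

```python
def subdivide_quad(quad_pts):
    """One naive subdivision step on a single quad: insert edge midpoints
    and a face point, producing four smaller quads. Real schemes (e.g.
    Catmull-Clark) also smooth the positions; this sketch only shows how
    the face count grows (x4 per step)."""
    (a, b, c, d) = quad_pts

    def mid(p, q):
        return tuple((pi + qi) / 2.0 for pi, qi in zip(p, q))

    ab, bc, cd, da = mid(a, b), mid(b, c), mid(c, d), mid(d, a)
    center = mid(ab, cd)
    return [(a, ab, center, da), (ab, b, bc, center),
            (center, bc, c, cd), (da, center, cd, d)]

quad = ((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0))
faces = [quad]
for _ in range(3):
    faces = [q for f in faces for q in subdivide_quad(f)]
print(len(faces))  # 64 quads after three steps
```

Every face stays a quad at every level, which is why starting from an all-quad base mesh matters so much for subdivision work.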


Modeling Paradigms Compared


The Pros and Cons

Each modeling paradigm has its pros and cons, and the smart modeler will know when to use each one. Following is a chart comparing some of the pros and cons of each paradigm. Though to my knowledge the chart is fairly software-independent, I did include some factors specific to Maya.

NURBS:

pros:
- automatic organization: all vertices are kept in consistent, quadrilateral relationships, allowing for built-in UVs, which makes applying a texture faster
- scalable resolution: the surface is defined by mathematical equations and is therefore infinitely smooth, allowing for infinite scaling
- small file size
- curves on surface: allowing for Paint Effects (Maya)
- fur support (Maya)

cons:
- limited complexity in the shape of the surface without resorting to multiple patches, in which case the following point applies...
- seams can be complicated and tedious
- very little control over texture stretching and placement

Polygons:

pros:
- single mesh: most models can be constructed as one solid mesh
- local detail: resolution can be added locally
- arbitrary topology: how the vertices are connected can easily be rearranged and edited
- import/export between other packages: poly file formats like .dxf and .obj are supported by practically every commercial 3D software
- texture control: power to manipulate the placement of each individual UV
- fur support (Maya)

cons:
- large file size
- fixed resolution
- texture control: the control polys offer in terms of setting up UVs comes at the expense of ease; applying a texture can be a fairly involved process
- no curves on surface: and hence no Paint Effects support (Maya)

Subdivision Surfaces:

pros:
- (being based on polys, all the pros of polys except fur support apply to subds)
- hierarchical modeling: offers several levels of control: coarse, for general manipulation of the surface, and refined, for specific manipulation (Maya)
- scalable resolution

cons:
- no fur or Paint Effects support (Maya)
- no soft body dynamics (Maya)


Modeling Paradigms Compared


The Recommended Workflow: NURBS to poly to subd

This site recommends constructing the head with NURBS patches, converting to polygons and then to subdivision surfaces to create the final model. Beginning and ending with polys/subds, however, is certainly the chosen route for many experienced modelers; to a large degree, determining a workflow is a matter of personal preference. I might caution, however, that very often student head models that are begun and completed in polygons end up a garbled mess, since the actual construction of a poly mesh demands relatively little discipline and forethought. The head is such a complex shape that the structure must be under control at all times, and the organization built into NURBS surfaces helps tremendously. Other disadvantages of this proposed process include: 1) extra time is spent on converting to polys and merging separate patches (time saved elsewhere, in my opinion); 2) NURBS control vertices do not lie on the surface, sometimes resulting in a tangled web of hulls, especially around the lips, making vertex selection in that area difficult.

"The convenience and global power of NURBS modeling tools and the organization inherent to NURBS topology suggest that NURBS are the ideal modeling paradigm for setting up the initial structure of the head model."

However, there are several compelling reasons for starting with NURBS:
1) NURBS automatically maintain a quadrilateral relationship between surface points, ensuring proper structure for later conversion to subdivision surfaces (quad faces are preferred by Maya's subds and are more easily organized); 2) NURBS allow for hull selection and pick walking (using the arrow keys to move to adjacent CVs; very useful when the serial relationship between two points is not clear); 3) NURBS can be quickly rebuilt in the U and V directions separately (offering versatility while allowing for a quick roughing out of the shape and the progressive refinement of that shape). In summary, the convenience and global power of NURBS modeling tools and the organization inherent to NURBS topology suggest that NURBS are the ideal modeling paradigm for setting up the initial structure of the head model.


Preparing to Model
Downloading reference material or creating your own

The only real purpose of importing reference photos or drawings into the front and side orthographic views is to help the modeler maintain proportions, allowing him to move at a quicker pace. However, if these drawings and photos are poorly done, they can cause much frustration for the modeler and sometimes produce a badly distorted model.

Some common sense about taking reference photos:

- Stand as far away from the subject as your lens will allow. This will flatten the head, eliminating much of the perspective in the photo and preparing it for its destination: the orthographic view. A lot of distortion in a model can result from using a perspective-filled photo as reference in an orthographic view!
- Use lighting that is clear but also accentuates the forms of the face. Make sure your light setup works for both the front and profile views of the head (see next point). Don't use a flash!
- Don't rotate the subject in front of the camera; instead, orbit the camera around the subject. The subject should remain stationary. This will ensure that shadows on the face are consistent between the front and profile views, which makes identifying and aligning features between the two views much easier.
- Be aware of the colors in your photo. Remember you will be modeling on top of these photos in wireframe view. What color is the wireframe? When selected? It may be helpful to process the color in Photoshop (decrease saturation/value, colorize, etc.) if the modeling components are getting lost in the photo.
- Make sure you get a TRUE profile. A slight turn of the head can throw off your proportions.
- Make sure the up and down tilt of the head is consistent between front and profile views. (The forward tilt of the head in the Natalie Portman "front view" below required her profile to be rotated slightly.)
- You may find it helpful to draw points or a grid on the subject's face to streamline the surface construction process later on the computer.
- This may be obvious, but I find it easier to align the two photos in Photoshop and crop them to identical sizes before importing them into the 3D package, instead of aligning them in the 3D package.
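The advice to stand far back can be quantified with a little arithmetic (plain Python; the 0.12 m nose-to-ear depth is an assumed figure for illustration): the closer the camera, the more the front of the face is magnified relative to the back.

```python
def apparent_size_ratio(camera_dist, depth=0.12):
    """Ratio between the projected size of a feature at the front of the
    head (e.g. the nose) and one `depth` meters further back (the ears),
    under simple pinhole projection. 1.0 would be a true orthographic
    view; larger values mean more perspective distortion."""
    return (camera_dist + depth) / camera_dist

print(round(apparent_size_ratio(0.5), 3))  # 1.24 -- strong distortion up close
print(round(apparent_size_ratio(3.0), 3))  # 1.04 -- nearly orthographic
```

At half a meter the nose is rendered roughly 24% larger relative to the ears; at three meters the error drops to about 4%, which is why a long lens from far away makes a much better orthographic reference.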

For your convenience, here are photos to be used as onscreen reference that I either took myself or found on the web.

Notice in the front view that half the face was mirrored in Photoshop. One reason for doing this is to make sure the face is oriented at a 90 degree angle and not leaning over slightly. The lighting is identical in both shots so shadows and highlights are found at the exact same spots on both faces.

Notice the profile view is NOT a true profile: the head is slightly angled away. Students who have used these images have had to fight against placing the eyes on their models too far forward. Also, since these photos were taken under different lighting conditions, locating the same point on her face in both photos can be difficult. Images from a Natalie Portman fan site; copyright owner unknown to the author -- will remove images if requested.

These images are from the research being done in face cloning at MIRALab. Though I am not yet permitted to display the hi-res versions of the photos, they remain an example of good reference. If you look at the vegetation in the background and the lighting on the man's face, you'll notice the subject stood still as the camera moved around him.

The Anatomy Resource References for 3D artists has a few more profile/frontal photos.


Patchwork Strategies
Comparing NURBS patch designs

For a brief history and slightly technical overview of NURBS surfaces click here.
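Not reproduced here, but worth sketching: at the core of every NURBS surface are B-spline basis functions, built by the Cox-de Boor recursion. A minimal, unoptimized plain-Python sketch (the knot vector is an assumed example, a clamped cubic over [0, 3]):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function
    of order k (degree k-1) at parameter t, for the given knot vector.
    A NURBS surface is a weighted tensor product of such functions in
    the U and V directions."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] != knots[i]:
        left = (t - knots[i]) / (knots[i + k - 1] - knots[i]) * \
               bspline_basis(i, k - 1, t, knots)
    right = 0.0
    if knots[i + k] != knots[i + 1]:
        right = (knots[i + k] - t) / (knots[i + k] - knots[i + 1]) * \
                bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Inside the knot range the basis functions sum to 1 (partition of
# unity) -- the property that keeps the surface inside the hull of its
# control vertices.
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]  # cubic (order 4), clamped
t = 1.5
total = sum(bspline_basis(i, 4, t, knots) for i in range(len(knots) - 4))
print(round(total, 6))  # 1.0
```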

There are a number of different ways the NURBS patchwork can be designed. Three guiding principles to keep in mind as we explore these various schemes are: 1) the number of patches should be kept to the absolute minimum; this not only minimizes the number of seams on the model (reducing compute time and sustaining the modeler's emotional health) but also simplifies texturing. 2) The arrangement of the patches should observe the proper flow of topology as much as possible; frankly, it is impossible to build a topology with NURBS that aligns perfectly to the underlying muscles, but some patch schemes do a better job of approximating it than others. 3) Seams and junctures should be placed in areas that are broad in shape and motion; a seam on the forehead or cheek, for instance, receives less stress than one around the wing of the nose. (For more information about these principles and for other things to consider, read Tom Capizzi's page on network strategies found in his Head Surfacing Tutorial.) Comparing and evaluating various patch schemes that have been developed will serve to illustrate these principles.

The Bingo Setup. Developed by AliasWavefront for the short film Bingo and featured in AW's training videos, this model uses 10 patches to define the face (9, depending on the eye setup). The two most significant problems with this setup, in my opinion, are: 1) the isoparms flowing from the top of the wing of the nose flow back and around the cheek, terminating in the jaw. Consequently, the crease in the cheek that occurs when sneering or smiling can only be faked. Instead, the isoparms should follow the surface detail, whose shape actually flows from the top of the nose lobe, down around the corner of the mouth and into the chin. 2) The juncture point just to the right of the eye is placed at a point on the face that receives a lot of deformation, which can cause some tangency problems. (Model available for free download at AliasWavefront.)

The Arnold Setup. This is the head of an entire NURBS body built by Jeffrey Wilson early in his career, and it is an amazing example of seamless stitching. The head, however, has the same faults as the Bingo setup while also using more patches to define the face region: a total of 11. The high resolution of the model, in my opinion, is not justified, except possibly in defining the shape of the nose. One advantage of having a separate patch for the nose, though, is the ability to increase resolution locally without affecting surrounding patches; this model does not take advantage of that. In my opinion, the geometry of the nose on a NURBS model is in most cases best suggested, receiving further refinement from color and bump texture maps. (Model by Jeffrey Ian Wilson, available for free download at Zoorender.)

My Setup. After experimenting with the Bingo setup and applying what I'd learned about stitching and patchwork strategies, I designed a setup that I believe to be the best for a medium-resolution human head. The face is composed of 6 patches whose topologies flow in the direction of the major muscles of the face. The topology is far from perfect, obviously, since NURBS technology is limited; however, the model is built to naturally receive (without forcing and faking) the major deformations of facial action. Except for the juncture on the eyebrow, all seams and junctures have been placed in areas that receive either very little deformation or else very broad deformation. Click here for another view of this model showing the strategy for the entire head.

A Setup to Avoid. This final head is a design used by a company to demonstrate software that creates NURBS surfaces from polygonal meshes. I found it curious due to the number of patches used to construct the face (around 30) and think it's a good example of how patchwork design can get out of hand if the three aforementioned principles are not kept in mind. :)

Designing the patchwork for a NURBS model is nothing more than a fun puzzle. The following section discusses some of the rules of the game as it explains the strategy for modeling a realistic NURBS ear.


theory overview modeling theory approaches process setup NURBS polys/subds texturing animation resources features close-up reference/links gallery pics and movies

I invite all comments and criticism.


copyright Andrew Camenisch | 2001 Please respect my ownership and do not copy text or illustrations from this site without written consent. Thanks.

Patchwork Strategies
The NURBS ear and the rules of stitching


The ear is like a face within a face. The complexity, variation and uniqueness of each ear is nothing short of amazing. Unfortunately, many CG heads fail at depicting a believable, realistic ear. I think the main reasons for this are simply a lack of good reference (it's very difficult to examine one's own ears in a mirror) and a lack of priority (rarely do we notice ears; is it worth the time to do it right?). The ear modeled in NURBS is further complicated by that paradigm's inherent limitations in depicting complex shapes. Stitching multiple patches, however, offers a solid way of depicting human ears. Before discussing patchwork strategies for the ear, we need to go over a few guidelines for stitching. According to the recommended workflow of this site, there is no point in creating a refined seamless NURBS head, since all seams will be dealt with after the patches are converted to polygons. However, stitching can be helpful in general NURBS modeling and is still considered a commercially valuable skill. Following is a list of rules governing master and slave stitching.

Never use the first or second-to-last CVs to define detail on a patch, as these CVs are used to achieve tangency between patches

Place breaks between patches at areas that receive the least deformation; i.e., DO NOT design patches to break at natural creases in the face (place seams at flat areas if possible)

Work out which edges will be masters or slaves before beginning modeling. To determine which is the slave and which the master use the following rules/guidelines: 1) any edge that is to be stitched to two edges must be a master, or 2) if rule #1 doesn't apply, choose as master the patch that represents the point(s) of insertion of muscles whose effect is shared by adjacent patches. In other words, enslave the patches that move very little to the ones that move a lot. The insertion of a muscle is where the muscle connects to the skin. The origin of a muscle is where the muscle is grounded, usually connected to bone. When the muscle contracts it pulls the insertion toward the origin. For example, the muscle that pulls the brow up in surprise is grounded in the upper forehead but is inserted at the brow. The brow consequently receives the primary deformation in the contracting of that muscle. Therefore the forehead should be enslaved to the brow.

The order of stitching is very important and should be worked out on paper before proceeding: 1) always stitch patches first which are masters at every edge 2) for all other patches, observe this principle: always stitch all the enslaved edges of a patch before stitching the master edge(s) of that patch. This will ensure that all corners meet at the proper place without overlapping or dragging.
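The ordering rule above can be sketched as a small dependency sort. This is an illustrative sketch, not part of any Maya API: each stitch operation is a hypothetical (slave patch, master patch) pair, and the only constraint is that every edge where a patch is enslaved gets stitched before any edge where that same patch is the master.

```python
from collections import defaultdict, deque

def stitch_order(stitches):
    """Order stitch operations so that, for every patch, all edges where the
    patch is enslaved are stitched before any edge where it is a master.
    `stitches` is a list of (slave_patch, master_patch) pairs; returns the
    list reordered, or raises ValueError if the master/slave assignment is
    cyclic (i.e. no valid stitching order exists)."""
    # op j must wait for op i when op i's slave patch is op j's master patch
    deps = defaultdict(list)            # i -> ops that depend on op i
    indegree = [0] * len(stitches)
    for i, (slave_i, _) in enumerate(stitches):
        for j, (_, master_j) in enumerate(stitches):
            if i != j and slave_i == master_j:
                deps[i].append(j)
                indegree[j] += 1
    ready = deque(i for i, d in enumerate(indegree) if d == 0)
    order = []
    while ready:
        i = ready.popleft()
        order.append(stitches[i])
        for j in deps[i]:
            indegree[j] -= 1
            if indegree[j] == 0:
                ready.append(j)
    if len(order) != len(stitches):
        raise ValueError("cyclic master/slave assignment -- no valid order")
    return order

# Hypothetical forehead example: the brow is master everywhere, the forehead
# is enslaved to the brow, and the crown is enslaved to the forehead.
ops = [("crown", "forehead"), ("forehead", "brow")]
print(stitch_order(ops))   # the forehead->brow stitch must come first
```

Patches that are masters at every edge never appear as a slave, so their stitches naturally have no prerequisites and come out first, matching rule 1.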

Align isoparms of patches to achieve a more predictable and secure stitch and to aid congruency in texture maps.

When fixing problem areas (tangled CVs, etc.) adjust the border and tangent CVs on the master surface first. The tangent CVs kind of see-saw around the border CVs.

Now back to the ear:


Ear models have been attempted with single meshes but necessarily fail due to the interlocking flow of the prominent topographical detail of the ear. Notice that the green and red lines pivot around different points of the ear as well as flow into each other.

Whenever the topography of a surface flows in multiple directions, multiple patches are required to adequately define the shape. Assigning one surface to each line is the first step to designing a patchwork for the ear.

Since there is no way to stitch the bottom part of these two patches with the current setup, the middle patch will be broken into two.

Now each edge at the circled juncture has a corresponding edge to which it can be stitched. However, the resolution that the inner hook of the outer surface requires (both to define the shape of that area and to stitch with the two inner patches) is too high to be spread across the entire surface. Therefore the outer surface will also be split allowing for a concentration of resolution where it is needed.

Finally a master/slave system is worked out based on the above rules and guidelines. Notice that one edge of the inner patch is not stitched at all but instead is hidden under the cleft of the outer patch.

Once the design is completed, the ear can be modeled.


Notice that this ear setup can be either blended or stitched to a NURBS head.

Below are several MEL scripts passed along to me by Jeffrey Wilson that are useful for NURBS stitching in Maya:

tangentCVWin.mel by JS
hullTangencyWin.mel by Jeffrey Wilson
mAlignCvs.mel by Matthew Gidney
mMidPointCvs.mel by Matthew Gidney
mPlanarCvs.mel by Matthew Gidney
pickVertexC.mel by Matthew Gidney
pickvertex.mel by Becky Chow


The Relationship
How Polygons and Subdivision Surfaces "Get Along"
Polygonal modeling is pretty straightforward, though it requires a lot of discipline to keep organized and efficient. How polygons work in conjunction with subdivision surfaces, however, is a bit more involved. Most significantly, there are three things to be aware of in the way a subdivision surface works with its polygonal control mesh -- the poly proxy.

1.

The topology of a poly proxy influences the final shape of the subdivision surface. Placing your mouse over the illustration above will reveal that all three blobs have identical geometry (same number of vertices with exactly the same placement) but unique topologies (vertices are connected in different ways). For the most part, predictable smoothing during animation will require maintaining quad faces on the poly proxy. This thread from the Digital Sculpting Forum ponders the potential of five-sided faces.

2.

Subdivision surfaces expect regular spacing between edges. For example, the shapes of the two objects above appear identical. However, placing your mouse over the image reveals that the extra edge around the middle of the second shape creates a bump on the subdivision surface. John Feather explains why, as well as elaborating on problem #1, in this thread from the Mirai Bulletin Board. The practical implication of this problem is that fine wrinkles on the face that appear and disappear cannot be built into the model unless the surrounding resolution is equally high.
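Since predictable smoothing depends on keeping the poly proxy all-quads, a quick audit can catch stray triangles and n-gons before they cause trouble. This is an illustrative sketch (the face-list representation and function name are assumptions, not a Maya API):

```python
def non_quad_faces(faces):
    """Return the indices of faces that are not quads.  `faces` is a list of
    vertex-index tuples describing the poly proxy; the triangles and n-gons
    it flags are the faces most likely to smooth unpredictably under a
    subdivision surface."""
    return [i for i, face in enumerate(faces) if len(face) != 4]

# A cube face list with one face accidentally left as a triangle:
faces = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
         (2, 3, 7, 6), (0, 3, 7), (1, 2, 6, 5)]
print(non_quad_faces(faces))   # flags the triangle at index 4
```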

3.
This final point is specific to the hierarchical system of Maya's subdivision surfaces. Maya allows for manipulation of subdivision surfaces at various levels of detail, enabling general and local control over the geometry of the model. I personally have not found a use for this technology in modeling the human head. The main reason is that modeling done at different levels of detail can react unpredictably if the topology is later altered in poly proxy mode. I prefer modeling detail directly into the polygon mesh and then using clusters and other deformers when general control is needed.

Maya Tip: Because subdivision surfaces can slow down interactivity while modeling, I set up my workspace with two perspective modeling views: one showing only the poly proxy and the other displaying only the subd surface (use the Show menu of each view panel). To view the poly control mesh in shaded view, apply a material to it, but be sure to turn off its "Primary Visibility", etc. attributes in the Render Stats section of its Attribute Editor so that it won't render. Most modeling is then accomplished while the poly view is maximized; a quick tap of the space bar allows me to check progress on the subd surface.


Attacking Stretching
Mapping UVs
Stretching is a common problem in texturing complex 3D models and can be particularly frustrating on the human head. There are essentially two fronts where stretching can be attacked: 1) setting up UVs and 2) painting the texture. Part 1 of "Attacking Stretching" looks at mapping UVs; Part 2 at tweaking UVs; and Part 3 at painting textures with corrective distortion. Below is a close-up by Michael Koch depicting very nice texture mapping around a potentially tricky area of the face.

part 1

Michael Koch

A UV is a mapping coordinate that determines the relationship between a pixel of a texture and its relative position on a surface. Unlike NURBS, which have "built-in", uneditable UVs, polygonal surfaces offer the modeler control over the UV setup.

Problems and Solutions


Generating and setting up mapping coordinates for the polygonal head model can be very tedious, given the caverns of the nostrils, mouth, and ears and the mountainous topography of the nose, brow, and ears. Furthermore, the head must contend with the problem of mapping a 2D image to a 3D "spherical" object, which is a puzzle that has plagued cartographers ever since the debunking of the flat earth theory. The Great Globe Gallery offers a look at various mapping solutions. The method most commonly applied to mapping the head is a modified Mercator Projection. The Mercator map offers several advantages for the 3D artist: 1) it fits into the rectangular shape required by image file formats; 2) it is whole -- no divisions in mapping coordinates, minimizing seams (see Lundgren's multi-planar solution for comparison); and 3) it is conformal. Conformal mapping preserves the angles of the features. For example, if you paint an eyebrow, scar, wrinkle, etc. at a certain angle on the texture, the feature will retain this angle once mapped onto the object. (For comparison, imagine the variation in the actual route delineated if a straight line were drawn connecting L.A. and Rio de Janeiro on the following maps: Gnomonic, Globe, and Mercator.)

Methods
Spherical, cylindrical, cubic, and planar mapping are different approaches to the same goal: generating mapping coordinates for a surface. Determining which to use in a given situation depends on which will generate a map that is closest to the final goal, requiring the least amount of tweaking. Spherical mapping, in my opinion, doesn't deal with the caverns and mountains of the head as predictably as cylindrical mapping does and therefore requires more adjustment to approximate a modified Mercator projection. Cylindrical mapping, though, doesn't generate proportional UVs in some areas; for instance, compressing the top of the skull.

spherical

cylindrical

Simply mapping the model, however, generates a UV map with many overlapping UVs in the mouth, ears, and nose regions. As a solution, the Maya Rendering Courseware suggests generating UVs for a human head model using the following process: 1) duplicate the head; 2) average vertices on the duplicate model; 3) apply a spherical wrap; 4) transfer the UV set from the duplicate to the original. Averaging vertices is a way of "smoothing out" the model, flattening raised areas and raising sunken areas by averaging the distance between vertices. This, of course, makes the head much more ball-like and reduces the amount of overlapping in the critical regions.

spherical

cylindrical

There are a few problems, however, with this general "smoothing out." Averaging vertices modifies the relative placement of vertices, distributing them more evenly across the model. It is important, as we'll see later, that the relative size and shape of each face of the poly mesh remain consistent between the UV set and the actual model--as much as possible. A solution, therefore, is to use averaging vertices selectively and locally on the ear, nose, mouth, brow, and chin/neck regions.
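The selective averaging described above can be sketched as a local Laplacian smooth. This is a hypothetical stand-in for Maya's Average Vertices action, not its actual implementation; the dictionary-based mesh representation is an assumption for illustration.

```python
def average_vertices(points, neighbors, region, iterations=1):
    """Selectively 'average vertices': move each vertex in `region` halfway
    toward the mean of its connected neighbors, leaving the rest of the mesh
    untouched.  `points` maps vertex id -> (x, y, z); `neighbors` maps each
    vertex id in `region` to the ids of its connected vertices."""
    pts = dict(points)
    for _ in range(iterations):
        new = dict(pts)
        for v in region:
            nbrs = [pts[n] for n in neighbors[v]]
            mean = tuple(sum(c) / len(nbrs) for c in zip(*nbrs))
            # pull the vertex halfway toward the neighbor average
            new[v] = tuple((a + b) / 2 for a, b in zip(pts[v], mean))
        pts = new
    return pts

# A raised vertex (think nose tip) between two flat neighbors gets flattened,
# while everything outside the region stays put:
pts = average_vertices({0: (0.0, 0.0, 0.0), 1: (1.0, 1.0, 0.0), 2: (2.0, 0.0, 0.0)},
                       {1: [0, 2]}, region=[1])
print(pts[1])   # the bump sinks toward its neighbors
```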

spherical

cylindrical

Maya offers an iterative action called "relax UVs" that performs the same smoothing effect as "average vertices", but on UVs. Mapping the original head model and then applying "relax UVs" on the mouth, nose, ears, etc. can result in a UV map identical to one generated by the "duplicate/average/transfer" method. In conclusion, whether you map the raw model and relax UVs, or you duplicate the model, average vertices, map it and then transfer the UVs--it's completely up to you. : )


Attacking Stretching
Tweaking UVs

part 2

The best UV projection shape would be one that assumed the exact form of the model (for similar ideas see YouMap and this mel script: sds ezUV v1.1). But for a surface as subtly complex as the human head, there is simply no way of getting around the sometimes tedious task of repositioning -- tweaking, if you will -- the mapping coordinates of the model to reduce stretching of the texture.

To visualize the stretching, apply either a checker texture or a number pattern (Util-Mark 6) to the surface. The regularity and uniformity of these textures make any distortion readily noticeable. After mapping one of these textures to the model, tweak the UVs until: 1) all mapped checkers/squares are approximately the same size on the model, and 2) all mapped checkers/squares are, for the most part, still squarish on the model (no rectangles or diamonds).


Achieving this uniformity and squarishness in the texture will require the individual repositioning of certain UVs. Set up the software's interface so that the model and the UV map are side by side. Referring to the wireframe of the model, adjust the position of the UVs based on the following two principles: 1) the relative size of each polygonal face must be maintained. If the vertices on the model define a face that is twice the size of another, the UVs at those vertices should demonstrate that same relative size. Notice below how the texture on the model straightens out when the UVs adopt a placement that mimics the relative distance between vertices on the model.

2) the basic shape of each polygonal face must be maintained. If the vertices on the model define a face that is square, the UVs at those vertices should also define a square. Notice below that the texture straightens out when the shapes the UVs define mimic those defined by the model's vertices.
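The first principle can be checked numerically rather than by eye: compare each face's share of total UV area against its share of total 3D surface area. This is an illustrative sketch over triangles (the triangle-pair input format and function names are assumptions); ratios near 1.0 mean the checkers stay the same size, and outliers mark stretched faces.

```python
import math

def face_area_3d(pts):
    """Area of a triangle in 3D via the cross product."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = pts
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    cross = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    return 0.5 * math.sqrt(sum(c * c for c in cross))

def face_area_uv(uvs):
    """Unsigned area of a triangle in UV space."""
    (au, av), (bu, bv), (cu, cv) = uvs
    return abs((bu - au) * (cv - av) - (cu - au) * (bv - av)) / 2

def stretch_ratios(tris):
    """For each (3d_points, uv_coords) triangle pair, the ratio of its share
    of total UV area to its share of total surface area.  1.0 = no size
    distortion; far from 1.0 = the checker squares change size there."""
    a3 = [face_area_3d(p) for p, _ in tris]
    auv = [face_area_uv(u) for _, u in tris]
    total3, totaluv = sum(a3), sum(auv)
    return [(uv / totaluv) / (s3 / total3) for uv, s3 in zip(auv, a3)]
```

A usage sketch: two triangles whose UV areas are proportional to their 3D areas give ratios of exactly 1.0, so `stretch_ratios` reports no distortion.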


Though some stretching will remain, a lot of it can be hidden by moving it to: 1) relatively insignificant areas. Obviously, the face is the main concern. If the head has hair, stretching beyond the hairline will work fine. Behind the ear, inside the mouth, under the jaw, etc. are regions that are typically in shadow or simply not in one's normal line of sight. 2) relatively smooth areas. Any stretching in a solid-color texture is invisible. Similarly, stretching in regions of the head with very little bump depth or color variation is much less noticeable. To my knowledge, the underside of the nose, the eyelid, and possibly the underside of the brow are the smoothest regions on the face in terms of color and bump textures.

Maya offers a fairly powerful toolset for editing UVs. The following tutorials show these tools in action: Baskin's Subd, Padron's Ugly, UnderTow's Lowpoly, and Kapp's Chicken.


Attacking Stretching
Painting textures with corrective distortion

part 3

Left head is textured procedurally. Right head is textured with an image map based on the procedural texture to the left.

Another way to deal with stretching textures on a head model is to use procedural textures, converting them to file textures and compositing them in an image editor that supports layering and masking, such as Photoshop. Converting to a file texture analyzes the model, its UV map, and the projection of the procedural texture, and generates a 2D image that maintains the appearance of the texture on the model. To maintain the appearance, the 2D image is generated with corrective distortion. Look at the following example: in Maya, the 3D procedural texture, leather, was applied to a polygonal sphere. Notice that the texture is uniform and shows no pinching at the pole of the sphere.

Converting to a file texture generated the following 2D image. Notice the stretching of the texture near the poles to accommodate the pinching of the UVs in those areas. This is corrective distortion.

Mapping this 2D texture back onto the sphere (and turning off texture filtering) resulted in an image identical to the one above featuring the procedural texture.
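For the sphere case, the convert-to-file-texture step can be sketched directly, because the spherical mapping is analytically invertible: for every texel (u, v) we find the 3D point it lands on and evaluate the procedural there, so the 2D image carries the corrective distortion automatically (texels near the poles cover tiny patches of surface but full rows of the image). This is a conceptual sketch, not Maya's actual baking code; the procedural function here is a made-up stand-in.

```python
import math

def bake_sphere_texture(width, height, procedural):
    """Bake a 3D procedural into a 2D image for a spherically UV-mapped unit
    sphere by inverting the spherical mapping at each texel."""
    image = []
    for j in range(height):
        row = []
        v = (j + 0.5) / height
        phi = v * math.pi                  # polar angle from the pole
        for i in range(width):
            u = (i + 0.5) / width
            theta = u * 2 * math.pi        # angle around the vertical axis
            x = math.sin(phi) * math.cos(theta)
            y = math.cos(phi)
            z = math.sin(phi) * math.sin(theta)
            row.append(procedural(x, y, z))
        image.append(row)
    return image

# A stand-in "procedural": banding by height, vaguely marble-like.  Since the
# pattern varies only with height, every row of the baked image is constant.
img = bake_sphere_texture(8, 4, lambda x, y, z: round(0.5 + 0.5 * math.sin(6 * y), 3))
```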

The image at the top of the page shows this process transferred to skin texture. The head on the left is textured procedurally (layering granites and leathers, etc. as pimples, freckles, etc.). The head on the right is mapped with images based on 2D conversions of the procedural textures.

The Process
Set up 3D procedural textures that fairly convincingly depict different aspects of the skin surface: pimples, moles, glyphics, discolorations, stubble, freckles, dry skin, etc. Convert each of these to an image file. Notice the corrective distortion in the examples below (based on this imperfect UV map).

freckles

pimples

glyphics

light stubble

Paste these images into a layered image in Photoshop and proceed to mask out each layer where needed (for instance, mask out most of the "freckles" layer around the neck, forehead, and chin regions, localizing freckles to the nose and cheeks). After color correction and additional painting, a decent texture map with built-in corrective distortion is created.

This process is a relatively easy way of creating good skin textures that deal with stretching on the model. However, for high-end close-up work there is no avoiding significant hand painting and possibly the limited use of photography. Nevertheless, 2D images converted from 3D procedural textures can serve, if nothing else, as reference by identifying stretching and proposing corrective distortion.

Achieving the material qualities of skin is one of the most difficult aspects of human head texturing. The translucency, the subsurface light-scattering properties, and the softening effect of the carpet of "peachfuzz" covering the skin are difficult qualities to achieve using conventional shaders. Tweaking the shader for the unique lighting conditions of different shots may be the most practical solution to achieving a consistent fleshy look. One of the most useful tools available for an artist to study the properties of skin (by viewing it under various lighting conditions) is the Facial Reflectance Field Demo. Excellent reference for lighting setup, too.
Be sure to read Steven Stahlberg's comments on human skin and shaders in chapters 17 - 19 of his head/face tutorial (info also found on his site). Alias|Wavefront's skin shader plugin for Maya offers further information. Peter Levius is doing some promising work using this plugin. Seryong Kim has done some interesting stuff with Mark Davies' raydiffuse plugin for Maya.


Modeling Blendshapes
A few things to consider

Work illustrating the fleshiness so often missing in facial morph targets.

Mark Piretti

David Maas

The real test of a head modeler's skill, in my opinion, is in the setting up of the blendshapes. It reveals not only problems in the structure of his model but also the level of his understanding of the structure of the head. The same intuitive familiarity we all have with the shape of the facial features we also have with their motion, qualifying each of us to utter the "something's-just-not-quite-right" critique when something is off. And yet our visual understanding of what really happens in facial action is about as complex as the cartoons that grace our Sunday newspapers -- literally. The exaggeration in comics and the depiction of the same head in various poses from panel to panel is probably the most influential education the artist receives on the topic of facial expressions.

Though comics artists are masters of abstracting facial poses to a few expressive lines, their drawings focus on the major shapes created in a pose while often consciously editing out what is going on around them. While a 3D modeler must also prioritize the depiction of the major shapes, he must convincingly deal with the stretching, creasing, bulging, etc. that the rest of the face undergoes to accommodate these shapes. That is where the average artist's knowledge breaks down, leaving him relying on cliché and imagination.

(Read Scott McCloud's Understanding Comics for a compelling argument for the power of "cartoony" characters; i.e., it's not just kids' stuff.)

Two things commonly overlooked in modeling blendshapes:

1) the effect of muscles on surrounding skin
2) the underlying bone; skin should glide over a solid substructure

Examples:
The eyebrows should glide over the eyebone as muscles pull them up in surprise. Notice the downward pull on the hairline and compare this to Square's demonstration of the forehead wrinkle setup in Final Fantasy (Production>CG Animation>8).

Sometimes, however, it's the other way around. Notice in this QTVR how the skin does not cling to the jaw as the mouth opens but instead the jaw passes through the skin. A commonly overlooked action. Also notice the reach of this motion, affecting skin as high as the cheekbones and as low as the Adam's apple.

Guidelines for modeling blendshapes:

1) Analyze facial expressions in motion; look for a general line of motion -- it is not sufficient to use static images for reference when modeling blendshapes. Every point on the skin of a real head follows a certain path and covers a definite distance between the neutral pose and an expressive one. Each vertex of your head model should seek to correlate as closely as possible to the movement of the point on the head that it represents. You'll find, after staring at a point on your own face as you make an expression over and over again, that the movement of that point can be abstracted into a line. What is the angle of the line? Does it curve in or out? How much? Also, be sure to watch the movement of your face in the mirror from different angles to determine the line of movement in each axis (this is why even video reference is inadequate). I guarantee your face will get a workout if you don't cut corners on analytical observation.
Tip: when looking in the mirror, make use of pimples, moles and pockmarks to analyze the motion of skin. The messier the face, the better. :)

Case study: Smiling, one of the broadest movements of the face, pulls the muzzle out, up, and back, tucking it up under the cheeks, which rise slightly in the y axis but also push out in the z and x. Abstracted, the line of motion is like a sweep that pivots around the middle of the nose (notice how the movement of the skin fades out as it approaches the nose). Sorry for the text; illustration coming soon...

2) Observe the physical properties of skin as a flexible, elastic matrix that is sculpted with fingers of muscle. If a bulge grows somewhere, there must be a depression somewhere else -- pulling tightens the skin. The stretch of facial skin does not need to follow the laws of squash and stretch for animation; that is, the volume needn't remain constant. (Fill your cheeks with air: you've just increased the volume of your oral cavity.) Obviously skin is elastic. However, note that: 1) common expressions actually put very little stress on the skin's elasticity, and 2) absolutely no facial action takes skin to the limits of its elasticity. (Open your mouth as far as possible. You will feel stress at the corners of your mouth, but use your finger to lightly poke them. You'll notice there is elasticity to spare.) These two observations have the following practical implications: 1) Try to maintain the relative distance between vertices as much as possible as you form the expressive pose. It may help to think of facial expressions (and blendshapes) as simply repositioning the skin surface rather than stretching it. (For instance, when you fold your lips into your mouth, the bottom of the nose is pulled down quite a bit. However, very little stretching is taking place; the nose accommodates this simple repositioning of the skin by sinking in the tip while pushing out the septum.) 2) Regardless of the facial pose, the skin should always appear flexible. Conveying the elasticity and suppleness of skin is possibly the greatest challenge in modeling good blendshapes. For example, some head models mysteriously grow extra skin around the mouth when the jaw is opened instead of recognizing the "unfolding" of skin at the corner of the mouth that is bunched up when the mouth is closed.

3) Create blendshapes of facial poses at their most extreme (don't just open the mouth . . . OPEN THE MOUTH!). In some cases, this doesn't work well, and an in-between blendshape is needed. The opening mouth, for instance, goes through various shapes before reaching its extreme pose. Notice in this QTVR that the corner of the mouth doesn't move linearly from its base position to its final position. Both its speed and position should be represented as curves over time: the corner starts slowly and gradually accelerates, and it is first dragged in-and-down and then just down.

4) Test blendshapes by:


1) typing in a value over "1", exaggerating the deformation to analyze the relative movement of the vertices
2) moving the blendshape slider up and down to see whether the skin glides over underlying bone, or if the bone appears to collapse underneath (you can only really detect this when viewing the blend in motion)

By the way, you should definitely make your blendshape a target of the base mesh early in the modeling stage, so that this kind of testing can be done at each stage of its modeling.

5) Be careful to isolate muscles correctly. A large smile, for instance, should be the result of a combination of blendshapes: one pulling the corners of the mouth up and one pulling the lower eyelid up. Don't model expressions (smile, frown, etc.); model the effect of specific muscles.

6) Be aware of points on the face where the skin appears anchored or resists movement. Expressive wrinkles form where moving skin meets skin that doesn't want to move (the corners of the eyes, the upper nose).

7) Model and place the teeth in the skull before attempting to model blendshapes for mouth movement (obviously, the same goes for the eyes). The teeth, as part of the skull, serve to push the mouth out. Smiling pulls the lips back around the teeth, tightening the skin in that region. A toothless smile is still obviously toothless even if the mouth is kept closed, because of the missing support of the substructure!

Practical Tips (primarily for Maya users):


When modeling expressions that are mirrored for each side of the face, be careful with the center row of vertices, as it receives double deformation when both blends are activated.

When trying to select vertices around the mouth in wireframe view, look at the mouth from the inside out so vertices don't get lost in the rest of the head; or turn on backface culling, or turn off "double sided" in the model's Attribute Editor, or select the faces of the mouth region and Show>Isolate Select>View Selected.

Use the "f" key to focus on selected components -- it makes it much easier to determine the placement of adjacent vertices.

When translating vertices, use the yellow box of the translation gizmo (instead of the individual arrows) and constantly tumble around the model, switching between wireframe and shaded views; keep tweaking until it looks right from every angle (good advice from Bay Raitt at spiraloid.com).


Blendshapes and Misc.


Considerations in setting up a model for facial action

Paul Ekman has developed a system of codifying facial expressions based on the muscles used in their execution. FACS (Facial Action Coding System) assigns numbers to the muscles relevant to facial action, creating an objective way of recording and analyzing facial expressions independent of individual subjective perception. The artist, actually, is more interested in the subjective perception of facial information but still has much to gain from the work of Ekman and others, who are building on 19th-century research by Duchenne and Darwin. FACS identifies 46 Action Units, sometimes combining more than one muscle per unit. In deciding which blendshapes and controls to set up, I narrowed down the action of facial muscles to the following list:

Major Blendshapes [total: 29]

feature      muscle                      description
eyebrow      frontalis                   surprise
eyebrow      frontalis                   worry
eyebrow      corrugator                  anger
eye          levator palpebrae           close/open (top and bottom lids)
eye          orbicularis oculi           squeeze and squint
eye                                      side
nose/mouth   levator labii superioris    sneer
mouth        zygomatic major             smile
mouth        triangularis                frown
mouth        risorius/platysma           crying
mouth        orbicularis oris            pucker
mouth        orbicularis oris            mmm...
mouth        orbicularis oris            protrude
mouth        depressor labii inferioris  bottom lip down
mouth                                    open
jaw                                      open

(Most of these are modeled separately for the left and right sides of the face, which is how the list totals 29.)

Major Wrinkles [total: 12]

feature    description      sides
forehead   surprise
eyebrow    furrowed         left, right
eye        crow's feet      left, right
eye        smile under eye  left, right
nose       sneer            left, right
chin       cry/frown        left, right
mouth      pucker

Misc [total: 5]

feature   description         setup
jaw       slide               cluster with membership on jaw, chin and bottom lip (top lip receives some deformation at corners of mouth and from friction with bottom lip); remember to parent bottom teeth to cluster
jaw       tense               cluster
nose      flare               cluster
eyelid    sheath for eyeball  cluster with membership that fades out at corners of eye and with pivot point located at the center of the eyeball; rotation of eyeball drives rotation of cluster
throat    swallowing          sculpt deformers
cheek     puffing/sucking     clusters

Much more can be said about setting up the human head for animation; the eye, for instance, deserves a chapter of its own. I plan to add to this website as time permits. There is, however, a limit to what can be done with off-the-shelf software that hasn't been customized for the subtle aspects of character animation. Animation Artist has an article mentioning some of the software written to handle the subtleties of facial action in recent CG films.



I invite all comments and criticism.


copyright Andrew Camenisch | 2001 Please respect my ownership and do not copy text or illustrations from this site without written consent. Thanks.

Just Pictures
A small image and QTVR library of the facial features. Sorry for bad lighting and blur on some of these; I hope to redo them when I have more time. If anyone has quality photos and QTVR/movies they would like to share, please post them somewhere and I will link to them.

The Head

The Eye

The Mouth/Nose

The Ear

Peter Levius has compiled further anatomical reference resources at http://www.fineart.sk.

Media & Websites


A non-exhaustive list of useful media and links to character artists, character-modeling tutorials and other relevant information, in no particular order and with no particular endorsement. If you would like me to add your link or someone else's to this list, email me and I'll consider your site.

books
- Gray's Anatomy: The Classic Collector's Edition by Henry Gray
- The Making of Final Fantasy: The Spirits Within by Steven Kent
- The Expression of the Emotions in Man and Animals by Charles Darwin
- The Artist's Complete Guide to Facial Expression by Gary Faigin
- 1000 on 42nd Street by Neil Selkirk

dvds
- Final Fantasy: The Spirits Within (Square Pictures)
- The Human Face (BBC)

downloads
- 3Q scanned head -- .dxf of myself [294 KB]
- 3 Maya NURBS bodies -- early models by Jeffrey Ian Wilson
- Lightwave body parts -- by Pierre-Marie Albert
- Maya NURBS patched head -- by Petre Gheorghian
- Poly body -- by Richard Suchy
- Maya single-mesh NURBS head -- by Frank Belardo
- Crossroads 3D -- file conversion program

visual reference
- The AR Face Database
- Hair Boutique -- with gallery
- Wig Styles
- HiRes pictures of public officials -- great skin ref.
- The Figure on CD -- very few heads; Jeremy Birn's review of the CD
- Facial Expressions -- scroll down for pictures
- head and skull -- the Visible Woman Project
- Facial Reflectance Field Demo
- Teeth
- Anatomy Resource References
- Eyes -- more than you want to see...

people
Andrew Camenisch Alceu Baptisto John Feather Seryong Kim Dylan Gottlieb Michel Roger Pierre-Marie Albert Miles Estes Hikaru K. Hyung jun Julien Leveugle Olli Sorjonen Ken Brilliant Malcolm Thain Igor Posavec Nakajima Michael Koch Marco Patrito Amaan Akram Stuart Aitken Tibor Madjar Frank Silas Ulf Lungdren Jean Marc ARIU Andrey Purtov Albert Susantio Dana H. Dorian Michael Sormann Hou Soon Ming Matthew Clark Jeremy A. Engleman Pascal Blanche Francois Rimasson David Lightbown Daniele Duri Sam Gebhardt Taron Bill Fleming Bay Raitt Arild Wiro Anfinnsen Tom Capizzi Nicolas DuThatCo Ren Morel Mauro Baldissera Caleb "Cro" Owens Jeremy Birn e-frontier (commercial) Steven Stahlberg Visen Brnicevic Robert Kuczera Richard Suchy Ryan Duncan Virtual Celebrity (commercial)

Eric Sanford

Chappy Peter Levius

Giovanni Nakpil

tutorials
- highend3d's Maya tuts
- AW's tut on blendshapes
- Susantio's subd tut
- pandora
- Hannon's berNURBS tut
- virtual mime
- Jeremy Birn's NURBS tut
- Steven Stahlberg's NURBS tut (free download)
- Ron Lemen's Constructing the Head on Paper
- Minako's paintfx hair
- David K.'s poly tut
- Nakajima's MetaNURBS tut for body
- Carsten Lind's NURBS tut
- Eric Sanford's Frankenstein
- Richard Natzke's ear

information
- Digital Sculpting Forum
- Lavater's Essays on Physiognomy
- Understanding Facial Expressions
- Subdivision Modeling Resource
- Gamasutra article on facial animation
- BBC's The Human Face
- Facial Animation / CRL research
- Facial Action Coding System
- The Art of Poly
- Panther Calzone's resource
- Digital Sculpture Techniques
- Surface Anatomy of Head and Neck
- Gray's Anatomy Online

broken links
- The Virtual Character Project
- Peter Ratner's subd tut
- Dirk Bialluch's subd tut
- Veli-Antti Rautiola's subd tut
- Mark Strohbehn complete skull QTVR
- Surfaces and Renderings
- Bill Stahl's NURBS tut
- Facial Perception
- Basic Portrait Lighting

this site featured on:


CGChannel Flay InsideCG 3DFly 3DTotal ChinaVFX

Maxunderground

ROM3D


Showcase
Human Heads of the Web (and one monkey)
This is a small gallery of work being done around the world.

Andrew Camenisch

Alceu Baptisto

John Feather

Pasha Ivanov

Michael Koch

Louis Lefebvre & Pascal Savignac


Ulf Lundgren

Ren Morel

Caleb "Cro" Owens

Marco Patrito

Francois Rimasson

Michel Roger

Steven Stahlberg

If you know someone who should be in this gallery feel free to let me know.



// Alias|Wavefront Script File
// MODIFY THIS AT YOUR OWN RISK
//
// Creation Date: Jan 9, 1998
// Author: js
//
// Description:
//    Type: tangentCVWin  <-- creates a window to use this script
//    interactively. Basically it's a way of moving surface cv's to
//    help create tangency.
// Notes:
//    it's really helpful for corners where 3 or more surfaces come
//    together and you need to keep tangency between those surfaces.
// Example:
//    For a corner where 4 surfaces come together: while in CV
//    component selection mode select the 4 cv's which are on the
//    corners of each surface and press the "cv's that move" button
//    (you should have done the "tangentCVWin" command before this).
//    Then select the surrounding cvs and press the "cv's that don't"
//    button. Then press the "Make cv's tangent" button at the bottom
//    of the window - this will move those first cv's so their xyz
//    position is an average of all the other cv's you select.
// Input Arguments: None.
// Return Value: None.

global proc string[] cvSel ()
{
    // This procedure reads in a selection of cv's & outputs their names
    string $objs[1];
    $objs = `ls -sl`;
    string $result[1];
    int $counter = 0;
    for ($z = 0; $z < size($objs); $z++) {
        string $tmp = $objs[$z];
        string $tmp2[1];
        string $object;
        // replace ".cv" with "." for ease of tokenizing
        string $tmp3 = `substitute ".cv" $tmp "."`;
        tokenize($tmp3, ".", $tmp2);
        $object = $tmp2[0];
        string $sets[1];
        tokenize ($tmp2[1], "][", $sets);
        // separate them into min & max
        string $u[1];
        string $v[1];
        tokenize ($sets[0], ":", $u);
        tokenize ($sets[1], ":", $v);
        clear ($tmp2);
        clear ($sets);
        int $umin;
        int $umax;
        int $vmin;
        int $vmax;
        $umin = $u[0];
        if (size($u[1]) != 0) {
            $umax = $u[1];
        } else {
            $umax = $umin;
        }
        $vmin = $v[0];
        if (size($v[1]) != 0) {
            $vmax = $v[1];
        } else {
            $vmax = $vmin;
        }
        for ($x = $umin; $x <= $umax; $x++) {
            for ($y = $vmin; $y <= $vmax; $y++) {
                $result[$counter] = ($object + ".cv[" + $x + "][" + $y + "]");
                $counter = $counter + 1;
            }
        }
    }
    clear ($objs);
    return ($result);
}

global proc makeTangent (string $moveCvs[], string $tangentCvs[])
{
    float $xAverage = 0.0;
    float $yAverage = 0.0;
    float $zAverage = 0.0;
    for ($cv in $tangentCvs) {
        $currentPos = `xform -q -ws -t $cv`;
        $xAverage = $xAverage + $currentPos[0];
        $yAverage = $yAverage + $currentPos[1];
        $zAverage = $zAverage + $currentPos[2];
    }
    $xAverage = $xAverage / size($tangentCvs);
    $yAverage = $yAverage / size($tangentCvs);
    $zAverage = $zAverage / size($tangentCvs);
    for ($cv in $moveCvs) {
        move $xAverage $yAverage $zAverage $cv;
    }
}

global proc makeCVsTangent ()
{
    $moveCvs = `textScrollList -q -ai cvMoveCVTextScrollList`;
    $tangentCvs = `textScrollList -q -ai cvTangentCVTextScrollList`;
    makeTangent $moveCvs $tangentCvs;
}

global proc buildTangentCVWindow (string $win)
{
    window -title "Make cv's tangent" $win;
    columnLayout topCVWindowLayout;
    rowColumnLayout -nc 2 -cw 1 200 -cw 2 200 cvRowColumnLayout;
    button -w 200 -l "cv's that move" cvReloadLeftButton;
    button -w 200 -l "cv's that don't" cvReloadRightButton;
    textScrollList -w 200 -h 300 -nr 12 cvMoveCVTextScrollList;
    textScrollList -w 200 -h 300 -nr 12 cvTangentCVTextScrollList;
    setParent topCVWindowLayout;
    button -w 400 -l "Make cv's tangent!!" makeTangentButton;
}

global proc loadCVs (string $scrollList)
{
    $selectedCvs = `cvSel`;
    textScrollList -e -ra $scrollList;
    for ($item in $selectedCvs) {
        textScrollList -e -a $item -w 200 -h 300 $scrollList;
    }
    textScrollList -e -w 200 -h 300 $scrollList;
}

global proc createCallBacksCVWindow ()
{
    button -e -c "loadCVs \"cvMoveCVTextScrollList\"" cvReloadLeftButton;
    button -e -c "loadCVs \"cvTangentCVTextScrollList\"" cvReloadRightButton;
    button -e -c makeCVsTangent makeTangentButton;
}

global proc tangentCVWin ()
{
    $win = "tangentCVWin";
    if (!`window -exists $win`) {
        buildTangentCVWindow $win;
        createCallBacksCVWindow;
    }
    showWindow $win;
}

// hullTangencyWin.mel: created by Jeffrey Wilson, 09.26.00
// This script is based off of tangentCVWin script created
// by js @ A|W 01.09.98
//
// edited by Jeffrey Wilson 09.26.00
//   fix conflicts with tangencyCvWin.mel
//
// This script will allow you generate tangency at a surface
// seam or multiknot through the selection of the seam hulls
// and the hulls adjacent to the seam.
//
// usage: hullTangency
//   (This will bring up the Surface Seam Continuity window)
//   Load the cv hulls into the window:
//     select a single hull at the seam
//     press Seam Hull 1 Button
//     select the corresponding seam
//       (if multiknot, no need to select another hull)
//     press Seam Hull 2 Button
//     select adjacent hull to Seam Hull 1
//     press Tangency Hull 1
//     select adjacent hull to Seam Hull 2
//     press Tangency Hull 2
//   Execute "Create Tangency Along Seam"
//   (If the seam twists, execute the Untwist Seam button)

global proc string[] hullSel ()
{
    // This procedure reads in a selection of cv's & outputs their names
    string $objs[1];
    $objs = `ls -sl`;
    string $result[1];
    int $counter = 0;
    for ($z = 0; $z < size($objs); $z++) {
        string $tmp = $objs[$z];
        string $tmp2[1];
        string $object;
        // replace ".cv" with "." for ease of tokenizing
        string $tmp3 = `substitute ".cv" $tmp "."`;
        tokenize($tmp3, ".", $tmp2);
        $object = $tmp2[0];
        string $sets[1];
        tokenize ($tmp2[1], "][", $sets);
        // separate them into min & max
        string $u[1];
        string $v[1];
        tokenize ($sets[0], ":", $u);
        tokenize ($sets[1], ":", $v);
        clear ($tmp2);
        clear ($sets);
        int $umin;
        int $umax;
        int $vmin;
        int $vmax;
        $umin = $u[0];
        if (size($u[1]) != 0) {
            $umax = $u[1];
        } else {
            $umax = $umin;
        }
        $vmin = $v[0];
        if (size($v[1]) != 0) {
            $vmax = $v[1];
        } else {
            $vmax = $vmin;
        }
        for ($x = $umin; $x <= $umax; $x++) {
            for ($y = $vmin; $y <= $vmax; $y++) {
                $result[$counter] = ($object + ".cv[" + $x + "][" + $y + "]");
                $counter = $counter + 1;
            }
        }
    }
    clear ($objs);
    return ($result);
}

global proc makeHullTangents ()
{
    $tangentCVs1 = `textScrollList -q -ai cvTangentCVTextScrollList1`;
    $tangentCVs2 = `textScrollList -q -ai cvTangentCVTextScrollList2`;
    $seamCVs1 = `textScrollList -q -ai cvSeamCVTextScrollList1`;
    $seamCVs2 = `textScrollList -q -ai cvSeamCVTextScrollList2`;
    for ($i = 0; $i < size($tangentCVs1); $i++) {
        $cvPos1 = `xform -q -ws -t $tangentCVs1[$i]`;
        $cvPos2 = `xform -q -ws -t $tangentCVs2[$i]`;
        $avgX = ($cvPos1[0] + $cvPos2[0]) / 2;
        $avgY = ($cvPos1[1] + $cvPos2[1]) / 2;
        $avgZ = ($cvPos1[2] + $cvPos2[2]) / 2;
        //print ("position of tangency1 " + $cvPos1[0] + " " + $cvPos1[1] + " " + $cvPos1[2] + "\n");
        //print ("position of tangency2 " + $cvPos2[0] + " " + $cvPos2[1] + " " + $cvPos2[2] + "\n");
        //print ("avg position of seams " + $avgX + " " + $avgY + " " + $avgZ + "\n");
        //print ("diff between points " + ($cvPos1[0] - $cvPos2[0]) + " " + ($cvPos1[1] - $cvPos2[1]) + " " + ($cvPos1[2] - $cvPos2[2]) + "\n");
        if (size($seamCVs1) > 0) {
            move $avgX $avgY $avgZ $seamCVs1[$i];
        }
        if (size($seamCVs2) > 0) {
            move $avgX $avgY $avgZ $seamCVs2[$i];
        }
    }
}

global proc unTwistHulls ()
{
    $tangentCVs1 = `textScrollList -q -ai cvTangentCVTextScrollList1`;
    $tangentCVs2 = `textScrollList -q -ai cvTangentCVTextScrollList2`;
    $seamCVs1 = `textScrollList -q -ai cvSeamCVTextScrollList1`;
    $seamCVs2 = `textScrollList -q -ai cvSeamCVTextScrollList2`;
    undo makeHullTangents;
    $in = size($tangentCVs1);
    for ($i = 0; $i < size($tangentCVs1); $i++) {
        // pair each hull cv with its opposite-order partner
        $in--;
        $cvPos1 = `xform -q -ws -t $tangentCVs1[$i]`;
        $cvPos2 = `xform -q -ws -t $tangentCVs2[$in]`;
        $avgX = ($cvPos1[0] + $cvPos2[0]) / 2;
        $avgY = ($cvPos1[1] + $cvPos2[1]) / 2;
        $avgZ = ($cvPos1[2] + $cvPos2[2]) / 2;
        if (size($seamCVs1) > 0) {
            move $avgX $avgY $avgZ $seamCVs1[$i];
        }
        if (size($seamCVs2) > 0) {
            move $avgX $avgY $avgZ $seamCVs2[$in];
        }
    }
}

global proc buildHullTangencyWindow (string $win)
{
    window -title "Surface Seam Continuity" $win;
    columnLayout topHULLWindowLayout;
    rowColumnLayout -nc 4 -cw 1 200 -cw 2 200 -cw 3 200 -cw 4 200 cvRowColumnLayout;
    button -w 200 -l "Seam Hull 1" hullReloadButton1;
    button -w 200 -l "Seam Hull 2" hullReloadButton2;
    button -w 200 -l "Tangency Hull 1" hullReloadButton3;
    button -w 200 -l "Tangency Hull 2" hullReloadButton4;
    textScrollList -w 200 -h 150 -nr 12 cvSeamCVTextScrollList1;
    textScrollList -w 200 -h 150 -nr 12 cvSeamCVTextScrollList2;
    textScrollList -w 200 -h 150 -nr 12 cvTangentCVTextScrollList1;
    textScrollList -w 200 -h 150 -nr 12 cvTangentCVTextScrollList2;
    setParent topHULLWindowLayout;
    button -w 800 -l "Create Tangency Along Seam" makeHullTangentButton;
    button -w 800 -l "Untwist Seam" unTwistHullsButton;
}

global proc loadHulls (string $scrollList)
{
    $selectedCvs = `hullSel`;
    textScrollList -e -ra $scrollList;
    for ($item in $selectedCvs) {
        textScrollList -e -a $item -w 200 -h 150 $scrollList;
    }
    textScrollList -e -w 200 -h 150 $scrollList;
}

global proc createCallBacksHullWindow ()
{
    button -e -c "loadHulls \"cvSeamCVTextScrollList1\"" hullReloadButton1;
    button -e -c "loadHulls \"cvTangentCVTextScrollList1\"" hullReloadButton3;
    button -e -c "loadHulls \"cvSeamCVTextScrollList2\"" hullReloadButton2;
    button -e -c "loadHulls \"cvTangentCVTextScrollList2\"" hullReloadButton4;
    button -e -c makeHullTangents makeHullTangentButton;
    button -e -c unTwistHulls unTwistHullsButton;
}

global proc hullTangencyWin ()
{
    $win = "hullTangencyWin";
    if (!`window -exists $win`) {
        buildHullTangencyWindow $win;
        createCallBacksHullWindow;
    }
    showWindow $win;
}

// Select two CVs first as pins
// any other cvs selected after the first two will be moved
// onto the vector defined by the pins
// useful for stitching surfaces together
// the selection order is maintained so that the pick walker can be used after joining
// by Matthew Gidney 1999 ( second real mel script!)
//
global proc mAlignCvs()
{
    string $parents[] = `filterExpand -ex true -sm 28`;
    int $numberOfCvs = `size $parents`;
    if ($numberOfCvs < 3) {
        print("mAlignCvs(): You must pick at least 3 Cv's to align \n");
        return;
    }
    float $CVPin1[3] = `xform -q -worldSpace -t $parents[0]`;
    float $CVPin2[3] = `xform -q -worldSpace -t $parents[1]`;
    vector $origin = <<($CVPin1[0]), ($CVPin1[1]), ($CVPin1[2])>>;
    vector $rawVector = <<($CVPin2[0]-$CVPin1[0]), ($CVPin2[1]-$CVPin1[1]), ($CVPin2[2]-$CVPin1[2])>>;
    vector $unitRawVector = unit($rawVector);
    float $magRawVector = mag($rawVector);
    for ($cnt = 2; $cnt < $numberOfCvs; $cnt++) {
        float $CVPin3[3] = `xform -q -worldSpace -t $parents[$cnt]`;
        vector $testVector = <<($CVPin3[0]-$CVPin1[0]), ($CVPin3[1]-$CVPin1[1]), ($CVPin3[2]-$CVPin1[2])>>;
        // project each cv onto the pin vector via the dot product
        float $dotProduct = dot($rawVector, $testVector);
        vector $moveVectorRel = $unitRawVector * ($dotProduct / $magRawVector);
        vector $moveVector = $moveVectorRel + $origin;
        xform -worldSpace -t ($moveVector.x) ($moveVector.y) ($moveVector.z) $parents[$cnt];
    }
}

// Select two CVs first as pins
// any other cvs selected after the first two will be moved to the midpoint
// of the first 2 pins
// useful for stitching surfaces together
// the selection order is maintained so that the pick walker can be used after joining
// by Matthew Gidney 1999 ( first real mel script!)
//
global proc mMidPointCvs()
{
    string $parents[] = `filterExpand -ex true -sm 28`;
    int $numberOfCvs = `size $parents`;
    if ($numberOfCvs < 3) {
        print("mMidPointCvs(): You must pick at least 3 Cv's to align \n");
        return;
    }
    float $CVPin1[3] = `xform -q -worldSpace -t $parents[0]`;
    float $CVPin2[3] = `xform -q -worldSpace -t $parents[1]`;
    float $midPoint[3] = {($CVPin1[0]+$CVPin2[0])/2, ($CVPin1[1]+$CVPin2[1])/2, ($CVPin1[2]+$CVPin2[2])/2};
    for ($cnt = 2; $cnt < $numberOfCvs; $cnt++) {
        xform -worldSpace -t $midPoint[0] $midPoint[1] $midPoint[2] $parents[$cnt];
    }
}

// Select three CVs first as pins
// any other cvs selected after the first three will be moved
// onto the plane defined by the pins
// useful for smoothing corners where patches meet
// by Matthew Gidney 2000
//
global proc mPlanarCvs()
{
    string $parents[] = `filterExpand -ex true -sm 28`;
    int $numberOfCvs = `size $parents`;
    if ($numberOfCvs < 4) {
        print("mPlanarCvs(): You must pick at least 4 Cv's to align \n");
        return;
    }
    float $CVPin1[3] = `xform -q -worldSpace -t $parents[0]`;
    float $CVPin2[3] = `xform -q -worldSpace -t $parents[1]`;
    float $CVPin3[3] = `xform -q -worldSpace -t $parents[2]`;
    // first define the plane by its normal
    vector $pin1To2 = <<($CVPin1[0]-$CVPin2[0]), ($CVPin1[1]-$CVPin2[1]), ($CVPin1[2]-$CVPin2[2])>>;
    vector $pin1To3 = <<($CVPin1[0]-$CVPin3[0]), ($CVPin1[1]-$CVPin3[1]), ($CVPin1[2]-$CVPin3[2])>>;
    vector $normal = cross($pin1To2, $pin1To3);
    float $magNormal = mag($normal);
    vector $unitNormal = unit($normal);
    // next resolve the points onto the plane via the normal
    for ($cnt = 3; $cnt < $numberOfCvs; $cnt++) {
        float $point[3] = `xform -q -worldSpace -t $parents[$cnt]`;
        vector $pointToPin1 = <<($point[0]-$CVPin1[0]), ($point[1]-$CVPin1[1]), ($point[2]-$CVPin1[2])>>;
        float $dotProduct = dot($normal, $pointToPin1);
        vector $moveVectorRel = -1.0 * $unitNormal * ($dotProduct / $magNormal);
        xform -worldSpace -r -t ($moveVectorRel.x) ($moveVectorRel.y) ($moveVectorRel.z) $parents[$cnt];
    }
}

// written by Becky Chow
// work phone 650 628 7755
// home phone 650 743 6066
// robybeck@geeksville.com
// becky.chow@ea.com
//
// this script allows users to pick out all the vertex based
// on vertex color range, from a complex polyset.
// for instance, to pick out all the dark colored vertex,
// just put dark gray in color box, and give it a 10% percentage range.
// the script will automatically pick the darkest shaded vertex.
//
// to install:
//   Copy "pickVertexC.mel" into your /maya/x.x/scripts folder.
//   Copy "pickVertexCIcon.bmp" to your /maya/x.x/prefs/icons folder.
//   open the "pickVertexC.mel" in script editor,
//   and execute it. then type "pickVertexCSetup";

global proc pc ()
{
    float $rgbup[];   // upper end value of rgb
    float $rgbdn[];   // lower end value of rgb
    string $vtc[];    // temp list of vertex within color boundary
    int $polyn[];     // number of vertex in polyset
    int $i;           // counter
    int $jv = 0;      // counter for vertex number
    float $rgb[];     // rgb value of vertex being evaluated
    float $pcnt = `floatSliderGrp -q -value slider2`;
    string $poly = `textFieldGrp -q -text textMeshName`;
    $polyn = `polyEvaluate -v $poly`;
    float $rgb0[] = `colorSliderGrp -q -rgb slider1`;   // seed color
    $pcnt = $pcnt / 100;
    $rgbup[0] = (($rgb0[0]+$pcnt) >= 1) ? 1 : ($rgb0[0]+$pcnt);   // red
    $rgbdn[0] = (($rgb0[0]-$pcnt) <= 0) ? 0 : ($rgb0[0]-$pcnt);
    $rgbup[1] = (($rgb0[1]+$pcnt) >= 1) ? 1 : ($rgb0[1]+$pcnt);   // green
    $rgbdn[1] = (($rgb0[1]-$pcnt) <= 0) ? 0 : ($rgb0[1]-$pcnt);
    $rgbup[2] = (($rgb0[2]+$pcnt) >= 1) ? 1 : ($rgb0[2]+$pcnt);   // blue
    $rgbdn[2] = (($rgb0[2]-$pcnt) <= 0) ? 0 : ($rgb0[2]-$pcnt);

    for ($i = 0; $i < $polyn[0]; $i++) {
        select -r ($poly + ".vtx[" + $i + "]");
        $rgb = `polyColorPerVertex -q -rgb`;
        if (($rgb[0] >= $rgbdn[0]) && ($rgb[0] <= $rgbup[0]) &&
            ($rgb[1] >= $rgbdn[1]) && ($rgb[1] <= $rgbup[1]) &&
            ($rgb[2] >= $rgbdn[2]) && ($rgb[2] <= $rgbup[2])) {
            $vtc[$jv] = ($poly + ".vtx[" + $i + "]");   // making list
            $jv++;
        }
    }
    select -cl;
    for ($i = 0; $i < $jv; $i++) {
        select -add ($vtc[$i]);
    }
    print ("vertex selected: \n");
    print ($vtc);
}

global proc mkwin ()
{
    window -widthHeight 800 300 -sizeable true -title "pickcolr" pickcolr;
    columnLayout;
    separator -w 210 -height 5 -style "none";
    textFieldGrp -label "name of the Poly mesh :" -text "polyName"
        -columnWidth 1 125 -columnWidth 2 110 textMeshName;
    text -label " double click on color box to change seed color. ";
    text -label " it will pick all the vertex within the seed color range. ";
    text -label " 100% will let you select all the vertex with every color.";
    rowColumnLayout -numberOfColumns 2 -columnWidth 1 10 -columnWidth 2 240;
    text -label " ";
    colorSliderGrp -enable 1 -cw 1 80 -cw 2 30 -cw 3 100
        -label "starting color" -rgb 0.5 0.5 0.5 slider1;
    setParent ..;
    rowColumnLayout -numberOfColumns 1 -columnWidth 1 400;
    floatSliderGrp -enable 1 -field 1 -value 0 -cw 1 180 -cw 2 40 -cw 3 100
        -label "% range from the seed vertex color " -min 0 -max 100 -precision 1 slider2;
    setParent ..;
    rowColumnLayout -numberOfColumns 2 -cw 1 120 -cw 2 60;
    text -label "press here to select ";
    button -label "select" -height 30 -command "pc" button1;
    setParent ..;
    rowColumnLayout -numberOfColumns 2 -cw 1 120 -cw 2 120;
    text -label " ";
    button -label "close window" -height 30 -align "center" -command ("deleteUI pickcolr ");
    showWindow;
}

global proc pickVertexC ()
{
    if (`window -exists pickcolr`) deleteUI pickcolr;
    mkwin;
}

// main
global proc pickVertexCSetup()
{
    global string $gShelfTopLevel;
    if (`tabLayout -exists $gShelfTopLevel`)
        shelfButton -parent ($gShelfTopLevel + "|" + `tabLayout -q -st $gShelfTopLevel`)
            -command "source pickVertexC.mel; pickVertexC"
            -image1 "pickVertexCIcon.bmp"
            -annotation "pick Vertex by vertex color Tool";
    else
        error "You need a shelf for this Tool to work!";
}

/* This file downloaded from Highend3d.com
''
'' Highend3d.com File Information:
''
'' Script Name: PickVertex v0.0
'' Author: becky chow
'' Last Updated: February 12, 2001
'' Update/Change this file at:
'' http://www.highend3d.com/maya/mel/?section=polygon#831
''
'' Please do not alter any information above this line
'' it is generated dynamically by Highend3d.com and will
'' be changed automatically on any updates.
*/
// written by Becky Chow
// work phone 650 628 7755
// home phone 650 743 6066
// robybeck@geeksville.com
// becky.chow@ea.com
//
// to use it, load the program into Mel script editor.
// type "pickvertex" at the command prompt at the lower left
// hand of the Maya screen.
//
// this script allows users to pick out all the vertex based
// on vertex color range, from a complex polyset.
// for instance, to pick out all the dark colored vertex,
// just put dark gray in color box, and give it a 10% percentage range.
// the script will automatically pick the darkest shaded vertex.

proc pc ()
{
    float $rgbup[];   // upper end value of rgb
    float $rgbdn[];   // lower end value of rgb
    string $vtc[];    // temp list of vertex within color boundary
    int $polyn[];     // number of vertex in polyset
    int $i;           // counter
    int $jv = 0;      // counter for vertex number
    float $rgb[];     // rgb value of vertex being evaluated
    float $pcnt = `floatSliderGrp -q -value slider2`;
    string $poly = `textFieldGrp -q -text textMeshName`;
    $polyn = `polyEvaluate -v $poly`;
    float $rgb0[] = `colorSliderGrp -q -rgb slider1`;   // seed color
    $pcnt = $pcnt / 100;
    $rgbup[0] = (($rgb0[0]+$pcnt) >= 1) ? 1 : ($rgb0[0]+$pcnt);   // red
    $rgbdn[0] = (($rgb0[0]-$pcnt) <= 0) ? 0 : ($rgb0[0]-$pcnt);
    $rgbup[1] = (($rgb0[1]+$pcnt) >= 1) ? 1 : ($rgb0[1]+$pcnt);   // green
    $rgbdn[1] = (($rgb0[1]-$pcnt) <= 0) ? 0 : ($rgb0[1]-$pcnt);
    $rgbup[2] = (($rgb0[2]+$pcnt) >= 1) ? 1 : ($rgb0[2]+$pcnt);   // blue
    $rgbdn[2] = (($rgb0[2]-$pcnt) <= 0) ? 0 : ($rgb0[2]-$pcnt);

    for ($i = 0; $i < $polyn[0]; $i++) {
        select -r ($poly + ".vtx[" + $i + "]");
        $rgb = `polyColorPerVertex -q -rgb`;
        if (($rgb[0] >= $rgbdn[0]) && ($rgb[0] <= $rgbup[0]) &&
            ($rgb[1] >= $rgbdn[1]) && ($rgb[1] <= $rgbup[1]) &&
            ($rgb[2] >= $rgbdn[2]) && ($rgb[2] <= $rgbup[2])) {
            $vtc[$jv] = ($poly + ".vtx[" + $i + "]");   // making list
            $jv++;
        }
    }
    select -cl;
    for ($i = 0; $i < $jv; $i++) {
        select -add ($vtc[$i]);
    }
    print ("vertex selected: \n");
    print ($vtc);
}

proc mkwin ()
{
    window -widthHeight 800 300 -sizeable true -title "pickcolr" pickcolr;
    columnLayout;
    separator -w 210 -height 5 -style "none";
    textFieldGrp -label "name of the Poly mesh :" -text "polyName"
        -columnWidth 1 125 -columnWidth 2 110 textMeshName;
    text -label " double click on color box to change seed color. ";
    text -label " it will pick all the vertex within the seed color range. ";
    text -label " 100% will let you select all the vertex with every color.";
    rowColumnLayout -numberOfColumns 2 -columnWidth 1 10 -columnWidth 2 240;
    text -label " ";
    colorSliderGrp -enable 1 -cw 1 80 -cw 2 30 -cw 3 100
        -label "starting color" -rgb 0.5 0.5 0.5 slider1;
    setParent ..;
    rowColumnLayout -numberOfColumns 1 -columnWidth 1 400;
    floatSliderGrp -enable 1 -field 1 -value 0 -cw 1 180 -cw 2 40 -cw 3 100
        -label "% range from the seed vertex color " -min 0 -max 100 -precision 1 slider2;
    setParent ..;
    rowColumnLayout -numberOfColumns 2 -cw 1 120 -cw 2 60;
    text -label "press here to select ";
    button -label "select" -height 30 -command "pc" button1;
    setParent ..;
    rowColumnLayout -numberOfColumns 2 -cw 1 120 -cw 2 120;
    text -label " ";
    button -label "close window" -height 30 -align "center" -command ("deleteUI pickcolr ");
    showWindow;
}

global proc pickvertex ()
{
    if (`window -exists pickcolr`) deleteUI pickcolr;
    mkwin;