Making game characters is a multidisciplinary endeavor that starts with designers creating a concept and moves through blocking out, sculpting, retopologizing, UV unwrapping, texturing, rigging, and animating before the character is ready for a professional project. The process is complex, but Blender has pro-grade tools for making sure each step is optimized and ready for real-time gameplay.
I’m a 3D artist with five years of experience sculpting, unwrapping, texturing, and retopologizing models. My expertise is with paid tools like Maya, but the principles are the same. I’ll give a high-level overview of the process here with Blender’s specific user experience in mind. Read on to learn how a game character goes from concept to animated, game-ready asset.
1. Gather reference images
Gather reference images to develop an understanding of the major shapes in a character. References can come from anywhere: real-life photos, online repositories, cartoons, or even other games. Pulling references from more than one of these sources is often necessary to get the best idea of the type of character you’re working on.
A reason to get references from other games or media is to get an idea of what is characteristic of the style you’re going for. Realistic photos of people will only go so far if you’re working on a cartoonish, cel-shaded game. Instead, look at Borderlands, Hi-Fi Rush, or The Wind Waker to see how other artists represented characters in your targeted style. When you don’t have a specific game in mind, another option is to generate images: AI is getting increasingly convincing and has become a solid source of references in specific styles, even if it isn’t at the level of a human artist.
3d.sk is a site with reference material for humans and animals, if realistic references are needed for your character. Its large repository of real people helps with understanding anatomy. The human body is complex, and giving a character unique, notable features is a challenge without reference images; references ensure those features still match what the human brain expects to see in another human face. 3d.sk’s content includes 2D photos, 3D scans, and retopologized models. The site runs on a subscription plan where users get a number of credits depending on their tier. Photos are cheap credit-wise, while retopologized models cost the most.
3d.sk is useful for references because images and models are tagged with helpful characteristics for a modeler. The site lets you filter details of eyes, arms, fingers, teeth, and most other body parts. The filtering further divides subjects by gender, age, and type of clothing. Even sorting by pose is available, making sure you have eyes on everything you need to reference.
2. Plan your character
Plan your character with the design team so the 3D artists know what the design goals are. Going through concepting first saves time by keeping the messy, up-front work on paper and leaving the most challenging modeling for after the designers have discussed a 2D concept. A designer first plans out a game character, then the concept artist creates sketches based on the designer’s brief and their own references.
Character designers and artists work together to create the character concept in the early stages. The character designer is the one who has a holistic view of the character. They decide the character’s biography, personality, in-game abilities, weapons and gear, and other distinguishing features. The character designer doesn’t create a full work of art, but writes a brief with reference images and descriptions that guide the artist’s next steps.
A concept artist meets with the character designer, then uses the brief to prepare a concept for the modelers to work with. The concept artist sits between the designers and the rest of the art team to make sure a cohesive art style emerges. The earliest sketches from the art team are rough and penciled in to allow for iteration, but the concept artist ultimately creates a definitive guide for the 3D modelers to work from.
The process of building a concept changes from artist to artist. Some artists plan through writing, where they list the major characteristics of a character and work through the most important components before sketching, as Scott Flanders does. Others create a mood board, which is a collection of reference images. The images aren’t all direct references, but some capture the mood they’re going for or the kinds of colors they’ll use for the character.
The goal of the sketching process is to bring to life what the character’s going to be. The artist explores multiple angles and poses to discover what the right feel for the character is. At this stage, experimentation is followed by iteration with the design team. Sketches go back to designers, the artist makes adjustments, and they work at it until a cohesive, final concept is ready.
3. Create a base sculpt/mesh
Create a base sculpt/mesh in Blender using primitive shapes. The concept artist’s work acts as a reference image for building the character, and the artist builds up a simple mockup of it in 3D to make sure the proportions are correct before getting into detailed sculpting.
Download and open Blender and get the scene ready for modeling. Getting references into Blender and annotating them is fairly simple. With a Blender scene open, hit Shift + A to open the Add menu and then click on “Image.” “Background” adds an image that is only viewable from one angle, while “Reference” adds one that’s viewable from any angle.
The image is selectable by default, so go to the Object properties tab on the far right to keep it from getting in the way. Under “Visibility,” untick the checkbox “Selectable.” The reference image is still selectable through the properties panel, but not while you’re working in the scene.
Block out the character with basic shapes: cubes, spheres, and planes. Starting out with simple shapes ensures the character has the right proportions before sculpting all the details. Take each primitive and nudge, cut, combine, and extrude them until they have roughly the same proportions as the reference.
4. Model the face, hands, and other features
Model the face, hands, and other features at a low level of detail. The pieces don’t need to be connected to one another yet, and they’re still built up from primitive shapes at this stage. Blender’s Subdivision Surface modifier lets you raise the poly count, and the sculpting tools let you nudge the primitives into the right shape and size.
Blender’s Remesh tool helps combine the pieces into a single, sculptable mesh. The pieces need to be overlapping with no gaps, so make sure your blockout is clean and finished. Ctrl + J joins the objects into one, then hitting Ctrl + R in sculpt mode melds the individual pieces together into one model with a uniform surface. If the surface is too low or high detail, pressing R and then moving the mouse lets you set the level of detail for the new mesh.
Designing a model for rigging also begins at this stage. Rigging is the process of attaching bones to a model to allow the animator to pose it. The artist needs to know how the character is going to move and what the design’s needs are. Say you sculpt a character without a neck. If the rigger needs the head to turn, there’s no way for them to add a bone and deform the model without the model stretching unnaturally. If a character’s legs are too thick and close together, this also presents a problem. Weight painting is important for rigging because it tells the computer what bone each part of the model is connected to. Painting is incredibly difficult if parts of the model are nearly overlapping. The challenge is the same as if you were painting a tiny model’s armpits or inner thighs in real life.
5. Use sculpting tools to add detail
Use sculpting tools to add detail, since modeling detailed organic surfaces like skin isn’t practical with box modeling, the approach used for the blockout. Sculpting lets the artist modify models with brushes that mimic the way a sculptor molds clay. The user drags a mouse (or stylus on a tablet) to crease, flatten, build up, and mold the model. Blender features like Dynamic Topology and customizable brushes make it a powerful sculpting tool once the user has enough practice with it.
Sculpting isn’t about creating something new, like in the blockout phase, but turning the concept into a convincing model. Blender comes with 50 built-in brushes for modifying a mesh’s fine details. The following brushes are useful for modeling limbs and detailed features.
- Clay strips add layered details which give the look of muscle mass
- Grab lets you push and pull to get large details into place
- Inflate quickly adds mass which you then sculpt with other tools
- Pinch creates transitions between sections like the edge of the nostrils or around the mouth
- Smooth blends out strokes after you’re done to make the results of other brushes like Clay Strips look more natural
Fine details for the face and hands like wrinkles, veins, and pores require more specialized brushes. The default Crease brush is commonly used for wrinkles, but other organic features call for custom brushes. File > Append lets you import brushes for finer details like pores. Brush packs such as the one Arca IRTEM provides on ArtStation even include brushes for different sizes of pores, since skin varies subtly across different regions of the face.
Creating a detailed sculpt requires a high-poly model. Starting with a brush on the low-poly blockout is still going to result in a blocky final model. Blender has developed the following features for creating detailed sculpts.
- Multi-resolution sculpting
- Dynamic topology
- Voxel Remesh
Multires (Multi-resolution) sculpting is Blender’s oldest and simplest technique. The Multires modifier lets the artist subdivide the mesh, effectively quadrupling the amount of geometry to work with, in order to sculpt finer and finer details. The advantage of the approach is that it isn’t too hard on performance, but it makes building up the mesh challenging. The modifier also applies the same level of detail to the entire model, so using brushes to sculpt wrinkles on the face requires adding detail to the whole head.
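To see why Multires gets heavy fast, here is a quick back-of-the-envelope sketch. The `multires_face_count` helper and the 10,000-face starting count are purely illustrative, not part of Blender’s API:

```python
def multires_face_count(base_faces: int, levels: int) -> int:
    """Each Multires subdivision level splits every quad into four."""
    return base_faces * 4 ** levels

# A modest 10,000-face blockout explodes quickly:
# by level 4 it is already 2,560,000 faces.
for level in range(5):
    print(level, multires_face_count(10_000, level))
```

This quadrupling per level is why sculpting pore-level detail on the face forces the same density onto the back of the head.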
Dynamic Topology responds to the issues of the Multires modifier by increasing the number of vertices and faces on a mesh only where the brush is used. The dynamic level of detail makes adding fine details to the mesh much easier without overloading the computer, though Dynamic Topology is still a taxing feature, and it results in a mesh with chaotic topology. Blender’s Voxel Remesh lets you retopologize the entire model when you’re done using Dynamic Topology to address this issue.
Voxel Remesh is high-performance and creates an even topology, so using it with Dynamic Topology shores up the weaknesses of both. We’ve seen that Remesh helps with combining a mesh, but it also helps with fixing the topology. A metaphor for understanding Voxel Remesh is that it puts a plaster cast around the model and creates a new one from that same mold. The new model cast from that mold has less geometry and will be less taxing for the computer. The disadvantage is that a model with holes in it won’t “take the mold,” so a model needs to be solid first. The tool also doesn’t work correctly when modifiers are applied to the mesh.
6. Create separate objects for clothing and accessories
Create separate objects for clothing and accessories using the same strategies as before. Modelers have the option either to use box modeling or sculpting for clothing. Box modeling is the process of taking a basic shape like a box and modifying the 3D model directly rather than with brushes. The process works well for simple shapes, although creating detailed cloth and textures still requires sculpting.
Blender’s Extract Mask feature simplifies the process of making clothing. The first step is painting a mask over the character with a brush in the shape of the final piece of clothing. Use the Extract Mask feature next, which creates a duplicate of everything you painted. So, painting a hand and then using this feature creates a hollow copy of the hand which you can use the Inflate brush on and quickly turn into a glove.
The sculpting workflow is again useful for modeling accessories. A shirt created with Extract Mask has no 3D thickness, so the Solidify modifier adds body to the model and makes it ready for more sculpting. If you don’t add thickness, tools like Remesh won’t work.
7. Create retopology
Create retopology so the character is ready to rig and animate. Retopology is one of the most challenging parts of the process because the model needs to be ready for weight painting, rigging, texturing, and animation. The sculpted model has too many vertices for this process, and it’s too detailed to work in a game in real time anyway. To give a more formal definition, retopology is the process of creating a new model which preserves as much detail as possible while still being low-poly. The process is non-destructive, meaning you keep the sculpted model for more detail later.
Clean topology is important for both lighting and animation. A messy topology results in lighting issues. If there are faces with five or more sides (called N-gons), the shape of the surface will bend unpredictably. A mesh with bunched up faces and inconsistent details is going to look strange under lighting.
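Checking for N-gons is something you can reason about mechanically. This hypothetical helper (not a Blender tool) treats each face as a tuple of vertex indices and flags anything with five or more sides:

```python
def classify_faces(faces):
    """Count triangles, quads, and N-gons in a face list.

    Each face is a tuple of vertex indices; anything with five or
    more sides is an N-gon and a likely source of shading artifacts.
    """
    counts = {"tris": 0, "quads": 0, "ngons": 0}
    for face in faces:
        if len(face) == 3:
            counts["tris"] += 1
        elif len(face) == 4:
            counts["quads"] += 1
        else:
            counts["ngons"] += 1
    return counts

# A mostly quad mesh with one five-sided face sneaking in
faces = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4), (2, 3, 7, 6, 8)]
print(classify_faces(faces))
```

Game-ready meshes aim for all quads (or quads plus deliberate triangles), which is exactly what a check like this would verify.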
Clean topology makes sure areas that bend and deform look natural when they move. The challenge with rigging a character is getting the clothing right too. The clothing needs to have a similar shape and edge flow as the original model, otherwise the cloth and body won’t deform together correctly.
The following are methods of retopology in Blender:
- Decimate
- Remesh
- Manual retopology
- Quad Remesher add-on
Decimate, like Remesh, uses algorithms to create a lower-poly version of the original. Neither is effective for models that are going to be animated, since they don’t add extra detail around the parts of the model that deform. Decimate is the simplest modifier: it collapses vertices together and then repositions them as close as possible to the outer boundaries of the mesh. Details that poke out from the surface, like ears or noses, tend to collapse with this method.
Retopo the hard way is done with the Shrinkwrap modifier in Blender. Manual retopo is a pain, but it’s the cleanest way to make a model that is ready for animation in a professional production. Retopology in Blender starts with adding a plane, giving it the Shrinkwrap modifier so it sticks to the surface of the high-poly sculpted mesh, and building on it almost like you are making a cast of the original model.
The Quad Remesher add-on is pricey but comes highly recommended for artists who encounter retopology frequently in their workflows. The creator of the add-on is Maxime Rouca, a veteran who worked on ZBrush before founding his own company to sell his retopo algorithm for several 3D modeling packages. Quad Remesher lets you concentrate detail around certain areas, unlike Blender’s default algorithms. The Adaptive Size feature increases the density of faces around curved or highly detailed areas, and you can customize the process further by painting Vertex Colors to specify high- and low-detail areas.
The final goal of retopology is a model with extra detail around the joints but as little detail as possible elsewhere. The fine details from the sculpt don’t go away, though. The example below from Habil Karacelep’s upcoming game Blind Descent shows that the final model still preserves visual detail while cutting the poly count as much as possible. The reason is that the high-quality sculpted model is baked, a process that first requires unwrapping the model.
8. Unwrap the low-poly model
Unwrap the low-poly model by marking seams along which the computer will cut and flatten it. A model needs to be unwrapped so 2D textures are correctly mapped to its surface. The next step is either to use Blender’s built-in UV editor tools to unwrap the model, or to use add-ons that bring its functionality closer to professional tools.
The first step is to add seams around hidden parts of the model. Seams tell the computer where to cut the 3D model so it can be laid out flat. Applying a 2D image to a 3D object is a complex task, as you know if you’ve looked at a Mercator projection map: flattening a curved surface always introduces distortion, which is why countries like Greenland look much larger than they are. Artists place seams in hidden areas of the model because the cut along a seam leaves a visible break in the texture. Selecting edges, right-clicking, and choosing “Mark Seam” tells Blender where to cut, or just press Ctrl + E.
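The Mercator comparison can be made concrete with a little math. In a Mercator projection, both axes stretch by sec(latitude), so apparent area grows by sec² relative to the equator. A minimal sketch (the function name is just illustrative):

```python
import math

def mercator_area_scale(latitude_deg: float) -> float:
    """Area inflation factor of the Mercator projection at a latitude.

    Mercator stretches both map axes by sec(latitude), so areas grow
    by sec^2(latitude) relative to the equator.
    """
    return 1.0 / math.cos(math.radians(latitude_deg)) ** 2

print(round(mercator_area_scale(0), 2))   # equator: no inflation
print(round(mercator_area_scale(72), 2))  # roughly Greenland's latitude
```

At Greenland’s latitude the factor is roughly ten, which is why it dwarfs equatorial countries on the map; UV stretching on a model is the same kind of geometric distortion.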
Hit “Unwrap” from the UV menu once you’ve added enough seams. There isn’t a set number of seams, but keep them as few as possible. Any seam tells Blender that the mesh on either side isn’t going to sit side-by-side in the final texture. Imagine your model is made of paper and you cut along all its seams. If the final result is something you’re able to completely flatten, then there are enough seams to unwrap it. If there are too many seams, though, the model is going to have some split and stretched sections.
Enter the UV editor either by creating a new area and selecting UV Editor from the top left, or by hitting the “UV Editing” layout at the top. A detailed look at every feature of the UV editor is beyond us here, but it works similarly to 3D modeling in Blender: you select individual vertices, edges, and faces and push them around so they fit on the image texture. Using a default texture like a grid helps you make sure the texture isn’t stretched and doesn’t have visible seams like the one in the image below.
Blender’s built-in UV tools make it a challenge to get ideal UVs efficiently. The automatic unwrapping algorithm struggles with complex, organic shapes. The UV editor is also inconsistent when it comes to synced selection, where selecting in the UV Editor on the left selects the same part of the object on the right (like in the image above). Not every feature works with synced selection enabled, so it’s a guessing game for the user whether a feature they need requires turning synced selection off. The lack of other basic workflow tools, like copying UVs between similar objects or changing the order of multiple UVs on export, makes Blender less efficient than other tools.
Blender is customizable and has several add-ons available for making UV unwrapping easier. A developer who switched to Blender on the Subnautica 2 team has said that the lack of features in the UV Editor was the biggest adjustment new Blender users had to make on his team, and he highly recommended Zen UV. UV Packmaster 3 is another tool the community recommends which helps with changing the order of different parts of a model in the UV Editor.
9. Create and apply textures
Create and apply textures using texture painting and baking. Textures are the 2D images that define the surface of the object: the small dimples made by pores, the red of the skin around the cheeks, the overlapping threads of cloth. Painting is used for images where you want complete control over the details, while baking is used when you create textures procedurally.
Texture painting gets you started with texturing your model. Many artists switch to Adobe’s Substance Painter at this stage, since it’s purpose-built for painting details directly onto a model. Blender’s Texture Paint has few brushes and no default support for multiple layers, so those accustomed to other programs are going to find the switch difficult.
Blender’s texture painting mode is accessed just like Object Mode through the top of the interface. Texture painting lets you paint directly onto the object while Blender handles matching it up to the UVs. Each brush lets you select a color or texture, and you paint it on with your mouse or stylus. Blender’s texture paint isn’t as advanced as software like Substance Painter, and it doesn’t support common features like layers by default. Ucupaint is a popular add-on that lets you organize textures and shaders into layers if you stay in Blender for your whole workflow.
Creating custom brushes for Blender’s texture paint is relatively easy, but you need third-party software to make the brush image itself. A brush is simply an image where the opaque sections paint at full strength and the transparent sections at minimal strength. Free painting programs like Krita or GIMP let you paint on a transparent canvas to create the desired brush. The format must support transparency, so exporting as a PNG is recommended.
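The idea of a brush as an alpha image can be sketched in a few lines. This hypothetical `brush_falloff` function builds the kind of grayscale mask you would paint in Krita or GIMP and export as a PNG’s alpha channel:

```python
import math

def brush_falloff(size: int):
    """Build a square alpha mask with a smooth radial falloff.

    1.0 at the center means full brush strength; the value fades to
    0.0 at the edge, like the alpha channel of a soft brush PNG.
    """
    center = (size - 1) / 2
    radius = size / 2
    mask = []
    for y in range(size):
        row = []
        for x in range(size):
            dist = math.hypot(x - center, y - center)
            t = min(dist / radius, 1.0)
            # Smoothstep easing gives a soft edge instead of a hard circle
            row.append(1.0 - (3 * t * t - 2 * t * t * t))
        mask.append(row)
    return mask

mask = brush_falloff(5)
print(mask[2][2])  # center pixel: full strength, 1.0
```

A pore or scale brush is the same idea with texture painted into the opaque region instead of a plain falloff.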
Image textures aren’t the only maps a character model needs; it also requires a normal map, a texture that fakes the detail from the original sculpt. Bake texture maps from the high-poly to the low-poly model using Blender’s built-in bake feature. Baking is the process of saving the shape details of an object to a 2D image. Saving the detail to an image gives you the best of both worlds: a well-optimized model with few polygons but many small lighting details.
The step-by-step process for baking from the original sculpt to the final retopologized model is fairly simple and fits in this bulleted list.
- On the Properties tab to the right, set the Render Engine to Cycles
- Select the low-poly object and open the Material Node Editor. Add an “Image Texture” node, hit “New”, and add an image with a high enough resolution for your project
- Select the high-poly model, then Ctrl + Click the retopologized model
- Click on Bake in the Properties panel on the Render tab (the camera icon), with the Bake Type set to Normal and “Selected to Active” enabled
- Save the image texture with its baked normals
Normal maps work by making the lighting think there’s extra detail on the model when it’s actually low-poly. Each pixel in the image encodes a 3D X, Y, Z vector that tells light which direction to bounce. The two flat planes in the image below are exactly the same mesh as in the previous image; all the detail in the version on the right comes from the normal map.
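The encoding is simple enough to write out. Each color channel stores one component of the vector, remapped from [-1, 1] to [0, 255]; this is a sketch of the standard tangent-space convention, with illustrative function names:

```python
def encode_normal(n):
    """Map a unit normal vector (components in [-1, 1]) to an RGB pixel."""
    return tuple(round((c + 1.0) / 2.0 * 255) for c in n)

def decode_normal(rgb):
    """Recover the approximate vector the renderer uses for lighting."""
    return tuple(c / 255 * 2.0 - 1.0 for c in rgb)

# A surface pointing straight out of the plane (+Z) encodes to the
# flat lavender color that dominates most normal maps
print(encode_normal((0.0, 0.0, 1.0)))  # (128, 128, 255)
```

That (128, 128, 255) lavender is why untouched areas of a normal map all share the same purple-blue tint.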
10. Rig the character (creating skeleton, painting weight, testing rig)
Rig the character, which is the process of attaching a skeleton to a 3D model so the animator is able to control the model like a puppet. The process involves creating a skeleton, using weight painting to make sure the bones deform the model correctly, using blend shapes for complex deformations like facial expressions, and using advanced tools like inverse kinematics to make the workflow faster.
The process starts with adding an Armature in the add menu. An armature has only one bone at first, which you extrude or duplicate into new bones. A final armature has a bone for each segment of a model that moves or rotates, just like a skeleton.
Blender’s tools for rigging include:
- Weight Painting
- Blend Shapes
- Drivers
- Inverse Kinematics
Rigging the model then requires telling Blender which parts of the model each bone moves. Blender has no way to know whether rotating a bone rotates the whole model or just one part. Weight painting is the tool for connecting vertices to a bone: it lets you paint which sections of a model each bone controls. First, add a vertex group in the Properties panel on the right and name it after a bone. Then switch from Object Mode to Weight Paint mode, select a vertex group, and use the brush to paint the part of the object that group is supposed to deform. If the vertex group is for the foot bone, you paint the foot. The redder an area, the more influence that bone has; the bluer, the less.
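Under the hood, painted weights are just per-vertex numbers that blend the motion of each bone. This simplified, translation-only sketch shows the weighting math; real skinning blends full bone transforms, and every name here is illustrative:

```python
def skin_vertex(vertex, bone_offsets, weights):
    """Blend per-bone translations by the painted weights.

    vertex: (x, y, z) rest position
    bone_offsets: {bone_name: (dx, dy, dz)} movement of each bone
    weights: {bone_name: influence painted on this vertex}
    """
    x, y, z = vertex
    for bone, (dx, dy, dz) in bone_offsets.items():
        w = weights.get(bone, 0.0)
        x += w * dx
        y += w * dy
        z += w * dz
    return (x, y, z)

# A vertex near the elbow, influenced 70/30 by upper arm and forearm:
# the forearm moves one unit, so the vertex follows it only 30% of the way
moved = skin_vertex(
    (0.0, 1.0, 0.0),
    {"upper_arm": (0.0, 0.0, 0.0), "forearm": (1.0, 0.0, 0.0)},
    {"upper_arm": 0.7, "forearm": 0.3},
)
print(moved)
```

Painting red in Weight Paint mode is just setting that per-vertex weight toward 1.0 for the selected bone’s vertex group.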
Another tool Blender has that’s necessary for smooth animations is shape keys, also known as blend shapes. Blend shapes define facial expressions like smiles, frowns, wide eyes, and so on. These complex expressions are difficult to capture with individual bones, so a rigger goes into the 3D model, modifies it directly to match the desired expression, then saves the new form as a blend shape.
The name for blend shapes in Blender is shape keys. In the Object Data panel, you first add a base shape key for the default, unchanged model. Then you modify the 3D model to match an emotion; let’s say you make them smile. Blending between the original and the shape key lets the animator quickly create complex emotions.
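The math behind a shape key is plain linear blending between two sets of vertex positions. A minimal sketch, assuming each mesh is just a list of (x, y, z) tuples:

```python
def apply_shape_key(base, target, value):
    """Blend vertex positions between a base mesh and a shape key.

    value = 0.0 gives the base model, 1.0 the full expression,
    and anything in between a partial one.
    """
    return [
        tuple(b + value * (t - b) for b, t in zip(bv, tv))
        for bv, tv in zip(base, target)
    ]

# Two mouth-corner vertices moving up into a half smile
base  = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(-1.0, 0.4, 0.0), (1.0, 0.4, 0.0)]
print(apply_shape_key(base, smile, 0.5))  # corners lifted halfway
```

Dragging a shape key’s value slider in Blender is changing exactly this `value` parameter.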
Drivers are a Blender feature that make controlling shape keys easier for animators. Drivers let you connect the value of a shape key to the rotation, location, or scale of a bone in the scene. An animator then has the option to move a bone up or down to make a character smile, frown, or communicate any other emotion they need to. This feature lets the animator stick to using the rig and avoid going into the menus to animate the model.
Inverse Kinematics (IK) is an important tool for animation that Blender supports with its IK constraint. Inverse kinematics predicts the rotation and positioning of a limb based on where the hand or foot is. If you’ve played Dark Souls, inverse kinematics is what makes the player’s legs and feet adjust position to stand on rocks or stairs instead of clipping into the ground. The GIF below shows what I mean: the section on the left is inverse kinematics, where moving the foot changes the positioning of the rest of the leg. The right is the opposite, working from the top of the leg down, which is much more tedious for an animator. A full tutorial on IK is out of scope here, but getting a basic system working is simple: add a target bone next to the last bone in the chain, hit Shift + I, and a basic IK system will be in place.
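For a feel of what an IK solver actually computes, here is a minimal two-bone solver in 2D using the law of cosines. It’s a conceptual sketch, not Blender’s implementation:

```python
import math

def two_bone_ik(target_x, target_y, upper_len, lower_len):
    """Solve a 2D two-bone IK chain (e.g. thigh + shin reaching a foot).

    Returns (hip_angle, knee_angle) in radians. The target distance is
    clamped to the leg's reach so an out-of-range foot never fails.
    """
    dist = math.hypot(target_x, target_y)
    dist = min(dist, upper_len + lower_len)  # clamp to maximum reach
    # Interior knee angle from the law of cosines
    cos_knee = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle = direction to target plus the offset from the bent knee
    cos_hip = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    hip = math.atan2(target_y, target_x) + math.acos(max(-1.0, min(1.0, cos_hip)))
    return hip, knee

# A foot placed at full reach: the knee's interior angle is 180 degrees
hip, knee = two_bone_ik(2.0, 0.0, 1.0, 1.0)
print(round(math.degrees(knee)))  # 180
```

Blender’s solver generalizes this to 3D, longer chains, and pole targets, but the core idea of working backward from the end effector is the same.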
11. Animate character
Animate the character using Blender’s Timeline and Dope Sheet for controlling the key poses. The basics of recording an animation in Blender are simple, although whole books are dedicated to the subject of creating convincing, visceral, and effective animation. The process involves, at its simplest, posing the model using the bones, saving the location in a keyframe, advancing time a little bit, and then saving the location in another keyframe.
Blender’s interpolation and dope sheet help with managing what happens between these keyframes. Interpolation is how Blender automatically moves the bones from one point to another between keyframes. Instead of moving the model every frame, you’re able to move it every few frames and let Blender handle moving from point A to point B.
The graph editor is crucial for convincing animation because it controls the speed at which Blender interpolates between keyframes. An object that moves at a constant speed between poses looks unnatural; movement needs to follow the animation principle of “slow in and slow out,” shown below. The graph lets you customize this process, maybe choosing a slow lead-in but a fast lead-out where it’s needed.
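Interpolation and easing are simple to express in code. A minimal sketch using the classic smoothstep curve as a stand-in for the graph editor’s ease-in/ease-out handles (the function names are just illustrative):

```python
def ease_in_out(t: float) -> float:
    """Smoothstep easing: slow start, fast middle, slow end."""
    return 3 * t**2 - 2 * t**3

def interpolate(start, end, t, easing=ease_in_out):
    """Blend a keyframed value between two keyframes, graph-editor style."""
    return start + (end - start) * easing(t)

# Position of a bone moving from 0 to 10 over a keyframe interval:
# it eases in, speeds up through the middle, and eases out
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, round(interpolate(0.0, 10.0, t), 2))
```

Swapping in a plain `lambda t: t` for the easing gives the robotic constant-speed motion the graph editor exists to avoid.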
The Dope Sheet is an interface that lists all the actions your character takes. While the Dope Sheet works for dealing with individual keyframes, it’s most helpful for getting timing right. The sheet shows all the actions, how long they take, and when they happen relative to each other. The user selects and adjusts each action to create a refined animation without having to recapture the movement every time.
12. Export the character and character animations to game engine
Export the character and character animations to the game engine using the FBX file format. FBX is the most common format for exporting to games, and unlike OBJ, it supports animations. Hitting File > Export > FBX (.fbx) opens the export dialogue in Blender.
Make sure the scale and animations are set correctly before exporting. The scale needs to be consistent with other models and across the project, and is modifiable through the Export menu. One unit in Unity is a meter just like in Blender. Each animation also needs to be a separate, named action before exporting. Going into the Action Editor when animating and adding a new action for each animation (walking, idling, attacking) is necessary before exporting to an engine like Unity. The FBX file and the associated textures are all that you need to import into your game engine’s asset browser, but make sure to tick the “Include Animation” box.