Material Series
Version 1.0, Updated June 2025 using Octane 2025.2 and Cinema 4D 2025.2. ~7,000 words, average read time: 28 min
About this guide
This is a primer on how texture projection (both simple and UV) works in the 3D world. It’s DCC and render engine agnostic, so regardless of what tools you’re using, it should be informative.
Intro
Back in the early days of 3D graphics when RAM and processing power were at a serious premium, a method of getting more detail on a 3D model without adding more polygons and difficult calculations was needed. The solution the engineers came up with was basically cheating: adding 2D images to the polygon mesh so models looked more interesting and realistic without beating on the CPU (or later, GPU). There are several methods of doing this, but overall the process was dubbed “texture projection” (or sometimes texture mapping, depending on the app).
Even though we have vastly more computing power than they did back then, texture projection is still the most commonly used way to add extra detail to models efficiently so we can put the rest of the resources into doing fancy light calculations and displacement and such.
In this guide we’re going to look at the most common types of projection and weigh their pros and cons.
Important: Throughout this guide, we’re going to be using the term “projector” and see representations of physical projectors in the illustrations. This is not how it actually works, but it’s close enough that it can give us a frame of reference for what’s happening and allow us to predict what the settings will do when we go to change them. When we see “projector” without the quotes, we just want to bear in mind that it’s for visualization purposes only and not expect to see one hovering around our scene like we see representations of lights or cameras.
Part I
Texture Basics
Coordinate Spaces
Every 3D app needs some sort of system to keep track of where all the objects are as they move around the world in the scene.
A coordinate space is a unit-based grid system that consists of an origin point and two or three axes which are perpendicular to one another. Usually these are visually represented by an axis widget with two or three arrows that shows which way each axis goes.
But wait - we may be thinking - isn’t this guide about textures? It’ll make sense soon, these are just concepts we need to have fresh in our heads so the texture thing makes more sense. Never hurts to review the basics!
World & Object Spaces
In every scene, there’s a World Space (or Global Coordinate Space). A point in space known as the Origin is chosen by the app which is where all three world axes meet (0,0,0 units). Objects like lights, cameras, models, etc., in the scene can be located using a set of coordinates relative to the Origin along all three of the axes.
The axes used in 3D are labeled X, Y, and Z. X usually goes “left to right”. In some apps like C4D and Maya, Y is “up” and “down”, and Z is “back” and “front”. In others like Blender and CAD programs, Y and Z are swapped.
Each object ALSO has an origin and three of its own axes that define the Object Space (or Local Coordinate Space). This allows us to nest other objects within that coordinate space so it’s not a mathematical nightmare for us users to do something like orbit one object around another. Object Space axes are also labeled X, Y, and Z.
World & object spaces are (theoretically) infinitely large, - the numbers on the axes just keep counting up forever.
Texture Spaces
Textures also have their own coordinate space that exists separately from both the world and object spaces. Texture spaces can be either 2D or 3D.
A 2D texture coordinate space - referred to as UV space - exists on a 2D plane where the origin is at the bottom left corner. The axes are U (goes along the bottom) and V (goes up the side).
The 3D version is called UVW space. It’s similar to UV, only it has a third dimension (W) that goes back perpendicular from the UV plane.
The axes in texture spaces are finite (“bounded” is the tech term). The origin is 0,0 (or 0,0,0), but the maximum along any axis is 1, so we can end up with coordinates like 0.257 x 0.922 (not that we really ever see the actual coordinates). Since the spaces are resolution-independent, we can put any size texture in it and it conforms to a 0-1 scale, with 1 being the maximum height, width, or depth of the data on that axis.
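Since the space is normalized, converting a pixel coordinate into UV space is just a division by the texture’s resolution. A quick sketch in plain Python:

```python
def pixel_to_uv(px, py, width, height):
    """Map a pixel coordinate to normalized 0-1 UV space.

    Note that many image formats put pixel (0, 0) at the TOP left,
    while UV space's origin is at the BOTTOM left, so in practice
    V often gets flipped somewhere along the way.
    """
    return (px / width, py / height)

# The same relative position maps to the same UV at any resolution:
print(pixel_to_uv(526, 1888, 2048, 2048))   # -> (0.257..., 0.922...)
print(pixel_to_uv(1052, 3776, 4096, 4096))  # -> same UV, higher res
```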
2D vs 3D Textures
2D textures only have data in 2 dimensions (U and V), so they live in the UV coordinate space. We’re all familiar with these because when we grab any random PNG or JPEG on our hard drive or the internet, we only see a flat image. If we turn it in 3D space, the rest of the image isn’t secretly hiding behind it - it’s just a 2D slice.
Typically when a 2D texture is put in a 3D world, it just repeats the edge pixels on W (seen in panel 1 above). If we were to take a slice anywhere along the W axis, it would be the same as if we were looking at the front or the back of the projection. This is like extruded pasta - the hole pattern is always the same no matter where we cut the noodle.
Some 3D apps and engines (Octane included) support 3D textures as well, and that can cause a lot of strife, monitor punching, and misguided bug reporting if we don’t know what’s going on.
3D textures contain data in the U, V, AND W dimensions, so they live in the UVW coordinate space. They typically have to be procedurally generated, but there are some cases like an MRI scan where 3D data is actually scanned and recorded. Most commercial 3D engines meant for motion graphics and visualization don’t support scanned 3D data, just generated 3D textures.
If we were to drop a 3D texture in 3D space and take a slice of it along W, it would be different from the front or back, because the texture has 3D data. It’s like a block of Swiss cheese - the hole pattern is always different depending on where we cut.
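Since most of us don’t have an MRI scanner handy, here’s what a generated 3D texture can look like: a minimal procedural 3D checkerboard sketch in plain Python. Because the result depends on W as well as U and V, slices at different depths differ - unlike a 2D texture:

```python
import math

def checker_3d(u, v, w, cubes_per_unit=8):
    """A minimal procedural 3D checker: returns 0 (black) or 1 (white).

    The color at any point depends on all three coordinates, so a
    slice taken at a different depth gives a different (here,
    inverted) pattern - the Swiss cheese, not the extruded pasta.
    """
    cu = math.floor(u * cubes_per_unit)
    cv = math.floor(v * cubes_per_unit)
    cw = math.floor(w * cubes_per_unit)
    return (cu + cv + cw) % 2

# Same U and V, different depths along W:
print(checker_3d(0.3, 0.6, 0.0))  # -> 0: one color here...
print(checker_3d(0.3, 0.6, 0.2))  # -> 1: ...inverted one cube deeper
```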
Texture sizes in 3D space
So how does it work if we have a bounded texture space (0-1 on each axis) that we throw into an infinite world measured in mm, cm, miles, football fields, etc?
Projectors exist at a particular location in world space and have a direction that they’re projecting in, but they aren’t aware of the makeup of the geometry they’re projecting on. In other words, the same texture that’s put on several objects does not ‘scale to fit’ each object - it’ll appear larger on smaller models and smaller on larger models.
Because of this, textures have to be given a default size in 3D space by the app/engine. So what happens if the default texture size doesn’t match the size of the geometry it’s projecting on?
If it’s a bitmap 2D texture, most of the time an engine will simply repeat (tile) it in both U and V directions if the texture is smaller than the object. If it’s larger than the mesh, it will clip. Most engines will give us the ability to manually scale the texture to try to match the object, and also provide alternate behaviors for what happens when the end of the tile is reached.
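Those alternate behaviors usually show up as a wrap (or border) mode setting. Here’s a rough sketch of the common options in plain Python - the mode names are made up for illustration and vary between engines:

```python
def wrap_coordinate(t, mode="repeat"):
    """Resolve a texture coordinate that falls outside the 0-1 range.

    Sketches of the behaviors most engines offer; names and exact
    semantics differ from app to app.
    """
    if mode == "repeat":        # tile: 1.25 samples the same as 0.25
        return t % 1.0
    elif mode == "mirror":      # tile, flipping every other repeat
        t = t % 2.0
        return 2.0 - t if t > 1.0 else t
    elif mode == "clamp":       # stretch the edge pixels outward
        return min(max(t, 0.0), 1.0)
    else:                       # "clip": nothing exists outside 0-1
        return None if (t < 0.0 or t > 1.0) else t

for mode in ("repeat", "mirror", "clamp", "clip"):
    print(mode, wrap_coordinate(1.25, mode))
# repeat 0.25 | mirror 0.75 | clamp 1.0 | clip None
```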
If it’s a generated 2D or 3D texture, the engine will just keep generating more data as the object scales. It’s important to note that the texture isn’t scaling, there’s just more of it. Sometimes that means tiling, sometimes it means a more randomized pattern - it really depends on how the generator was coded.
UV projection does scale, but it won’t make sense why until part III of this guide.
Texture Projection
Now that we’re up to speed on textures and how they exist in space, let’s put it all together to establish texture projection in our heads.
Essentially what we’re doing is taking a texture which exists in UV or UVW space, loading it into one or more projectors, and shining it on a mesh in 3D (XYZ) space. Wherever the projection lands on the model is what the texture will look like on those polygons.
The actual programming is more complex than that, but this is a good analogy that our artist brains can latch on to and understand.
Let’s take a look at a simple example.
If we were to bring a movie projector into 3D in a foggy room and project a checkerboard pattern at a flat screen, the projection itself as it went through the fog would look like streaky black and white lines up until it hit the screen, and then we’d see a flat checkerboard image. No matter how much we shift the projector toward or away from the screen, the checkerboard pattern stays the same.
Now let’s imagine it’s the future, and our projector could project 3D images. A 3D checkerboard texture projects black and white cubes instead of streaks. The projection itself in the fog looks like boxes, and when it hits the same screen, it still looks like a flat 2D checkerboard. Difference is, if we were to shift the projection toward the screen, eventually a different set of cubes would intersect the screen and the 2D projected grid image would look inverted.
Important: In the real world, shifting the projector toward the wall would make the projected image smaller and pulling it back would make the image larger, but in 3D, texture projection is parallel, so the image stays the same size regardless of where the projector is in space relative to the model.
For more information on parallel projection, check out this guide (search for “parallel”).
Now, when we place a curved surface in the path of a projector with a 2D texture, the pixels continue indefinitely until they intersect with the geometry, and that’s what determines the color for that part of the model (panel 1 above). It distorts more as it goes around the curves because the rays are parallel and are hitting the geometry at a more severe angle, but this is more or less what we’d expect to happen.
When we put the same surface in the path of a 3D texture, we get something unexpected. The 3D texture continues repeating and wherever THAT happens to intersect with the model is what color that area is going to be.
So in panel 2, it looks bizarre because the projection cubes keep alternating black and white, and depending on where in space that part of the model is, it’ll either get black or white pixels. If we were to shift the texture on W (toward or away from the model), we’ll get a totally different pattern on the mesh because it’ll intersect the cube matrix in different locations.
To put it another way, if we were to construct a cube in the real world by gluing together alternating smaller dark and light wooden cubes and then chucked the whole thing into a CNC machine and had it carve away material to form a sphere, we’d see the same type of pattern.
Section Wrap
What we just saw in that checkerboard example is pretty much how texture projection works in a 3D engine. One or more “projectors” project one or more images at the model and give the illusion that the model is textured. It’s not just the color though - maps like bump, normal, displacement, roughness, etc., all operate the same way.
We normally don’t see the projector or the streaks or anything in the 3D app (unless we’re building visualizations for a guide on texture projection ;) ), but we do see the results of the projected image (from UV/UVW space) on the geometry (in Global or more often Object space).
There are several standard projection methods in the 3D world, and a bunch of unique ones as well. No single one is the best - otherwise we wouldn’t need a guide. Depending on the projection method, the texture will distort in different ways and require more or less fiddling and pre-prep to get a usable result, which is what the rest of this guide is going to be about.
Part II
Simple Projections
The easiest and most basic projections are geometric ones based on simple primitives (flat or planar, box, spherical & cylindrical). These are fast to set up and apply, and definitely have their uses, but aren’t super versatile for most complex geometry because of how they distort when they hit the mesh.
Flat Projection
Flat projection is the simplest of all. It’s essentially what we looked at in the last part of this guide: it can be thought of as a single, fixed projector in space that shines a single image along one axis at our model.
As we can see above in the first panel, if our model is a planar (flat) 2D object like a projection screen or a canvas, then great, no problem - it projects perfectly as long as the projector is perpendicular to the flat face of the model.
If our model is 3D, this starts to fall apart a bit. If it’s something like the cube in panel 2 above, the face the projector is perpendicular to receives a perfectly undistorted image. The sides are usually unusable though because the pixels that are hitting them just keep going back infinitely in space in the direction the projection was traveling (like we saw in part I when we put a 2D texture in 3D space). If we look at the back, the texture will still appear undistorted, but it’ll be a mirror image of the front (all the text will be reversed, etc.).
If we think about setting up a real world projector hitting a screen, and go around the back of the screen, it makes sense why this is happening.
If we rotate the projector relative to the model like in the third panel above (so it’s no longer perpendicular to any face), the texture will distort on all sides.
Finally, if our model is organic like the bust in the last panel, then the flattest parts facing the projector (chest area) may be passable, but we’re going to run into distortion problems around the curves of the head, especially on the sides and top, because the flat projection can’t wrap around the model properly.
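Under the hood, flat projection is about as simple as texture math gets: pick the axis the projector shines along, throw that coordinate away, and scale the other two into UV range. A minimal sketch in plain Python (the scaling here is made up for illustration):

```python
def flat_project(x, y, z, size=100.0):
    """Planar projection along the Z axis.

    The Z coordinate is simply discarded - which is exactly why the
    texture 'streaks' infinitely front-to-back, and why the back of
    an object gets a mirror image of the front.
    """
    u = x / size  # scale world units down to the 0-1 UV range
    v = y / size
    return (u, v)

# Two points with the same X/Y but wildly different depths
# land on the exact same spot in the texture:
print(flat_project(25.0, 50.0, z=0.0))     # (0.25, 0.5)
print(flat_project(25.0, 50.0, z=-900.0))  # (0.25, 0.5) - same pixel
```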
Box (or Cubic) Projection
This is similar to flat projection, only now we have six different projectors projecting the exact same image in a cube formation around our model.
The 3D engine takes a look at the normals on the mesh and determines - based on which way the normal is facing - which projector it will use for that polygon.
Cubic projection solves some of the issues that flat projection introduced, notably that the sides of a 3D model don’t get as much distortion as they did. When the normal is at too severe of an angle, it switches to a different projector that does a better job. It also takes care of the reversing of the projection on the back side because it just switches to the rear projector.
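To make the normal-based switching concrete, here’s a minimal sketch in plain Python (a toy function, not any engine’s actual code):

```python
def pick_box_projector(normal):
    """Choose one of six axis-aligned projectors from a surface normal.

    The largest absolute component of the normal tells us which way
    the polygon mostly faces; the sign picks front vs. back, left vs.
    right, or top vs. bottom. The hard switch between projectors is
    exactly where box projection's visible seams come from.
    """
    nx, ny, nz = normal
    ax, ay, az = abs(nx), abs(ny), abs(nz)
    if ax >= ay and ax >= az:
        return "+X" if nx > 0 else "-X"
    elif ay >= az:
        return "+Y" if ny > 0 else "-Y"
    else:
        return "+Z" if nz > 0 else "-Z"

print(pick_box_projector((0.0, 1.0, 0.0)))   # -> +Y (top projector)
print(pick_box_projector((0.7, 0.1, -0.7)))  # -> +X (ties go to X here)
```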
The problem that it does introduce is seams.
If our model is cubic (think Minecraft or a cardboard box) and we don’t mind that every side gets the same texture (with no way to change or rotate it individually per side), box/cubic projection is a great choice (see panel 1 above).
It can also be good enough for organic textures. In panel 2 above, there are visible seams if we zoom in enough, but because the texture is so chaotic, it doesn’t matter unless we’re getting in close for a macro.
The third panel is where we can really see the issue. The distortion is far less than flat projection would produce, but the result is unusable because there are strong, noticeable seams at the points where projectors are switched, so the tiling almost never lines up. It’s especially bad at the shoulder because the curvature creates a lot of transition points.
Triplanar Projection
Triplanar Projection is a relatively recent and very customizable simple projection that aims to solve two of the biggest problems introduced by box projection:
- It allows us to give each of the six projectors a different texture
- It gives us controls to blend the seams together
This makes it a good choice not only for texturing models where we don’t want to (or have to) worry about UVs (covered in Part III of this guide), but also in cases where we might just want different textures on the different sides of a model without resorting to polygon selections or other trickery.
Depending on the textures used, the blends may or may not hold up to scrutiny very well in macro shots or on large portions of a hero model, but they’re often good enough for medium to far shots, or small parts of a larger model.
Blending is really good for more organic textures like the wood above - it’s not so great if we’re trying to line up portions of a sharp pattern - it just creates a messy seam instead of a sharp one.
The other thing about triplanar is that it takes some effort to set up, especially if we want different textures in the different projectors. Node organization skills are extremely helpful as the structure can get complex fast.
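To see how the blending differs from box projection’s hard switch, here’s a heavily simplified sketch in plain Python (real implementations blend full per-pixel texture samples and expose a sharpness control, but the weighting idea is the same):

```python
def triplanar_weights(normal, blend_sharpness=4.0):
    """Blend weights for the three projection axes from a normal.

    Instead of a hard switch, each axis gets a weight based on how
    directly the surface faces it; raising the weights to a power
    tightens the transition zone. The three planar samples are then
    mixed with these weights, which is what hides the seams.
    """
    wx = abs(normal[0]) ** blend_sharpness
    wy = abs(normal[1]) ** blend_sharpness
    wz = abs(normal[2]) ** blend_sharpness
    total = wx + wy + wz
    return (wx / total, wy / total, wz / total)

# Facing straight up: only the Y projector contributes.
print(triplanar_weights((0.0, 1.0, 0.0)))      # (0.0, 1.0, 0.0)
# A 45-degree slope between X and Y: an even 50/50 blend.
print(triplanar_weights((0.707, 0.707, 0.0)))  # (~0.5, ~0.5, 0.0)
```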
Cylindrical Projection
This operates just like how it sounds - instead of one plane or six planes, the projector’s “lens” is a giant cylinder with no caps that surrounds the model. The texture projects inward toward the model.
Cylindrical Projection wants a texture in a 2x1 aspect ratio for best results, but we can use a 1x1 (or any other aspect ratio) image and stretch or squash it later after we’ve applied it to our model. The texture should also tile seamlessly on U (horizontally) so we’re not left with an ugly vertical seam down the side of the model.
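The mapping itself boils down to an angle-and-height lookup, which is also why a 2x1 texture fits best (U has to cover a full 360 degrees). A minimal sketch in plain Python, assuming the cylinder’s axis runs along Y and a made-up height value:

```python
import math

def cylindrical_project(x, y, z, height=100.0):
    """Cylindrical projection around the Y axis.

    U comes from the angle around the cylinder, V is just the height
    along the axis. Note there's nothing here for the caps - points
    on top of the model still only get an angle and a height.
    """
    angle = math.atan2(x, z)             # -pi..pi around the axis
    u = (angle / (2.0 * math.pi)) + 0.5  # remap to 0..1
    v = y / height
    return (u, v)

# Quarter turns around the cylinder step U by 0.25:
print(cylindrical_project(0.0, 50.0, 1.0))  # (0.5, 0.5) - front
print(cylindrical_project(1.0, 50.0, 0.0))  # (0.75, 0.5) - side
```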
It was originally meant as a way to project a world map on a sphere, but better ways have been devised since because it causes all kinds of sizing issues (Greenland really isn’t larger than all of South America, it’s more like the size of just Argentina, but it looks huge on a cylindrically-projected map).
In 3D, this is fine for straight-sided cylindrical objects like soup cans and AA batteries, though there are a few issues, notably the caps. Since there are no cap projectors, we get the same issue we had with flat projection - the edge pixels just repeat in toward the center of the model, creating pinch points on the top and bottom. With any other topology, it will warp around the geometry and probably not look super great when we look at it up close.
The odds of needing this one are pretty slim unless there’s a particular look we’re after.
Spherical Projection
Spherical projection is similar to cylindrical projection, but the texture is projected from inside or outside of a sphere instead of a cylinder (go figure).
The real issue at hand is that spherical projection relies on the actual texture itself being generated or captured in a super wonky and distorted way so that when it’s re-projected in or on a sphere, it looks right. The nice part about this is that if captured/created right, there are no seams when it’s projected, and there’s no distortion, provided what we’re projecting it in or on is a perfect sphere. On any other model there still won’t be seams, but there will be distortion galore since it doesn’t take arbitrary curvature into account.
Equirectangular images meant to go on the inside of a sphere are usually used for HDRI environment lighting. Easily available ones that go on the outside are generally planetary maps, though it would be possible to construct an equirectangular image for any spherical object (basketball, marble, etc.) given the right tools.
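For the curious, the math is the cylindrical angle lookup plus a second angle for latitude. A minimal sketch in plain Python, assuming a unit direction vector from the sphere’s center:

```python
import math

def spherical_project(direction):
    """Equirectangular mapping from a unit direction vector to UV.

    U is the longitude (angle around the Y axis), V is the latitude
    (angle up from the 'equator'). Near the poles, an entire row of
    texture pixels crowds into a tiny area - which is why the texture
    has to be authored 'pre-warped' to look right on the sphere.
    """
    x, y, z = direction
    u = (math.atan2(x, z) / (2.0 * math.pi)) + 0.5  # longitude -> 0..1
    v = (math.asin(y) / math.pi) + 0.5              # latitude  -> 0..1
    return (u, v)

print(spherical_project((0.0, 0.0, 1.0)))  # equator, front: (0.5, 0.5)
print(spherical_project((0.0, 1.0, 0.0)))  # north pole: (0.5, 1.0)
```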
Simple Projection Drawbacks
The most obvious drawback is that if our model is really curvy or highly detailed, we’re going to get distortion no matter which projection we pick. Each simple projection is great for a handful of things, but none of them are good for everything.
There’s also the problem of the texture sticking to the model. Simple projections map the UV/UVW space of the texture to either the Global or (more often) the Object’s coordinate space.
If it’s set to map to the Global Coordinate space, the projector stays still while the model moves through it, which creates a “swimming” effect for the textures (seen in panel 2 in the above illustration). This is almost always unwanted behavior.
If it’s set to map to the Object’s coordinate space, it’s pinned to the position and rotation of the model, so if the model moves or rotates (seen in panel 4 above), the texture goes with it. As we learned, simple projections are usually not aware of the geometry they’re attached to (at least they’re not in Octane or Cinema 4D). This means the texture will have to be scaled up or down to fit each individual model, and then if we go to animate the scale of the model, we’re in for a nasty surprise because the texture will not scale with the mesh (it’ll usually tile or clip instead, seen in panel 5 above).
There are some hacks and workarounds to deal with all this, but it’s not as simple as slapping a texture on a model and calling it a day once things get more complex than a still life. There has to be a better way, right?
Part III
UV/UVW Projection
What is UV projection/mapping?
Eventually, we all reach a point on our 3D learning path where we run across cases where simple projections just don’t cut it anymore. A more accurate system of projecting textures on a complex model with minimal distortion is needed.
The solution created for this is called UV mapping (or UV projection). It comes at the problem the opposite way of all the simple projections we saw earlier. Rather than try to finagle the texture projectors to look right in XYZ space, it conforms the mesh to UV space and does all the projecting there.
The first step of this process is creating a flat, 2D map of all the polygons that make up the mesh by “unwrapping” the 3D object. That flat map (called a UV map, or often “the model’s UVs”) only exists in UV space, meaning we can modify it without affecting the actual 3D geometry. The UV map also travels with the model, so different apps can use it. It usually shows up as an add-on or aux data for a mesh (in C4D it’s in a tag attached to the object).
There are a variety of ways to lay out the UV polygons (or just “UVs” for short). Most of the time there’s no perfect way to do it unless the model itself is 2D. Most of the rest of this guide is about figuring out what our target is and how to know what makes a good enough map (but not how to actually modify it, since the goal of this guide is to stay under 100,000 words).
The second part of the process involves taking the UV map and overlaying it on top of a texture. Both the UV map and the texture exist in 2D UV space, so all that’s needed is a flat projection that’s perpendicular to the UVs. Most of the time we don’t have or need any options surrounding this and the app just does it for us.
The pixels of the texture that overlap (or map to) any UV polygon in the flat map are what appears on the 3D version of that polygon in the model after the material has been applied and set to UV Projection (Octane calls this “MeshUV” - it varies from app to app).
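In code terms, each vertex (well, each polygon corner) stores a UV coordinate, and the engine interpolates across the face and looks the result up in the texture. Here’s a toy sketch of that final lookup in plain Python - the data layout is hypothetical, and real engines filter between texels rather than snapping to the nearest one:

```python
def sample_texture(texture, uv):
    """Look up the texel a UV coordinate maps to (nearest-neighbor).

    `texture` is a hypothetical 2D list of pixel values; real engines
    blend neighboring texels instead of snapping like this.
    """
    height = len(texture)
    width = len(texture[0])
    px = min(int(uv[0] * width), width - 1)
    # Flip V: UV origin is bottom-left, image rows start at the top.
    py = min(int((1.0 - uv[1]) * height), height - 1)
    return texture[py][px]

# A tiny 2x2 "texture" and one vertex's stored UV:
texture = [["dark", "light"],
           ["light", "dark"]]
vertex_uv = (0.8, 0.9)  # stored in the mesh's UV map
print(sample_texture(texture, vertex_uv))  # -> "light" (top-right texel)
```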
In the good(?) old days of CG, UV mapping (and especially unwrapping) was one of the leading causes of indigestion and repetitive headdesk syndrome in artists. Nowadays there are tons of tools to help with this process, and more coming up all the time. Rizom and Unwrella are solid paid choices, but there are free ones and most DCCs have some form of helper tool baked in, C4D included.
Like any other 3D->2D mapping system, the process literally can’t be perfect, so it still needs a human hand (and brain) to make critical decisions so the results come out the way we want.
Let’s have a look at what makes a good UV map.
A Tale of Two Workflows
There are several approaches to working with UV mapping, but they can be neatly divided into two buckets:
General Purpose UV Maps
This type of UV layout is for general use, meaning any all-purpose texture or texture set (like the ones found at ambientcg) should work fairly well with minimal distortion, and without having to alter the textures themselves. The tricky part is that it takes more time, consideration, and understanding up front to create a good all-purpose UV set, and it doesn’t really work well on every type of model. We tend to see these types of maps on relatively simple meshes that are mostly one material and don’t have a ton of tiny parts or details. Home decor objects, small electronics, or smaller, separate parts of a complex larger model are good candidates for a general purpose UV layout.
In the case of the chair above, the seat and back are one object that has the UV map that we see in panel 1 above. The legs are separate objects with their own UV maps. This makes it easier to just apply different materials to different parts of the chair without having to do any complex masking. General purpose UV maps are fantastic for auditioning several materials sourced from different places on an object quickly.
Single Purpose UV Maps
Single purpose UV maps are built specifically to be paired with a single texture set. These types of maps are usually generated by an algorithm to maximize the space in the UV tile, but we can also manually create or tweak them.
The main advantage is that we can get very realistic detailing without needing so many different meshes for a model. The drawback is that it typically only works with a texture set that’s been custom built just for that map. We can make more texture sets for a model with a single purpose UV map, but it’s an involved process, and we really need special projection painting tools like Substance Painter or 3D Coat to minimize distortion. Unlike a general purpose map, if we just start throwing different random textures on the model, it’ll look pretty bad most of the time.
We usually find these types of maps on much more complex meshes with lots of little details and several materials spread throughout a single mesh, like characters, video game props, etc. The example above really shows off what we can do with a single mesh, a single purpose UV map, and a texture set built specifically for that UV map.
Both methods generate a standardized UV map for a particular model, the UVs are just laid out differently. 3D engines don’t care what type of layout is used - as long as the texture set is right for the UV layout, it’ll look good.
In this guide, we’re going to focus on general purpose layouts because it’s a lot easier to understand how UV mapping really works when we can make heads or tails of the actual UV map just by looking at it.
So What’s the Problem?
Where to begin…
As any good cartographer will tell you, displaying a 3D object on a 2D map is a complex problem with several imperfect solutions offered over the last few thousand years. Unfortunately there isn’t a “right” way to do this, just different pros and cons to each.
Similar to making a flat map of the Earth out of a globe, when doing UV mapping, the two biggest issues we’re going to encounter are distortion and continuity, and they act as kind of a counterbalance system to one another. The better the continuity, the higher chance of distortion, and vice-versa.
Distortion is something we’re all familiar with and always try to avoid unless we’re doing it on purpose for artistic reasons. Basically stretching, squashing, or otherwise ruining the original intent of the image.
Continuity is how the texture flows from one polygon to another. If we apply a checkerboard to a sphere, we’d expect it to continue being a regular checkerboard across the surface and not suddenly change direction and twist and turn around the model, or have small patches of checkerboard here and there that don’t line up with each other.
Seams
Seams are always an issue with every projection method - it’s just part of the territory.
Let’s take a look at a real-world example for some context.
If we want to lay an entire orange skin on a flat table to see the whole thing at once, we need to peel it to get the skin off the tasty parts so it can lay flat. This involves making at least one cut, but more likely a few.
These cuts create seams when we put the skin back on. Since the bumps, divots, and imperfections in the orange skin were all there when we cut it, they’ll all line back up when we put it back together and the seams won’t be very noticeable (especially if we used a sharp knife).
If we hand-paint a pattern on the flattened orange skin and put it back on the innards, odds are good our pattern isn’t going to match up super well.
Suddenly, the seams are very noticeable because we just painted our texture on each piece and didn’t think about where the lines would connect along the edges.
This is the same thing that happens in UV mapping if our edges aren’t lined up just so. We need to decide where our cuts are going to be so the seams are as hidden as possible when any texture is applied.
Unwrapping algos have gotten pretty good at picking decent cut lines, but they still don’t know how we’re going to display the model, so if we just go full auto, there’s a real possibility that there will be an ugly seam right across an important area as we turn the model.
Now, we could spend a bunch of time figuring out how it lines up when painting the pattern so the seams don’t show, and that’s an acceptable way to build a single-purpose texture set for a particular model (projection painting, basically), but then when we go to audition a bunch of different patterns, the whole thing suddenly becomes very time-intensive. Since we’re after reusability, we need to address the UVs, not the textures.
Layout
There’s more than one way to skin an orange, though.
We can try to get it all off in one shot and make several strategic cuts to flatten it out, or we can cut it into smaller parts and put it back together like a jigsaw puzzle.
It’s similar with UV mapping. It’s rare that we can get the whole thing laid out in one big piece, but it’s possible on simpler models. It’s more likely that we’re going to have to separate out at least some of the polygons, make them into their own groups, and have deliberate seams. In UV mapping, each individual group of connected polygons is called an island.
The standard UV map area itself is in a 1x1 aspect ratio, similar to most texture sets. In the above illustration, the dark gray area in the upper right of each example is the whole UV map, and the blue boxes represent each polygon on the model. We can lay out our islands however we like within the 1x1 area, but we need to keep a few things in mind (there’s a small code sketch of these UV edits after the list):
- The polygons in the UV tile will map to whatever part of the texture they’re overlaying. If we shift the UVs, it will change which part of the texture we’ll see on those polygons. In panel 1 above, all the polygons in the UV map are only covering the eyes and beak of the bird, so that’s all that shows on the model.
- If two islands have shared edges on the model and we position the islands in different parts of the texture, the continuity of the texture will break at those seams like we see in panel 2 above.
- If we rotate a UV island, the texture will appear to rotate on those polygons relative to the other polygons on the model like we see in panel 3 above.
- If we change the scale of one island in relation to another, the island we enlarged will cover more pixels of the texture. In panel 4, since we shrunk one of the islands, the texture appears larger and blurrier because it’s overlapping fewer pixels.
- Islands can overlap, but most of the time they shouldn’t, because that means multiple polygons are mapping to the same pixels in the texture, which makes the map less versatile for many different kinds of textures.
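Each of those operations is just simple 2D math applied to the island’s points. A small sketch in plain Python (the island data is made up for illustration):

```python
import math

def transform_island(uvs, shift=(0.0, 0.0), rotation=0.0, scale=1.0):
    """Scale, rotate, then shift a UV island's points within the tile.

    `uvs` is a list of (u, v) points making up one island. Remember:
    moving them changes WHICH texture pixels the polygons cover, and
    shrinking them means covering fewer pixels (a blurrier result).
    """
    cos_r, sin_r = math.cos(rotation), math.sin(rotation)
    out = []
    for u, v in uvs:
        u, v = u * scale, v * scale                          # scale
        u, v = u * cos_r - v * sin_r, u * sin_r + v * cos_r  # rotate
        out.append((u + shift[0], v + shift[1]))             # shift
    return out

island = [(0.0, 0.0), (0.2, 0.0), (0.2, 0.2), (0.0, 0.2)]
# Shrink the island to half size and park it in the middle of the tile:
print(transform_island(island, shift=(0.45, 0.45), scale=0.5))
```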
Resolution
Texture sets that we get online are usually available in multiple resolutions, but all typically in a 1x1 aspect ratio (512x512, 2048x2048, etc.). Our UV maps are also in a 1x1 aspect ratio, but since they’re based on polygons, they’re vector, so they’re infinitely scalable. This means that we can swap in textures at different resolutions and the same areas will still map to the same polygons, the higher res ones will just have more detail on each polygon and look sharper.
If our UV polygons are small on the map, they’ll cover fewer pixels on the texture. When we get too close to those polygons, the texture will get blurry or crunchy unless we drop in a higher res version of the texture.
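The arithmetic is worth seeing once. A quick sketch in plain Python with made-up numbers:

```python
def island_texels(island_area_fraction, texture_resolution):
    """Roughly how many texels an island covers on a square texture.

    `island_area_fraction` is the share of the 0-1 UV tile the
    island occupies (0.25 = a quarter of the tile).
    """
    total_texels = texture_resolution ** 2
    return island_area_fraction * total_texels

# An island covering 25% of the tile on a 2048 texture gets the
# equivalent of a ~1024x1024 image; shrink it to 1% of the tile and
# it's down to roughly 205x205 - noticeably soft up close.
print(island_texels(0.25, 2048) ** 0.5)  # ~1024 texels per side
print(island_texels(0.01, 2048) ** 0.5)  # ~205 texels per side
```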
Why would we want the UVs smaller? Well, for a model like the one above, we probably wouldn’t unless we’re trying to overlay a very specific part of the texture, but if we had a large, complex model with a lot of parts, each part would be its own island and have to be smaller on the tile so they all fit, and sized relative to one another so the texture doesn’t change size across polygons.
The trick here is to use textures that are only as good as we need them to be for how we’re going to view the model in the scene without going overboard. We don’t want to have to load in 16k texture sets when we could just optimize the UV map a bit better and get away with 4K or even 2K versions and save a bunch of VRAM and load time.
Alignment & Distortion
So now that we have a better framework, let’s revisit the continuity vs. distortion problem.
The simplest way to see this is with a section of a cone. In 3D, a cone is made up of several quads and/or triangles. In this example we’re going to lop the top off the cone and ignore the caps so we just have six quads. In order to form a cone in 3D space, these quads need to be trapezoids (smaller at the top than the bottom) - not rectangles - so when they connect up, the face is curved. So, when we flatten the polygons out to form a UV map, we have a few choices…
The default UV set for this type of shape in C4D distorts the UVs and stretches the trapezoids back into rectangles to fill the map tile.
As we can see in panel 1 in the illustration above, this causes the texture to be continuous (all the checkers connect up to each other properly), and the texture wraps around the curvature of the model both horizontally and vertically, which is great. What’s not so great is that it’s badly distorted because the actual polygons are trapezoids, not rectangles, so there’s some stretching going on and overall it doesn’t look good.
In panel 2, we can see what happens if we keep the trapezoids in their original aspect ratio, but break them into separate islands and line them along the bottom. The horizontal lines wrap properly around the curvature of the model, but vertically it’s a mess because the edges aren’t aligned so the seams don’t line up.
On top of that, while the UVs themselves are not distorted, because of how it lays on a regular grid texture, the smaller parts of the UVs (toward the top) are covering less of the texture than the bottom parts, and the texture looks larger at the top of the cone than the bottom. So, this isn’t a good solution either.
In panel 3, we have the UVs projected flat and undistorted, and also rotated so the shared edges are connected to form one big island. This keeps the continuity of the texture intact and undistorted, which is probably the best compromise we can hope for here while still being able to swap multiple textures out without having to alter them. The only real issue is that because the UVs are rotated and the texture is straight, we’ll never get it to follow the curvature of the model as it wraps around it.
The only way to get it perfect, as we see in panel 4, is to distort the texture to match the UV set. Now it wraps around the model perfectly and looks great, but the issue is that it only does that for this one custom texture which makes it difficult to audition a bunch of textures quickly without heavily modifying them.
Now, we need to keep in mind that this is a simple cone slice. If we imagine doing this for a sports car or a troll holding a battleaxe, we can start to really understand the complexity of this whole situation. It really is an art unto itself and relies on a lot of compromising to get something that will work for our purposes.
UDIMs
UDIMs are a (relatively) new add-on to UV mapping that allows a single model to have several UV tiles. Each tile is still in UV space, but now there’s more than one of them, each mapping to a different set of polygons on the model and able to overlay a different texture set.
This solves the problem of needing ridiculously high resolution textures in order to accommodate a million little UV islands. By spreading them out among different tiles and then using a different texture set for each, we can use multiple lower resolution image sets (2k or 4k) to give each piece of the mesh enough detail without destroying the render times.
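The tiles follow a standard numbering scheme: 1001 is the original 0-1 UV square, and the number counts up along U (ten tiles per row) and then up along V. A quick sketch in plain Python:

```python
def udim_tile(u, v):
    """Which UDIM tile a UV coordinate lands in.

    By convention, tile 1001 covers UV 0-1 on both axes; the number
    increments along U (ten tiles per row) and then up along V.
    """
    tile_u = int(u)  # which column of tiles (0-9)
    tile_v = int(v)  # which row of tiles
    return 1001 + tile_u + 10 * tile_v

print(udim_tile(0.5, 0.5))  # -> 1001 (the classic single tile)
print(udim_tile(1.5, 0.5))  # -> 1002 (one tile to the right)
print(udim_tile(0.5, 1.5))  # -> 1011 (one row up)
```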
UDIMs are almost always found with single-purpose UV layouts and texture sets like we can see in the illustration above (thanks, Lee!), but it is possible to create a general purpose model with multiple UV tiles. The trick in both cases is to make sure the render engine and DCC support them, and to know how to assign different tiles to different texture sets.
Wrap Up
If you made it through this whole guide, you should have a pretty good understanding of how textures are applied to models in most cases in the 3D world, and know some of the pitfalls to look out for.
In Conclusion…
If we can get away with simple projections, those are quick and easy to implement, but break when the model gets too complex. If we’re clever about it, we can probably find ways to force them to be good enough in many cases which may or may not be worth the effort.
If we want a model that can be used to audition lots of different textures and have them mostly look right, it needs a good general purpose UV map. That means finding the right set of compromises that minimize distortion and keep continuity across the most important parts while hiding seams as best as possible.
If that becomes impossible, we can break the model into different meshes and UV them individually, or find some other creative workaround that combines UV mapping and simple projections to get what we need.
If we know we are only going to have to texture a model once, we can use a single purpose UV map and customize the texture set to that UV set to get almost no distortion at the cost of reusability. If we need even more resolution for a particular model, we can explore UDIMs.
There are also more obscure projections we haven’t covered here since, honestly, this is probably enough to chew on for now.
The next guide in this series will cover how Octane implements all this (estimated around July 2025 - stay tuned).