This is the second part of the displacement series. If you haven’t already, it’s a good idea to at least skim through Introduction to Displacement in Octane for Cinema 4D
About This Guide
It’s often not enough to just know how to plug a texture into a displacement node and flip a few switches. A lot about what makes displacement actually look good relies on the model itself, and how it’s set up. This guide is going to do a deep dive into all the factors in the model and source texture that affect displacement and how to fix common problems.
There are four things that need to be addressed before displacement will look right.
Polygon density - the resolution of the displacement; determines how sharp and smooth it will look.
Mesh topology - controls how quality holds up as the displacement moves from one part of the mesh to another.
Projection/UV mapping - controls distortion of the texture.
Quality of the source image - avoid JPEG!
Each one of these is a whole art form unto itself, so we’re going to try really hard here to keep it brief and just explain the parts of these four concepts that relate to displacement in particular.
Polygon Density
The overall amount of geometry Octane has at its disposal over a certain surface area has the largest impact on how good displacement will look.
Think of a flat 2D image. If the resolution is too low, you start losing detail, and eventually can’t make out what the image even is. As the resolution is raised, it looks better and better until it hits a point where the eye can’t distinguish further improvements. But zoom in, and suddenly it looks bad again until you raise the resolution enough to satisfy the eye. All these extra pixels come at a cost, usually in processing time and file size, so the goal is always to balance the number of pixels against the distance the image will be viewed at in its final format.
This is the same in displacement, only instead of pixels, we’re dealing with polygons (and/or voxels if you’re using texture displacement). A low density mesh is not going to displace well and you’ll see artifacts and jaggy lines. Eventually as you up the polygon density it will look better and better until adding more doesn’t make any difference to your eye (it’ll make a big difference to your gpu though). When you zoom in, it may look bad again and you’ll have to up the resolution again until the result is acceptable.
Similar to the 2D image, the goal is to have as few polygons as possible while the displacement still looks good at the closest distance you get to the model in your scene.
The difference is that 2D images are pretty cheap computationally, and if you overdo the resolution, it usually doesn’t impact performance that much. If you overdo the resolution (polycount) on a 3D model, you’re going to be in a world of pain.
So how do you control the polygon density? Usually it’s some combination of getting your base mesh to a high enough density in C4D and/or further subdividing it on the CPU using a Subdivision Surface object, or on the GPU using one of two methods Octane supplies.
Octane triangulates any mesh fed into it. If you build a model with a combination of triangles, quads and n-gons in C4D and start the live viewer, Octane will convert all of that to triangles first before doing anything else. This is why with lower density meshes, there might be cut corners or zigzag geometry if you have sharp geometric shapes in your displacement where you might expect a smoother extruded look.
This also explains why when you create a model with a certain amount of quads, Octane will show twice as many polygons as you think you have. Triangulation isn’t necessarily good or bad, you just want to be aware that this is what’s happening.
Base Mesh Density
More often than not when you’re using displacement, you will have a base mesh that is fed into some sort of procedural subdivision system in order to get enough polygons to properly displace. This is one place where texture and vertex displacement are very different.
Vertex displacement wants very high density geometry, and is quite happy to take as many polygons as you can throw at it. Most of the time you’ll have to further subdivide the base mesh to get a smooth enough result, and sometimes (extreme close up shots for example) it may not be possible or feasible due to the lack of built-in smoothing. In those cases, consider texture displacement instead.
Texture displacement is best with a medium-high density base mesh. Remember, it further subdivides the base mesh in the voxel grid at the resolution you set, so you’ll still see something even if you only have one single polygon. That something won’t look great, and will cause all manner of errors and artifacts (the most common being the “clipping” or “lost polygon” issue).
It’s also possible to go too dense with the base mesh (mostly seen around small bevels and other very tight areas of the base mesh prior to subdivision), which causes different weird errors and artifacts.
How do you know what’s too dense or not dense enough? Iterating, practice, experience. So much of this relies on the map itself, how large the model is, how far you’re going to be from it.
Subdividing the mesh
Both Octane and Cinema 4D have procedural methods of subdividing an object. Subdividing simply means taking each polygon and splitting it into even pieces.
The great part about this is that it’s all procedural, so you can change the density if you decide you need to get closer to or further from the object. You can also turn off the subdividing until you need it, making the scene more manageable as you’re moving things around or adjusting other parts of it.
Subdivision is measured in levels. Each level splits every polygon into as many quads as it has sides (a triangle becomes three quads, a quad becomes four quads, a seven-sided n-gon becomes seven quads, etc.). After the subdivision step, Octane then triangulates the quads, effectively doubling the number of polygons.
If you have one single quad and subdivide it once (level 1), you end up with four quads. If you go to level 2, it takes the 4 quads from level 1 and splits each of those into four, so you end up with 16. Level 3 splits those 16 polygons into 4 each to make 64, after that it’s 256, 1024... etc etc. You can see how this can add up really quickly as you get into the higher levels, and especially if you’re starting from an already dense base mesh.
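The growth is easy to sanity-check with a bit of arithmetic (hypothetical helpers for illustration, not an Octane API):

```python
def subdivided_quads(base_quads, level):
    """Each subdivision level splits every quad into four."""
    return base_quads * 4 ** level

def gpu_triangles(base_quads, level):
    """Octane triangulates after subdividing: two triangles per quad."""
    return subdivided_quads(base_quads, level) * 2

# One single quad, levels 0 through 5:
for level in range(6):
    print(level, subdivided_quads(1, level))  # 1, 4, 16, 64, 256, 1024
```

Note the 4x jump per level: that's why "just one more level" on a dense base mesh can take you from fine to frozen.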
Subdividing in the host app (C4D)
Cinema 4D can do this via a Subdivision Surface (SDS) object. The SDS can go to level 6, which can easily stall out or crash C4D if your base mesh had a decent amount of polygons to start with, so up the levels very carefully.
Doing the subdivision at this level has a few advantages - C4D allows for SDS Weighting, which makes it easier to control the silhouette of your shape without adding more cuts to the model. You can also see what the final shape will look like without rendering, since the effects of an SDS object show up in the editor view. Finally, C4D offers a few more subdivision schemes than the other methods, which may or may not come in handy.
Note: the C4D SDS has Editor subdivisions and Render subdivisions - Octane uses Editor subdivisions, so you can ignore the render ones.
Subdividing in the engine (Octane)
Octane can also do the subdivision at render time in the Subdivision Group tab in the Octane Object tag. “At render time” means it doesn’t appear in the editor viewport or affect C4D’s performance. Instead, it does the subdivision once you hit the render button or start the Live Viewer. Depending on how many polygons we’re talking about, this could mean a long wait for it to process before it starts rendering the scene.
Similar to C4D’s Weight SDS tool, Octane has a Subdivision Sharpness field that alters the influence of the subdivision by sharpening corners. It doesn’t give you as much control over sharpening as C4D since you can’t specify individual edges for sharpening, but it’s good in a pinch if you don’t mind all the corners being sharpened the same amount.
Octane subdivisions take into account C4D subdivisions. If you put a 1x1 segment plane in a C4D Subdivision Surface object and set it to level 2, it creates 16 polygons. If you put an Octane Object tag on that and set the tag’s subdivision level to 1, it takes the 16 polygons and divides each into 4, giving you 64 quads. Octane triangulates all geometry, so it further splits each quad into two triangles, for a grand total of 128 polygons on the GPU (C4D still only sees the 16 that it created with the SDS). So be careful when adding an Octane Object tag to an object that’s already in a C4D SDS.
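Stacked subdivision multiplies, so it's worth tallying before you hit render. A quick sketch of the bookkeeping (plain arithmetic, not an actual API):

```python
def gpu_polygons(base_quads, c4d_level, octane_level):
    """Quads after stacked C4D SDS + Octane tag subdivision,
    then doubled by Octane's triangulation step."""
    quads = base_quads * 4 ** (c4d_level + octane_level)
    return quads * 2  # two triangles per quad

# 1-quad plane, SDS at level 2, Octane tag at level 1:
print(gpu_polygons(1, 2, 1))  # 128 triangles on the GPU
```

The levels simply add in the exponent, which is why combining a C4D SDS and an Octane tag sneaks up on you so fast.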
Subdividing in the material
Octane’s displacement node also gives you control over subdivision. How it does it depends on whether you’re using texture or vertex displacement, and this is one of the biggest differences between the two.
Texture displacement requires a level of detail to subdivide the mesh. This doesn’t show up as actual subdivided polygons in Wire mode in the Live Viewer, as you can see above. Instead, it does behind-the-scenes subdivision calculations in a voxel grid, and you can see the results in the render, just not how it got there.
Generally speaking, this value should match the input source resolution, so if you have an 8K x 8K image you’re using as a displacement map, you should set the displacement resolution to 8192 x 8192.
You can choose to go higher or lower (between 256 and 8192), and sometimes upsampling a low resolution texture or downsampling a higher resolution one can yield either better or faster results depending on the base mesh.
The mesh still needs to be relatively dense, since this form of displacement is very prone to visual errors with a lower density mesh, especially with high-contrast maps with sharp corners. If you have soft blobby displacement and are viewing it from far away, you can get away with a low density mesh, but if you want a lot of detail, you need a lot of real polygons in addition to a finer voxel grid.
There’s a delicate balancing act here between the density of the base mesh, the amount of subdivision done in C4D, the amount of subdivision further done in the Octane Object tag, and then the amount of subdivision done even further by the voxel grid of the texture displacement node. It’s VERY EASY to go overboard and crash the app by setting all of these too high, so start low, and ramp up slowly until you find an acceptable result.
A good starting point is to set the displacement level of detail to the same as your texture size, set the filter, and then choose one place to do the geometry subdivision (C4D or the tag): start at level 0, then look at level 1, adjust the filter, go to level 2, adjust the filter, etc.
Some maps actually do benefit from setting the displacement resolution higher than the image resolution. If you’re getting errors with your 1k or 2k texture and have already tried adjusting the filter and making the mesh more dense, you may want to try upping this to see if it helps.
Vertex displacement gives you the option of setting subdivisions in the displacement node itself. This overrides the Octane Object Tag’s subdivision level, so you can’t combine the two. It still respects the C4D subdivision surface object, so again, be careful when applying this material to a new mesh that’s already in an SDS object.
Subdividing in the displacement tag is a kind of on-or-off situation. It doesn’t allow for control over sharpening and subdivision methods like the Object Tag does. Because it’s in the material, however, it’s good for porting between objects or projects without having to take the extra step of setting up an Octane Object tag.
Finding a good density is easier than texture displacement, but you should still start low, and slowly ramp up until you find an acceptable result.
Mesh Topology
The topology of your model refers to how the polygons are distributed across the mesh, and whether the polygons are triangles, quads or n-gons.
In general with SDS modeling, but especially with displacement, and especially x100 with vertex displacement, the goal is to get the smoothest possible transition in your mesh between your smaller polygons and larger ones. This ensures the density difference isn’t too severe from one part of the mesh to the next, which would otherwise cause weird patches of the model with very pixelated shapes.
Texture displacement does a pretty good job of mitigating this due to the way it handles subdivisions using the voxel grid, but it has its own issues with uneven topology, and it still behaves far better (fewer artifacts and less tearing) with a more evenly distributed, higher density mesh.
The Catmull-Clark method of subdividing is pretty good at distributing the subdivided polygons to try to smooth out the transition, but there’s only so far it can go.
This illustration shows the problem off pretty well using Vertex displacement. Here we have a plane that was unevenly divided. There’s one small polygon at each corner, four long skinny ones down the edges, and one giant one in the center. When it’s subdivided, the algo does its best to even out the distribution, but you can see that the displaced extrusion in the corner is a lot better quality than the ones directly next to them. The further away from the corner it gets, the lower density the mesh is and the worse the displacement looks. You’d have to up the density pretty considerably to make those center extrusions look good.
With just a few more cuts in the mesh, it helps distribute the polygons a bit better and it looks more consistent as seen in the second set of panels in the illustration above.
Of course the ideal way would be to just evenly divide the original mesh, and this is fantastic if you only ever use regular geometry in your work like planes and cubes, but once the model gets more complicated, you have to think more about polygon distribution to get the best effects without going crazy with the subdivisions and causing undue pain to your GPU.
Projection and UVs
Both vertex and texture displacement are controlled via a material, so they’re both subject to the same mapping issues found with any other channel.
Just as a quick refresher, every texture needs to somehow be projected onto the geometry so a render engine like Octane knows how to render it. There are basic projections like box, cylindrical, and flat which are quick and easy for geometry that’s shaped like those things. They fall apart when applied to curved surfaces, quick turns, and other complex shapes.
UV mapping (mesh UV) is still the best way to precisely control how textures map across more complex surfaces. In this process, you “unwrap” the geometry and lay all the polygons flat on a square area. You then take these flattened polygons and overlay them on top of your texture, and it creates a map that tells the engine which pixels of the textures go on which polygons.
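The “which pixels go on which polygons” part boils down to a coordinate lookup. A minimal sketch of the idea (conventions like V-flipping vary between engines, so treat this purely as illustration, not Octane’s actual sampling code):

```python
def uv_to_pixel(u, v, width, height):
    """Map a UV coordinate in the 0-1 square to a texel index.
    V is flipped here because image rows are typically stored
    top-down while UV space runs bottom-up."""
    x = min(int(u * width), width - 1)
    y = min(int((1.0 - v) * height), height - 1)
    return x, y

print(uv_to_pixel(0.0, 1.0, 8192, 8192))  # (0, 0) -- top-left texel
print(uv_to_pixel(0.5, 0.5, 8192, 8192))  # (4096, 4096) -- dead center
```

Every vertex carries a UV like this, so where you place the flattened polygons on the 0-1 square decides exactly which pixels drive the displacement.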
It sounds easy enough, but it’s a very complicated process that is almost never perfect. Fortunately there are new tools and algorithms that are getting better every day to make this process more automated. You still have to make decisions about how the texture is going to fall across the model though, and this is a whole artform in and of itself that we don’t have time to cover here.
The takeaway here is that your models should be properly UV mapped to get the best results out of displacement (assuming your model is anything more complex than a bunch of cubes or cylinders).
In the illustration above, you can see that mesh UV projection on the shape on the left gives the result you’d expect. The symbols follow the shape, and there are no tiling issues and the distortion is kept to a minimum. Note that this model has been UV mapped properly.
The next three are using built-in projection types that don’t take UVs into account.
The second image is set up using Box (or cubic) projection. This is the most common of the non-UV projections and you can just think of it as six projectors projecting the texture at 90 degree angles from each other. This works well for boxy objects, and it’s actually pretty decent for the top part of this shape, but it falls apart toward the bottom where there’s a sharper bend, and the tiling gets screwy and shapes are cut off.
To the right of that is flat projection - this is just a 2D projection on one side of the model. It works really well if you have a flat surface like a TV screen or a tabletop, but as you can see here, once the shape starts bending away in 3D, the textures stretch and compress and get all squirrely.
The last one here is triplanar projection. This type of projection is extremely versatile and you can do some interesting things with it when combined with vertex displacement (it doesn’t work with texture displacement). Triplanar allows you to pick a different texture for each side of the model, and there are controls that blend the texture as it goes around corners. In this example, it does a somewhat ok job, but there are a few problems. Because there are two different kinds of bends, it’s hard to set the angle just right for where the blend should happen, so you end up with weird blips in the texture near the curves. Also, this object is using the same texture for both albedo and displacement. You can see the texture kind of “coming off” the extrusions because the projection isn’t lining up.
Triplanar is good in a pinch with a single texture for certain models and excellent for when there are several textures that you want to apply to different portions of the model depending on the angle, but it’s no replacement for uv mapping on a more complex model.
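For the curious, the blending behind triplanar projection is usually some variant of weighting the three axis projections by the surface normal. Here's a sketch of the common trick (an assumption about the general technique, not Octane's actual implementation):

```python
def triplanar_weights(nx, ny, nz, sharpness=4.0):
    """Blend weights for the three projection axes, derived from the
    surface normal: the more a face points along an axis, the more
    that axis's projection contributes. Raising `sharpness` narrows
    the blend zone around corners."""
    wx = abs(nx) ** sharpness
    wy = abs(ny) ** sharpness
    wz = abs(nz) ** sharpness
    total = wx + wy + wz
    return wx / total, wy / total, wz / total

print(triplanar_weights(0.0, 1.0, 0.0))  # (0.0, 1.0, 0.0) -- pure top projection
```

On a 45-degree edge the weights split evenly between two axes, which is exactly where those “blips” appear: the texture there is a mix of two projections rather than either one cleanly.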
The importance of UVs
As mentioned before, you really want your shapes properly UV mapped for displacement to work at its best. The above illustration shows two cases where the mesh and material are exactly the same, but the UVs are different.
The example on the left is exactly what you don’t want.
The 3 separate overlapping UV islands make it so the texture doesn’t tile properly - a simple shape like this should have one continuous UV island so the texture flows nicely through it like it does on the right hand side of the illustration.
Each island is distorted into a square - in this model, if it’s going to be broken into multiple islands (which it shouldn’t), each island needs to keep the rectangular aspect ratio of the geometry it’s mapped to. If the aspect ratio of the islands is wrong, the texture will distort on X or Y and become really wide or really tall.
The islands are also sized improperly relative to one another - this means that the texture appears much larger or smaller on one island than the others. How large the UV island is compared to the texture makes a large difference too - the smaller the island, the less resolution will be applied to that area. This is one of the things to think about when setting the model up and deciding if you want to split off the area that’s being displaced so it has its own UV set.
The example on the right is the proper way to do this - there is one continuous island with no overlapping which means the texture tiles nicely across the whole surface. Obviously this doesn’t work on every model, and more complex ones need more complex UV layouts. The polygons are all in the right aspect ratio and relative size to one another, so all the symbols look like they are the same size.
There’s a lot more to UV mapping than this, and it gets complicated pretty fast, but it’s good to know how much it affects displacement and is absolutely worth learning more about.
Source Image Quality
So you’ve retopologized your model, UV mapped it, optimized your polygon density, and for some reason, your displacement still looks crunchy. The last thing to consider is the quality of your source image.
Every single displacement map is going to behave differently depending on the contents of the image, your geometry and UVs, and whether you’re using texture or vertex displacement. In some cases you can absolutely get away with a trashy low quality source image, but in other cases it’ll ruin your render.
Image quality can be broken down into resolution, bit depth, and compression. Certain file formats will support certain parameters of each of these. Color space is also a factor here - not so much for quality, but how the source image affects the final output.
You thought we were getting nerdy before? Hold our beer.
Generally the best file format to use for displacement maps (if you’re generating them yourself or have a choice) is EXR in 32-bit mode with PIZ compression.
PSD, TIF, and other high bit depth, lossless formats work too, but aren’t as efficient as EXR. PSD uses a very light compression, resulting in very large file sizes, and TIF uses ZIP or LZW compression, which are both more processor intensive and quite a bit larger than EXR with PIZ.
Avoid JPG like the plague if you have a choice. You’ll see why shortly. There are ways to try to mitigate what it does to displacement, and they work sometimes, but if you can get a better quality source image, definitely do that.
Note that not all features of PSD are supported in Octane (vector layers being pretty notorious in this regard). Any PSD that’s using something like this will break displacement, so if you’re building a map from scratch in Photoshop and want to use PSD as your final map, always save a flattened copy. Alternatively, you can also export the file as an EXR or TIF, but remember to uncheck save with layers.
Resolution
Source file resolution refers to the number of pixels in the image. It’s measured in two dimensions - width and height. Octane can accept up to 8K (8192x8192) textures for texture displacement, and higher for vertex displacement. There usually isn’t much need to go above 8K, and it eats up a LOT more resources for often very little gain (if any) in quality.
All modern file formats support at least 8k, so that’s not an issue.
This is a different resolution than the one found in the texture displacement node (not applicable in vertex displacement). The resolution we’re talking about here is the actual pixel dimensions of your 2D source image, not the voxel grid size you generate using the resolution dropdown in texture displacement.
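To see why oversized textures “eat resources”, consider the raw pixel data alone (a rough sketch: real GPU memory use depends on the engine’s internal formats, so take these as lower bounds):

```python
def texture_mib(width, height, bits_per_channel, channels=1):
    """Raw, uncompressed pixel data size in MiB. A displacement
    map is typically a single grayscale channel."""
    return width * height * channels * (bits_per_channel / 8) / 2 ** 20

print(texture_mib(4096, 4096, 32))  # 64.0 MiB
print(texture_mib(8192, 8192, 32))  # 256.0 MiB -- 4x for one step up
```

Doubling the resolution quadruples the memory, which is why a 4K map that already looks fine is usually the better choice.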
There are tricks to smooth out low resolution images that may or may not work depending on the particular case. You can try upsampling the image, adding more polygons to the scene, or covering it up with a bump/normal map or a busy albedo channel. For texture displacement you can try the filter, but if you get too close to the model, usually the only thing to do is source or create a higher resolution texture.
If you can get away with using a lower resolution image (say a 512px tile for a small repeating pattern), then definitely do so - this will help with the pre-render time. Again, this is going to depend on how close you are to the model, and how much detail you need, and the nature of the image itself.
If you see jagged and/or blurry edges in an area where it’s supposed to be sharp and smooth, it’s likely a resolution problem.
Bit Depth
This refers to the amount of color data stored for each pixel. The more bits per pixel, the smoother the transition from one shade of gray to the next, and the smoother curves and diagonals will be. This applies both to curves along X and Y, and, often more importantly, to the number of “steps” your displacement has along the displaced axis.
8-bit images can store 256 levels of gray per pixel. This may sound like a lot, but as with 2d resolution, the closer in you get (or the more you push the displacement height), the more you see this fall apart. If you’re working with high contrast shapes with not a lot of shades of gray, you can get away with 8-bit. It becomes much more of an issue when there are gentle curves.
16-bit PNG or EXR files are often a good compromise - they have 65,536 levels of gray, and will look smooth enough in a lot of cases. 16-bit PNG displacement maps also have the advantage of being a lot easier to find out in the wild than EXR maps.
32-bit EXR/TIF/PSD files are the best quality, and will always produce the smoothest, sharpest displacement maps if everything else is set up right.
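To put numbers on that ladder (simple arithmetic, nothing Octane-specific):

```python
def gray_levels(bits):
    """Distinct gray values an integer image format can store per pixel."""
    return 2 ** bits

print(gray_levels(8))   # 256
print(gray_levels(16))  # 65536
# 32-bit EXR stores floating point values, so it isn't a fixed ladder
# of steps at all: precision scales with the value itself.
```

Each extra bit doubles the number of steps, so the jump from 8-bit to 16-bit is a 256x improvement in height precision, not a 2x one.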
Bit depth cannot be upsampled without very specific tools which may or may not give you the result you’re after. If you take an 8-bit JPEG and simply save it as a 32-bit EXR, you’ll get the same problems you had with the JPEG. For this reason, it’s also not recommended to build displacement maps in apps that don’t support 32-bit, like Illustrator.
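This is easy to demonstrate: quantize a smooth ramp the way an 8-bit save would, then “promote” it to float and count the distinct values (pure illustration, no image libraries involved):

```python
# A smooth 0..1 gradient, squeezed through an 8-bit bottleneck,
# then stored as float: the banding survives the format change.
ramp = [i / 9999 for i in range(10000)]
eight_bit = [round(v * 255) / 255 for v in ramp]  # the 8-bit save
promoted = [float(v) for v in eight_bit]          # "re-saved as 32-bit"

print(len(set(ramp)))      # 10000 distinct values in the original
print(len(set(promoted)))  # 256 -- still only 8-bit's worth of levels
```

The container got bigger, but the data inside it never got any new in-between values, which is exactly why the stepping comes along for the ride.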
The steppiness can be mitigated a little in 8-bit images by using dithering when saving the image, but this creates an uneven surface and isn’t really suitable for closeup work or when you want your surface to be smooth. The texture displacement filter doesn’t affect this, and controlling the subdivisions in vertex displacement can work to varying degrees of success, but there’s no substitute for a higher bit depth image if you’re having problems like this.
If you see steppiness in the height (not jaggedness around the edges), it’s probably a bit depth problem.
Compression
There are two types of compression - lossy and lossless.
Lossy compression literally deletes data from your image when the file is saved. It can create artifacts that will show up in your displacement, and you can’t easily repair it. Worse yet, the more the image is re-saved, the worse the artifacts get. At best it just adds imperfections to a surface, at worst it will chip corners and produce weird spikes and other undesired results. JPEG compression is very noticeable, especially the more it’s compressed. DWAA/DWAB compression in EXR is far less noticeable, but still not ideal.
Seriously folks, stay away from JPEGs for displacement maps.
Lossless compression, as the name implies, doesn’t lose data, but the host application needs to spend some cycles decompressing it during the pre-render process before it’s usable. PNG uses a compression algorithm that’s pretty tough on the processor and can add time to the pre-render stage, but it also keeps the file size extremely low, which makes moving the image into VRAM faster - sometimes this is a net win, sometimes not. TIF is an older format and only uses ZIP or LZW compression algos, both of which are pretty inefficient. PIZ compression in an EXR is currently the favored algorithm for graphics - it has the best compression-to-speed ratio. One of the seemingly endless reasons to love EXR.
If you’re using a displacement map and your surface all of a sudden has weird divots and bumps where it’s supposed to be smooth, or sharp details are now crunchy, it’s probably a compression issue.
Color Space
This is a whole other can of worms.
Displacement expects a non-color data (or Linear color space) input. Any textures or other nodes being fed in need to be converted to linear to displace correctly.
As of this writing, the default Color Space setting for the Image Texture node is Linear sRGB + legacy gamma, and a legacy gamma of 2.2. This is meant to properly handle sRGB images, which are far more common than linear ones.
Texture displacement takes care of the conversion automatically, so you don’t need to change settings in the ImageTexture node. You’ll often still get better results with a linear texture than an sRGB one though, especially if geometric accuracy is a concern.
Vertex displacement needs the ImageTexture node to be set properly. For images with an sRGB color profile (PNG, JPG, sometimes EXR/TIF/PSD), the default settings are correct. For images with a Linear color profile, straight lines will look curved because of the color space interpretation. The easiest way to fix this is to choose Non-color data from the dropdown. That will just use the linear values and there won’t be any distortion.
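For reference, here is the standard sRGB transfer function; Octane applies the equivalent conversion internally via the Color Space setting, so this is only to show how far the two interpretations diverge:

```python
def srgb_to_linear(c):
    """Standard sRGB-to-linear transfer function, per channel, 0-1 range."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.0))            # 0.0
print(srgb_to_linear(1.0))            # 1.0
print(round(srgb_to_linear(0.5), 3))  # 0.214 -- mid gray sinks well below 0.5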
If you drop in an image with a displacement map that’s supposed to look like golf ball divots and you get gumdrop divots instead, it’s probably a color space issue.
For texture displacement, you need to run an image texture into the input of the displacement node. The Baking texture node lets you ‘bake down’ any kind of procedural texture into a flat image that will then serve as a texture for displacement. The baking texture gives you a lot of options.
Resolution is pretty self-explanatory. Start with 4096 and then see if you need to move to 8192. If you can get away with 2048 or lower, that’s great too, but that would probably mean a small repeating tile without a lot of intricate detail.
Type should be set to HDR Linear Space.
As you can see, displacement is a thing. Hopefully you have enough knowledge now to get much better results and troubleshoot your projects. If not, look out for the next guide in the series which will be a detailed look at troubleshooting displacement.