Rendering Series
Version 1.0, Updated Nov 2025 using Octane 2025.4 and Cinema 4D 2026.0
~10,000 words, average read time: Forever, give or take a few hours.
About this guide
This guide exists to help clear up confusion about the term “linear”. Yeah, you’d think you wouldn’t need 10,000 words for this, but here we are.
Part I
Upfront Stuff
Introduction
If you’ve been using a 3D application for a little while, you’ve undoubtedly come across the term “linear” at least a few times (a lot more if you’ve read a bunch of these guides).
Even though the definition of the word never changes, how it’s applied can have a huge impact on how our renders look if we don’t know what it means in a particular context.
Do colors sometimes look washed out or darker than they should? Is it hard to get a pleasing, even distribution in a gradient? Does nudging a particular slider one way sometimes have a huge effect, while nudging it the other way doesn’t appear to do anything at all?
It’s probably a linear vs. non-linear thing.
Let’s take some time out of our busy schedules to really dive in and explore this concept in the most high-level, understandable way possible so we can better know when to check that “linear” box (just kidding, it’s nowhere near that easy).
Definition
Merriam Webster’s first definition of Linear is this:
- of, relating to, resembling, or having a graph that is a line and especially a straight line.
When used in math, physics, and Octane, it means that there’s a constant relationship between two sets of values.
A classic example of this is an object’s speed. If a car is moving at exactly 54 kph for one hour, it means there’s a constant relationship between its distance and time for that period. Within any second that we choose over the course of that hour, that car will have traveled 15 meters. If we double the speed to 108 kph (whee!), then in any given second the car will have gone 30 meters. Halve the rate, and it’s 7.5, etc. If we plot this out on a graph, it looks like a straight line. This is a linear relationship.
If the driver is distracted and scrolling through Instagram, or other drivers are rubbernecking an accident on the other side of the road, the car can’t move at a constant rate. Sometimes it might be going 100 kph, sometimes 22.3 kph, sometimes it’s stopped. Over the course of that hour, we can’t predict how many meters it may have traveled in any particular second because the speed was erratic - we’d have to look at each slice of time individually to know how far it went. If we plot this relationship between time and distance, it doesn’t look like a straight line. This is a non-linear relationship.
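If it helps to see that in code form, here’s a trivial Python sketch (purely illustrative - obviously not something Octane does):

    # Linear: at a constant speed, every second covers the same distance.
    def meters_per_second(speed_kph):
        return speed_kph * 1000 / 3600

    print(meters_per_second(54))   # 15.0 m in any given second
    print(meters_per_second(108))  # 30.0 m - double the input, double the output

    # Non-linear: a varying speed means each second has to be checked individually.
    speeds_kph = [100, 22.3, 0, 61]                    # made-up samples from our distracted driver
    print([meters_per_second(s) for s in speeds_kph])  # no single constant relationship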
What does this mean for us Octane users though?
Part II
Context
Light Calculations
Visible light is the subset of electromagnetic radiation that we can perceive. Physically, light calculations work in a linear fashion. Light moves at a set speed in a vacuum (like a billion kph or so), which creates a mathematical constant that we can put other numbers up against and create predictable calculations.
Octane is more or less a visible light simulator. It does all its physical calculations in a spectral and linear fashion so that it can get the most true-to-reality results possible.
Our Brains
Our brains are wired in such a way as to perceive light in a non-linear fashion. We are much better at differentiating shadows than brighter values, and this bias gets even stronger the darker our surroundings are. It ended up this way because things hiding in the shadows at night have a better chance of murdering us than things waltzing around in broad daylight, so we got to live longer if we could pick out dangers in the dark.
Let’s imagine we were in a room with a single light source that was immune to all the other factors that get in the way of these thought experiments (like resistance or particulate in the air or whatever). If we started out with our magical light consuming 10 watts of power, and then increased it to 20 watts, we’d end up with a pretty substantial difference in what we can see in the room. If we then moved it to 150 watts (after we give ourselves a chance to adjust), and then raised it to 160 watts, we wouldn’t notice as large of a jump in perceived brightness (if any). Physically, in both cases we added 10 watts, but perceptually, the difference was… drumroll please… night and day.
Our Displays
Older display tech (CRT) worked in a way where if we sent it linear signals, the image appeared dark and muddy. To compensate for this, we had to send signals modified in a non-linear way (mostly by boosting the darks) which counteracted the natural properties of the CRT tech and “corrected” the resulting image.
Modern tech does not have the same limitations as CRTs, but in order to keep compatible during the changeover, our displays were designed to accept the same non-linear signals that were already being used. They were then equipped with baked-in transfer functions (sRGB being the most common one) that convert the non-linear signals to linear ones that the actual light-emitting tech is expecting to work properly.
Fortunately – as we’ll see later – non-linear signals have a huge advantage when it comes to efficiency and transfer speed, so this actually ended up working out in our favor.
Where We’re At So Far
Light is considered physically linear. Physical calculations performed by the Octane Engine that simulate light are done using linear values to mimic reality as best as possible. The values are converted to non-linear signals and sent to the display. The display internally converts the non-linear signals to linear signals so the diodes emit light properly. Our brains then perceive light received from the display in a non-linear way.
We can start to see why this is confusing, and we haven’t even opened the app yet.
In this chain, we don’t really have to worry about the light calculations themselves (thankfully) - that’s what Octane does. We also don’t have to worry about how our monitor converts signals once we buy the correct one and set it up right. It’s the whole middle part that’s of concern to us.
Our Part in This (The Really Important Thing®)
The reason this guide is ~10,000 words instead of 100 – or simply not needed at all – is that when we’re working with a render engine, we’re dealing with two different methods of handling and displaying data.
- There’s a method that uses a linear relationship that our CGI/VFX software works with to perform physically accurate calculations…
- …and there’s a method that uses one of several non-linear relationships that are optimized for our brains to consume in various conditions.
Unfortunately, we have to deal with both methods while using the software, and if we use the wrong one in any given situation, things look and act weird, and 3D goes from a joyous flow state to abject hell while we’re mashing buttons, flinging sliders, and cursing the poor developers in twelve different languages.
Our task is to learn which parts of the system use which methods, and make sure we’re feeding the correct type of data in so our results come out predictable and pretty.
Part III
Under The Hood
Nerd Alert 🤓
This section is a bit more technical than the rest of this already technical guide. Great effort was made to keep the language as approachable as possible, but it still may require a few readthroughs to fully get (it certainly took several writethroughs to get it to this stage). Unfortunately, a lot of these concepts don’t illustrate well, so we’ll have to do more reading here than looking at pretty pictures.
While it’s not strictly necessary to understand the level of detail presented in this part of the guide, once we get it, it really does help give us the instincts we need to pick the right option in a lot of cases without having to think about it too hard.
Here goes:
Data Sets
A data set is... uh... a set of data :D. It’s a bucket of values that’s either temporarily stored in RAM/VRAM, or permanently stored on disk or in the cloud or something. We’re going to use this term here so we don’t have to constantly read stuff like “or stored data in RAM, or stuff on disk, or or or..” Any data that lives in one place for any amount of time is a data set.
These can be further broken down into two loose categories:
Larger data sets are needed by a render engine or post production app to perform extremely precise calculations. Larger data sets are also, well, large - they take up a lot of space and can be cumbersome. This is fine if our systems can handle it and there’s a reason to keep all the data, but overkill if not.
Examples of larger data sets are all the values for light rays emitted from a particular source, or high accuracy AOV data files intended for post production.
Smaller data sets are better for cases where we want the set to be as lightweight as possible so it packs down into reasonably-sized chunks that are faster to stream over the internet and take up less permanent storage space. Smaller data sets suffer from the drawbacks of being small – most notably that they can’t possibly contain as much data as large ones – so every bit has to count and sacrifices have to be made for the sake of efficiency.
Examples of smaller data sets are images or videos being pushed to a monitor, or a media file stored on our computer or phone that we’re just going to use to look at or share with others.
Data Encoding
Both larger and smaller data sets need to be contained somehow. Unless we’re talking about files (jpeg, mp4, etc.), we don’t ever have to think about the actual container itself, but we do get exposed to terms like “encoding”, “integers”, and “floats” over the course of our render engine adventures, so it’s good to get a high-level understanding of what they are.
Encoding
There are several different ways a number can be represented. 100,000 can be typed out with all the zeroes like that, or written out as “one hundred thousand”, or shortened to 100k, or 1x10⁵, etc. When we talk about the act of converting values into a representation that allows a computer or device to store and transmit a data set, we use the term “encoding”.
Fortunately, there are only two types of encoding that we run across in the graphics world:
Integers
Integers (ints) are whole numbers like 0, 16, 255, etc. This is a straightforward, fast, and easy encoding scheme for a computer (and user) to work with, and it’s great for smaller data sets.
A standard-issue JPEG, for instance, is capable of storing data for several million pixels, each of which displays one of about 16 million color values. That sounds like a lot, but to a GPU it’s nothing. What makes this type of encoding particularly easy is that it’s always using the same scale and amount of accuracy: zero to 16 million, all in single-unit increments.
In the UI, we mostly run into integers when picking colors - we’ll see a 0-255 scale in the R, G, and B channels to get an 8-bit color for instance. 256x256x256 = ~16M.
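As a quick illustration (a hypothetical Python sketch, not how any particular app stores its pixels), three 8-bit integer channels pack neatly into one of those ~16 million values:

    # Three 8-bit channels (0-255 each) give 256^3 = 16,777,216 possible colors.
    def pack_rgb(r, g, b):
        return (r << 16) | (g << 8) | b      # one 24-bit integer per pixel

    print(256 ** 3)                    # 16777216
    print(hex(pack_rgb(255, 128, 0)))  # 0xff8000 - a nice orange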
Floating point values
Floating point values (floats) are numbers represented in a very particular way that allows for massive changes in scale and super fine precision. Floats also make operating on enormous quantities of those bulky numbers a lot more efficient.
When a computer sees a float, it’s looking at a cluster of 16 or 32 ones and zeroes. Digits in particular locations tell it the sign, scale, and accuracy of a value very quickly. It looks like this: 00111111011110110100000000000000, which isn’t super helpful to us humans, but may be interesting to know for trivia night.
When we see floats in the UI of our 3D apps, they look like numbers with a decimal point on (typically) a zero-to-one, or negative one-to-one scale like 0.98, 0.1, or -0.882. Usually what we interact with is nowhere near as precise as the computer can handle, but how many of us have ever cared whether a gray value was 0.059987 or 0.059988? We’re simple meatsacks, so three digits will usually suffice.
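For the curious, here’s a small Python sketch (assuming standard IEEE-754 single precision) that exposes those 32 ones and zeroes for any value we care to type in:

    import struct

    def float32_bits(x):
        # Reinterpret the 4 bytes of a single-precision float as an unsigned int,
        # then format it as 32 bits: 1 sign bit, 8 exponent bits, 23 mantissa bits.
        [raw] = struct.unpack('>I', struct.pack('>f', x))
        return format(raw, '032b')

    print(float32_bits(0.98))     # sign | exponent | mantissa
    print(float32_bits(-0.882))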
So why is this level of accuracy needed? When trying to simulate real-world physics, not enough data or slightly off calculations can be the difference between a render being believable or not, even if we can’t put a finger on why. The engine needs to be able to work with trillions or quadrillions of light rays that can cover millimeters or kilometers of distance and know down to the nanometer which wavelengths it’s working with. All of that super fine level of simulation is done so that when it does show us something, it’s as realistic as possible and we accept it as plausible. Most of the pro graphics and audio tools we use will have an under-the-hood 32-bit or even 64-bit float precision engine at the core to get this kind of accuracy.
Why aren’t all data sets stored as floats if they’re so great? All that extra conversion, processing, and complexity eats up resources, so for simple calculations, all this fancy float encoding would get in the way and probably even slow us down. We only see the efficiency gains of floats in larger data sets, where integers get too unwieldy.
If we just see the term “float” by itself, it usually means 32-bit. “Half float” is sometimes used to indicate 16-bit floating point values, and it’s used to save some space when full 32-bit floats are overkill (mostly in the compositing world).
Linear/Non-linear Encoding
How each value is represented is only part of the encoding scheme. Another part is which values are stored.
As mentioned (and can probably be guessed), larger data sets produced by Octane are large. We’re talking trillions, quadrillions, or more values. When this many values can be efficiently represented as floats, they can all be kept in a large data set (in memory or as an EXR). None of them need to be discarded.
If all the values are available to store, the easiest, fastest, and best way to do this is in a linear fashion. Each one can be directly addressed without the software having to run it through some algorithm to “unpack” it. This should make more sense in a minute here.
In 3D, data sets typically start out as larger, and then are pared down to fit in smaller containers so we can more easily distribute and display them, thereby creating smaller data sets. This paring down means that data is destroyed. It has to be - there’s no current way to nondestructively reduce the larger set to get the file sizes we need.
So how do we decide what to keep and what to trash?
If we were new to this whole paring down thing, we might be tempted to just keep every 10th, or 100th, or 1000th value and trash everything else to reduce the data using a linear method. The problem is, as we now know, that our brains perceive light in a non-linear fashion, and can discern changes in shadows better than brights. So, proportionally speaking, a linear reduction would throw away more of the perceptually-important data (the stuff our brains care about). If we were using this method and wanted to keep our brains happy, we’d have to keep way more data than we’d actually need in order to retain enough of “the good stuff”, and that’d be inefficient.
Instead, we use a non-linear method of reducing the data set that’s more in line with what our brains want. It keeps more of the perceptually-important stuff on the low end of the scale and scraps more of the perceptually-irrelevant values on the higher end that we can just as soon do without. This effectively compresses a file in a way that’s optimized for our viewing pleasure while keeping the smaller data set as small as possible.
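To put a rough number on that, here’s a small Python sketch (using a plain 2.2 power curve as a stand-in for a real transfer function, so the exact counts are only illustrative):

    # How many of the 256 available 8-bit codes describe the darkest 1% of linear light?
    linear_codes = [c for c in range(256) if c / 255 < 0.01]              # linear encoding
    encoded_codes = [c for c in range(256) if (c / 255) ** 2.2 < 0.01]    # 2.2-curve encoding

    print(len(linear_codes))    # 3  - linear encoding spends almost nothing on the darks
    print(len(encoded_codes))   # 32 - non-linear encoding spends ~10x more codes there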
Important: The actual data itself is not linear or non-linear, the way in which it’s stored in the set is. That’s why great pains have been taken in this guide not to say “linear data” or “non-linear” data, as easy as that would have been. It’s not accurate. The values are the values.
Linear <-> Non-Linear Conversion
Let’s take a look at what happens when we convert between linear and non-linear encoded data. Technically this is referring to a “transfer function” which we’ll touch on more in the Color Management section.
When we start out with linear-encoded data created by the render engine, every value that was generated in the process is maintained in a large data set.
When we encode this data non-linearly, everything that’s not perceptually important is lost in the process of building the new smaller data set. As a result, we get a smaller, more efficient set for display.
Important: If we store this new non-linear smaller data set in a file like a JPEG, all of the data lost during the conversion is lost for good. When we feed this back into the system (say, use the JPEG as a texture in a material), then Octane needs to linearize it prior to rendering so that it plays nicely with the other data needed to do its calculations.
Linearizing doesn’t do any further damage, but there’s no getting back the deleted stuff from the initial conversion without returning to the original “source” file or having to re-render if we don’t have those. What it’s doing is simply putting the data that’s there into a format that can be more easily calculated. If data is missing, Octane either guesses at it or ignores it.
Now, this isn’t a dealbreaker with textures meant for the Albedo/Color/Base Color channel of our materials - after all, what we’re feeding in is still “the good stuff” as far as our brains are concerned. We do, however, lose some flexibility in how much they can be modified (how much light can be blasted at them, how much we can alter the hue, saturation, or brightness, etc.) before those textures fall apart, because the rest of the data we’d need to get accurate results for those calcs isn’t there.
Where it really hurts us is when we’re working with data channels in materials like Specular, Metallic, etc. If Octane doesn’t have all the data it needs in these channels, and/or the data lost is skewed more toward one end of the scale than the other (especially in Normal and Displacement), then the calcs are off, and we start getting crunchy artifacts and other visual issues. The same problem also occurs with compositing (AOVs) which is why we want to keep those files encoded in a linear fashion as long as possible.
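As a back-of-the-napkin Python sketch (with hypothetical numbers, just to show the scale of the problem) of why this bites hardest in channels like Displacement:

    # An 8-bit file can only store a height value in steps of 1/255.
    height = 0.123456                        # the "true" height the engine wants
    stored = round(height * 255) / 255       # the nearest value 8 bits can hold

    print(stored)          # ~0.1216 - off by roughly 0.002
    print(100 / 255)       # ~0.39 - at a 100 mm displacement scale, terraces ~0.4 mm tall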
What uses what?
Floats are used when encoding larger data sets in a linear fashion (render calculations and AOVs, mainly).
Integers are used when encoding smaller data sets in a non-linear fashion (data meant for display on screens or easily transportable files like jpegs and mp4s).
In files, the dividing line right now is 16 bits (per channel). Anything under 16-bit will generally use integers and be encoded in a non-linear fashion (there are exceptions, but not worth pointing out here). 16-bit files can either be encoded using integers (non-linear) or floats (linear). 32-bit files use floats and are always encoded in a linear fashion.
Takeaways
Larger data sets (16- and 32-bit) are used for render calculations and source files (like data AOVs) meant for further modification in another app where a high level of precision and maintaining all available data is important. This much data at this level of precision is most appropriately encoded in a linear fashion using floating point numbers (floats) to represent the values.
Smaller data sets (10- and 12-bit, sometimes up to 16-bit) are encoded in a non-linear fashion and stored as integers. These are used either for display files for higher bit depth displays (“HDR” TVs, film, and the like), or as intermediate files meant for modification in post or textures in cases where the extreme precision isn’t needed or available (color grading for example).
Small data sets (8-bit) are used for final display files on lower bit depth displays, and contain just enough data to look good as-is; they shouldn’t be further modified if at all possible. These are also encoded in a non-linear fashion using integers.
Converting larger data sets to smaller data sets is destructive, and the data lost when reducing files can cause artifacts and other issues if we try to use them in precision calculations again. Conversion is a necessary and expected part of the chain for final display files though, so these smaller files still look good to us as long as we don’t try to modify them too much.
Linearizing smaller data sets is necessary if non-linear-encoded files are used in render calculations (like a jpeg albedo texture map, for instance). All the data that’s there is maintained, but data lost during the original conversion can’t be recovered. While often fine for visual maps like albedo, loss of data becomes a problem with data maps like normal and displacement, or with AOVs.
Part IV
“Linear” In Color Management
One of the places the terms show up most is in color management, which is unfortunately a big, ugly, hairy mess right now with a lot of conflicting information being thrown around. Understanding what “linear” means in this context gets us a long way toward making sense of it all.
There are two other guides that go into a whole lot more detail about it. The Color Spaces Overview talks about why things are the way they are, and The Color Management for Octane guide shows how to implement those findings in Octane.
Very quickly, a color space is a set of parameters attached to a data set that defines how it handles and represents color. These are typically divided into two buckets: Working color spaces and display color spaces.
Working Color Spaces
Working color spaces are meant for larger data sets. As we learned in the last section, this means 16- or 32-bit (per channel) data that’s encoded in a linear fashion using floats.
Larger data sets in the 3D pipeline are used when the engine is generating super accurate render calculations, and for storing files that contain the super accurate results of these calculations that we’re going to use to archive, or further modify and process down the line (via compositing/post).
Linear BT.709, ACEScg, and ACES2065-1 are examples of common working color spaces we use in 3D. Because of the linear encoding and lack of data loss, converting files from one working color space to another is relatively easy to do and maintains fidelity.
Linear BT.709 (Also sometimes called “Linear sRGB”) is VERY DIFFERENT from sRGB or Rec.709. All three share the same gamut and white point, but because of the encoding spec (16- or 32-bit floats) and linear transfer function, it’s still capable of storing and addressing all of the original values created by render calculations (it’s not “limited” compared to something like ACEScg). This means Linear BT.709 still allows us to produce the same beautiful results as ACEScg or any other working color space, assuming we do the processing step to compress it to a smaller data set properly. It’s still a perfectly viable option for modern workflows.
Fun fact: Octane is a spectral render engine - meaning it works with and processes light spectra values instead of RGB ones - so it does not have a working color space (per se). It still works with large data sets, and the spectral values it produces can then be converted to RGB values in any supported working color space when exporting EXRs.
Display Color Spaces
Display color spaces are meant for smaller data sets. Again, as we learned in the last section, this means 8-, 10- or 12-bit (per channel) data that’s encoded in a non-linear fashion using integers. These display color spaces are typically meant for files used for display, and not further modification, especially with 8-bit data sets.
sRGB, Rec.709, DCI-P3, Display P3, and AdobeRGB are all common display color spaces meant for different targets (computer monitors, cinema screens, print, etc.). Because of the loss of data inherent in non-linear encoding, converting between display color spaces can often lead to issues. It’s best to go back to the original high bit depth source data and re-export that to a new non-linear data set using the new desired display color space.
ACES – as a system – does not have its own display color space. It has methods of generating lower bit depth data sets from its working space files (ACEScg, usually) that use common (standardized) display color spaces like sRGB. Some of these specifications lead to processing the files in such a way that they give them “that ACES look”. This is important to know so that we don’t get confused and set an Image Texture node in Octane to one of the ACES options when our file is really just sRGB.
Takeaways
- Working color spaces are intended for high bit depth “source” files encoded in a linear fashion using floats. RGB render engines also use these for calculations, but Octane uses spectral values instead and can convert them and save EXRs out in any supported (common) working color space (e.g. ACEScg, ACES2065-1, scene linear BT.709, others via OCIO).
- Display color spaces are typically intended for images and videos we’re viewing, and for storing lower bit depth files encoded in a non-linear fashion using integers (e.g. sRGB, Rec.709, DCI-P3, Display P3, AdobeRGB).
In File Types
Different file formats are meant for different purposes. Here are the more common ones that are used in 3D:
Still Formats
OpenEXR is the current gold standard for storing source files. This format was designed specifically for high bit depth (16- or 32- bit per channel) linear-encoded data using floats, and is therefore typically used with files that utilize working color spaces (ACEScg, Linear sRGB, etc).
EXR also supports different render passes/AOVs, has excellent compression options (PIZ ftw), and other VFX workflow-friendly features. On top of all that, it’s a free, open standard. It technically can store non-linear-encoded data using integers, but because EXRs aren’t supported in most players, it’s not a good use of the file type.
TIFF is another standard format that’s good at storing both high bit depth, linear- or log-encoded data using floats, and lower bit depth, non-linear-encoded data using integers. It therefore supports most working and display color spaces.
TIFF doesn’t have some of the fancy VFX features of EXR so it’s not used as much in the 3D world anymore, but it’s still widely used in the film and video industry (especially with log encoding), and also in printing thanks to CMYK support (which we’re absolutely not going to cover here).
JPEG and PNG are meant for display. They only support lower bit depth, non-linear-encoded data using integers. These are the main standards for web, mobile apps, and client proofs. These typically use display color spaces like sRGB.
JPEG is currently the most common option for client proofs and renders meant for the web and other places where an 8-bit format is preferred or needed, and an alpha channel is not required. It’s ok for material channels like Albedo, Color, Basecolor, or Diffuse where what we see in the texture directly translates to what we see on screen. Because of the smaller data set, we don’t have as much flexibility when it comes to hitting it with super hot lights or trying to modify the textures, but it’s usually fine under realistic lighting conditions. JPEG is not so great for data channels like Specular, Metallic, Bump, and especially bad for Normal and Displacement because of the lossy compression it uses.
PNG is best used in cases where a very basic alpha channel is needed, such as a web site, presentation, or mobile app. It’s worth noting here that the type of alpha PNG uses makes it a bad choice for VFX/CGI compositing software. If we’re going to send files to post, EXR or TIFF is the way to do it.
PNG can be forced to store linear-encoded integer data which we see in metallic/specular/bump/etc. textures in sets online. While common, the format wasn’t designed to do this (and isn’t very efficient at it), and often the files are just as large or even larger than a comparable EXR or TIFF. It’s best to avoid PNG in texture sets where possible.
Video formats
Video containers like MOV, MP4, MKV, and AVI (regardless of codec) can only encode values as integers, and therefore only handle display color spaces. Video containers and codecs that support 10- or 12-bit data are good enough for color grading as long as the footage is log encoded (again, out of scope for this guide), but aren’t good for things like data AOV passes (depth maps, etc) where 16- or 32-bit values yield more accurate results.
Important: Generally speaking, it’s a bad idea to render video formats straight out of a render engine for a number of reasons, but it mainly comes down to safety (in case of a crash or render errors in part of the sequence), and retention of quality (video files can only manage lower bit depths and therefore lose data).
If we’re planning on taking our files to post and/or want to archive source renders, an image sequence is the way to do this: 16-bit linear-encoded EXR or log-encoded TIFF for composited beauty passes to be graded later, or 32-bit data AOVs. These sequences are then recombined in a compositor/editor like Premiere Pro, Fusion, DaVinci Resolve, etc., and further processed and finished from there.
If we’re just trying to get a quick animatic to pass around, a 16-bit TIFF sequence, or even a JPEG sequence (if quality and color retention really doesn’t matter) can be dropped straight into an app like Handbrake or Shutter Encoder and quickly turned into an MP4.
If the entirety of a low quality animatic sequence renders in a few minutes, then it’s probably ok to roll the dice and go direct out to MP4.
Key takeaways
Use EXR for linear-encoded files where the highest quality and data retention is needed for data AOV passes or things like displacement maps. It’s also good for “original source” files used for archival purposes, or as intermediate files to convert into movies or JPEGs to send around.
Use TIFF for lower bit depth, non-linear-encoded files meant for applications where higher (but not the highest) quality output is needed. This can either be final files, high-quality texture maps, or 16-bit linear or log-encoded source files that will be further processed in post. TIFF is also a better option than PNG if we need an alpha channel for compositing.
Use JPEG, MP4, etc. for non-linear-encoded “final output” files in places where 8-bit data is sufficient or required (direct to web, mobile, client proofs, etc).
Use PNG for non-linear-encoded “final output” files where a basic alpha channel is needed, like for web, mobile, presentations, etc (not for compositing in an editor). If an alpha channel is not needed, use JPEG (or something like WebP or whatever the CMS is using).
Video formats are all non-linear-encoded.
Don’t render directly to a video format unless it’s a throwaway file meant for a quick animation check that takes very little time to render. Instead, use a TIFF or even JPEG sequence and compile it in something like Media Encoder/Handbrake to convert that into MP4 for clients or log-encoded intermediate MXF/MOV files for video editing (if needed).
In Displays
As of 2025 (and probably for the foreseeable future unless we harness alien technology or something), computer monitors, phones, TVs, etc. can only display lower bit depth data (8 or 10 bits per channel as of this writing).
As we learned a few million words ago, displays expect non-linear signals sent to them in a way that uses a display color space they’re expecting.
This matches up well with our files, because the small data sets we’re using are already encoded in a non-linear fashion (they had to be to keep the files small but still retain the perceptually-important data).
Therefore, when we have a file that uses a display color space (say sRGB), it can become a non-linear signal that’s sent to our monitor. If the monitor supports sRGB, that means it’s equipped to take that signal, properly linearize it, feed it to its diodes, and blast light out to our eyes that looks good to us.
Important: If we try to directly display linear-encoded data without re-encoding it first, it ends up looking super dark. This happens because the display is expecting non-linear-encoded values, so when it goes to convert the linear values using its non-linear transfer function, it “pushes the values down” on the scale, giving that signature fml look.
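Here’s that arithmetic as a tiny Python sketch (approximating the display’s decode step with a simple 2.2 power curve):

    # Send a linear mid-gray (0.5) straight to the display; the display assumes it's a
    # non-linear signal and "decodes" it anyway, pushing the value down the scale.
    signal = 0.5
    emitted = signal ** 2.2

    print(emitted)   # ~0.22 - the screen emits far less light than we intended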
Key takeaways
A display expects non-linear signals using a display color space that it supports.
Our software needs to be set up to feed our displays properly encoded data, or it’s going to look like crap and cause hours of troubleshooting.
Part V
“Linear” In Octane
Overview
Now that we have a pretty good idea of what the terms mean, let’s have a look at where they show up in Octane. The two areas where this impacts us the most are picking colors/values, and importing textures. We’ll also dig a little into the confusion that arises when it comes to remapping.
The diagram above shows an overview of the Octane workflow.
First, we load in linear- and/or non-linear-encoded textures into our materials and other objects. We also pick individual values for our colors using either a linear value picker or non-linear color picker.
Then we hit render.
Linearizing Revisited
At this point, all non-linear values - whether in textures or picked colors - are converted to linear values (linearized) before any calculations can occur.
Important: As we learned earlier in this guide, linearizing doesn’t change the actual values (the red we picked still looks like the red we picked, it’s not washed out or dark or anything), it just re-encodes them in an appropriate way to do physical calculations with.
So picked colors are handled automatically, which is nice (assuming our color settings are set right), but where we can really run into problems is in our image textures if they’re not set up right.
If our image is already linear-encoded, but the Image Texture (RGB Image) node is telling Octane that the image is sRGB (which it does by default), it’s going to run the already-linear values through the wrong transfer function, and that’s going to cause errors in our data channels.
The Rest of the Process
These linear-converted (linearized) values are combined with other linear-encoded data that we feed in, plus the linear-encoded data created by the engine (lighting, etc) to build a final linear-encoded data set (a render).
From there, it goes through a series of conversions via matching functions, and gets conformed to our displays via a non-linear transfer function. That’s how we see what we’re doing in the Live Viewer.
We can then export linear- or non-linear-encoded files that we learned about in the last part of this guide.
In Color/Value Pickers
As our host apps are dragged kicking and screaming into the modern super fun wide gamut/higher peak luminance world, they’re having to change up their color pickers to try to keep up with all the new standards emerging.
We’re not going to go into all that here – it’s just something to be aware of – we’re just going to explore the higher-level differences between linear and non-linear pickers.
Non-linear Color Pickers
Traditionally in the computer graphics world, we’ve dealt with choosing colors using non-linear color pickers based on the sRGB/rec.709 color space. In the UI, these pickers use color models like RGB, Hex, and HSV that work in integers, degrees, and percentages to be more human-friendly.
The values generated by these pickers are generally designed to match up with a standardized sRGB monitor (though this is changing due to our shift to more capable displays). If we pick H:0, S:0, V:50% using our HSV sliders, we expect to see a 50% gray that looks like it’s smack in the middle of what we perceive as a “brightness” scale to our non-linear brains, or perceptually 50% on an sRGB display. If we nudge the slider toward the dark end, it works in even, predictable steps and it looks like it’s getting darker at a rate we expect. Same goes for the brighter end of the scale. It’s easy for us humans to work this way.
Knowing what we now know about values being linearized prior to any calculations: when Octane is getting ready to render, it needs to understand that our picker is speaking in terms of sRGB, which means that a value that looks to us like a 50% gray in the sRGB world actually needs to be converted to something more like 0.21 on a linear luminance scale for correct physical calculations.
Again, there’s no visual difference between sRGB 50% and 0.21 on the linear luminance scale - both look like middle gray to us - it’s just a matter of how the value is encoded.
Assuming we used relatively flat, plausible lighting, when our “50%” gray is converted to a linear 0.21 luminance value, processed, and displayed on our sRGB monitor, our non-linear vision system (biased brains) look at it and say “yep, that’s the same 50% gray I picked, so all good in the hood”.
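For reference, the sRGB transfer function pair looks like this in Python (the exact code inside Octane or our other tools may differ, but the sRGB math itself is standardized):

    def srgb_to_linear(v):
        # sRGB "decode": non-linear signal -> linear light
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(v):
        # sRGB "encode": linear light -> non-linear signal
        return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

    print(srgb_to_linear(0.5))   # ~0.214 - our picked "50% gray" as a linear value
    print(linear_to_srgb(0.5))   # ~0.735 - why a linear 0.5 ends up looking washed out

That second print is a sneak preview of the washed-out 0.5 problem we’ll run into a little further down.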
So non-linear pickers are great for choosing perceptually pleasing colors, but what happens when we try to use one to adjust a physically-calculated linear property like specular reflection?
If we set our HSV model to V:50%, we’d expect it to be about half of the max glossiness allowed, but when our non-linear 50% gets linearized, it turns into ~0.21, and as a result, our material is only about 20% of the max. It looks really dull and we’re sitting there scratching our heads. We could either keep nudging the slider and try to eyeball it, or we can just take the easy route and use a linear value picker.
Floating Point Value Pickers
Floating Point value pickers have scales that show us values from 0-1 (most of the time). Floats are virtually infinite, but in order to keep the UI sane, Octane puts in stops at the top of each scale because an infinite slider would be tough to work with (and stupidly high values don’t gain us anything anyway).
Important: A linear value generator produces values on an absolute scale that is independent of color spaces and does not cater to our biased brains. This means the values will be correct for physical calculations, and be harder to predict when changing visual values until we learn how to work with it.
Let’s say we want to see a middle gray in our albedo channel. If we used the default Octane RGB picker (which is linear), we might be tempted to put 0.5 in all three channels and call it good.
It’s not good though. The value we picked is 0.5 on the linear scale, which does not cater to our visual biases by doing any kind of conversion. 0.5 is a linear 0.5, not an sRGB 50%, so when that goes into the Albedo channel, and then through the whole pipeline and comes out the other end on our monitor, it appears washed out to us, because it’s closer to a perceptual 73% brightness value in sRGB-land.
If we want what appears to us like a middle gray, we need the linear value to be more like 0.21 as we saw in the last example. If we’re using a linear picker for our color values, we have the opposite problem we had in the last section. When we go to make our visual color darker or lighter – because of the conversion – we need smaller nudges in the dark end of the scale to see a difference than we do in the brighter end of the scale, and that’s super annoying until we get used to it.
The flipside is that if we want our specular property at 50% of its max, we load up a linear picker, type 0.5, and it’s exactly 50% of the max. As we move the slider in either direction, we get expected results and the material becomes more or less glossy in even intervals. No conversion, no heartburn.
Picking visual colors on a linear scale
Now, we can opt to pick visual colors using linear sliders, and this is actually a viable and preferred workflow in production environments where there are different color space targets, different displays, etc.
If we pick our colors using a linear picker, they’ll be the same regardless of where they’re used, and they won’t fall prey to conversion errors either between different systems and color spaces, or even internally in Octane before it goes to render.
The disadvantage is that it’s a little more challenging to work this way at first and has a learning curve (a non-linear one, ha. ha.) coming from the 2D graphics world, but depending on where we want to end up as a 3D artist, it may be worth it.
Takeaways
We always want to pick our mathy values (lights, values meant for shader calculations, etc.) using a linear scale. Picking these on a non-linear scale is frustrating because it’s hard to predict what we’re going to get.
If we’re relatively new to 3D from a 2D workflow or in a situation where we’re solo artists or on small teams, we’ll probably want to start out picking our visual colors (“I want this material to be green”) using a non-linear picker. Non-linear pickers cater to how we perceive color, so the values look more evenly distributed, even if they’re physically not. If we want what looks to us like a middle gray, we move the V slider to 50%, and presto.
If we’re in a larger production environment, or if it just ends up making sense to us to do so, we can also pick our visual colors using a linear scale; we just need to re-think how we move sliders around and re-adjust our muscle memory and expectations. We get the same colors visually, the sliders are just in different places that don’t immediately make sense to us (0.21 instead of 50%, etc.)
Whether we pick from a linear or non-linear scale doesn’t alter the outcome (Octane will convert the values we pick on a non-linear scale into ones the engine is expecting) - it’s just easier or harder for us (or our teammates) to work with depending on the situation we’re in.
In Material Channels
Material channels have built-in controls to change their attributes. Some of these are linear and some are non-linear. At this point we can probably guess which are which, but let’s go over it anyway.
Linear Controls
In most Octane materials (Diffuse, Glossy, Metallic, Universal), there’s a linear 0-1 slider that controls how much contribution that channel has to the entire material. This is great in channels like Metallic, Roughness, Coating, or others where we want to know that if we set it to 0.1, we’re getting 10% of the max contribution, or 0.85 will mean we’re getting 85%. If we want it half as shiny or rough, we just halve the value. Cool.
In Octane Standalone, it’s just a 0-1 slider next to the channel’s name. In C4D it’s sometimes called “Float”, because it’s basically an internal linear float-to-grayscale texture that’s driving it. Other DCCs probably implement this differently.
Non-linear Controls
There are also channels - notably Albedo/Color/Diffuse, Specular, and Transmission - where we’re going to want to manually pick a color sometimes. If we switched our color picker to the C4D-native one (or Octane’s HSV picker), we’re now looking at non-linear values.
This is great for the Albedo/Color/Diffuse channel. We pick colors on a scale that makes sense to our eyes and there’s a nice perceptual distribution of values. Cool.
If we had kept the default Octane RGB picker, we’re picking our colors on a linear scale, which works, but isn’t as intuitive as a non-linear one.
There is also a linear contribution float slider beneath the RGB picker in the C4D implementation of Octane. This gets completely overridden by the RGB picker which in turn is completely overridden by the texture field. Most of the time it’s best to leave it at zero.
Where Things Get Ugly
Important: In most Octane material types (everything except Standard Surface as of this writing), altering a channel’s color ALSO affects the channel’s contribution.
This is especially difficult to cope with in the Specular and Transmission channels.
Let’s take glass as an example. If we have our color picker set to an HSV model and pick H:180, S:100, V:100, we get a bright blue, very translucent glass. The Value slider (V in HSV) controls how perceptually dark the color is, and also the contribution of the transmission channel itself.
If we reduce the Value slider, it darkens the glass, but also makes the material less translucent. That makes sense if we think about it - sunglasses block more light than reading glasses, and the lens appears darker to us.
The thing is that we’re working on a non-linear scale now, so if we were to take our V slider from 100% to 50%, we’d expect the glass to block half the light. Nope. It’s actually now blocking about 80% of the light because of the conversion from non-linear values to linear ones for the light calculations.
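A quick sanity check in Python (approximating the conversion with a simple 2.2 power curve):

    # V at 50% is a non-linear value; it gets linearized before the light calculations.
    transmitted = 0.5 ** 2.2      # ~0.22 of the light actually gets through
    print(1 - transmitted)        # ~0.78 - roughly 80% of the light is blocked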
Most of the time we can just wing it and find a value that looks right, but we just have to know that smaller adjustments at the bottom end of the scale have a much larger visual effect than larger adjustments at the top end.
If we’re using a linear picker for the color, then the contribution portion makes more sense, but the tinting of the material is more difficult to tweak and pinpoint unless we’re used to working this way.
Standard Surface
The Standard Surface material is unique in Octane in that it has a separate linear contribution control which is called weight.
Unlike the other material types, weight is not overridden by the other controls, so we can pick our color using the non-linear picker, and then choose the contribution using the linear weight slider. Both things make sense. That’s one of the key advantages of Standard Surface.
If we get the same blue glass going in a Standard Surface material and keep the color at 180/100/100, but reduce the (linear) weight slider, we can see a more perceptually even transition as we lower it without having to noodle with the color value every step.
Image Textures
When we feed image textures into material channels, we also need to know whether they’re encoded in a linear or non-linear fashion. In this case, knowing that is not enough, though - it’s much more important that we know which color space each texture uses so we can tell Octane so that it can handle it properly.
“Handling it properly” in this context means linearizing the data if it’s not already linear-encoded, and that’s a crucial step in the process to make sure our materials look the way we intended.
Important: This is one of the reasons why in Cinema 4D, we need to use an Image Texture node (called RGB Image in Standalone and other DCCs) and not C4D’s native Bitmap shader. Octane will properly convert textures, the C4D shader won’t.
Most texture sets include textures meant to be piped into the Albedo, Color, Base Color, or Diffuse channels, and those files are typically non-linear-encoded JPEGs or PNGs (or something better like TIFF if we’re lucky). When we load these into an Image Texture (C4D) or RGB Image (Standalone or some other DCCs) node, we need to make sure the color space is set to sRGB unless we know for sure that they’re using a different color space.
Textures meant to be piped into the other channels (Roughness, Specular, Metallic, Normal, etc.) are typically linear-encoded. On a good day, they’re EXR or TIFF, but more often than not they’re linear-encoded PNGs which aren’t as good (especially in displacement). Either way, these should always be set to non-color data in the Image Texture node to indicate that they’re linear-encoded, and that the “color info” is used as calculation data (e.g. surface normal directions), and not actual color values.
If we run across textures that were encoded using an ACES color space (these would be EXRs or possibly TIFFs that we know are using ACEScg, not JPEGs or PNGs which are more than likely sRGB), we can set the Image Texture node to that instead.
We’ll more commonly encounter actual ACES files as a freelancer or solo artist if we’re using Octane to do compositing and are bringing in EXRs from other apps. If we’re in a studio using an ACES pipeline, we may well see ACES textures for our materials.
Important: The default setting of Linear sRGB + legacy gamma should never be used. It’s super confusing. Display color spaces have their own transfer functions, and shouldn’t be approximated with “legacy gamma”, so we need to pick the correct color space (usually sRGB or non-color data).
Side note: There’s no functional difference between the non-color data option and the Linear sRGB + legacy gamma option IF the gamma is set to 1, but it’s easier to remember to just set the dropdown to non-color data and avoid the whole gamma PTSD.
In fact, when it comes to color spaces in general, it’s best if we just forget the word “gamma” ever existed, and it will simplify our lives and decision trees quite a bit :).
Texture Generator Nodes
All generator nodes in Octane begin life generating linear-encoded data.
Some nodes like Tripper, Wave Pattern, Wood Grain, etc. were meant to be piped into the Albedo/Color/Diffuse channel and therefore show us non-linear controls to choose colors (if our color pickers are set that way). This is similar to how it works in the material channel interface.
Other Octane nodes like Float Texture (“float to grayscale” in Standalone), or Gaussian Spectrum are meant for calculations or remapping, so they use linear value sliders.
Nodes intended for the Normal channel like Flakes and Color Squares are meant to output linear RGB values, because the Normal channel is using these values as straight-up data to control how the shader normals are being altered. They are not meant as visual patterns for the Albedo channel, so they don’t offer user-friendly RGB controls for picking colors.
That said, if we want to make a 1960’s-era bread bag, we can run the flakes texture into an Octane Gradient (Gradient Map), ramp the scale way up, and recolor it to suit our needs.
There are also a few cases like the Gradient Generator (NOT the Octane Gradient/Gradient Map) or Color Correction node that have a Gamma slider. These nodes start out generating linear data, but then the internal “gamma” function transforms that into non-linear data before outputting it.
So yeah, we have to talk about Gamma for a minute 😡
Gamma
Even though it’d be better if this term fell completely out of our collective consciousness, it still shows up everywhere, including in Octane in some places, so we still need to address it.
When it shows up in our software, it’s essentially a function for remapping values. It basically boosts the lows (darker values), but in a simple way that we don’t really get much control over aside from “how much”.
Important: A “gamma” option should not be used in color management because color spaces work with transfer functions specifically created for them instead. In Octane, we mainly run across this in the Image Texture / RGB Image node where it defaults to “Linear sRGB + legacy gamma”. Again, don’t use this. Our monitors are expecting sRGB signals, so we want to use sRGB to make sure it looks right.
That said, in nodes like Gradient Generator or Color Correction, the “gamma” slider can be used as a quick way to make an image or a grayscale gradient more visually punchy. It still won’t produce a perceptually even scale of grays if that’s our goal because it’ll be processed by our monitor’s transfer function incorrectly, but it can be used artistically to just add a little “pop” to our textures when we don’t have better options (or time to employ them).
Gamma=1 is the same as linear. If we’re in a situation where a “gamma” slider is present and we have the option to set it to 1, we can do so to produce linear-encoded data, or “turn off” the effect.
Gamma=2.2 is pretty close-ish to the standard sRGB transfer function, so it will make the gradient in a Gradient Generator node look relatively even, but still not “right” (we’ll explore this more in the next section).
Otherwise, higher gamma values create a more contrasty effect, and lower ones produce a less contrasty effect if we’re using it in something like a Color Correction node.
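To see just how close that 2.2 curve gets to the real sRGB transfer function (and where it drifts), here’s a quick Python comparison (output values are approximate):

    # Simple 2.2 power curve vs. the piecewise sRGB encode, at a few linear values
    def srgb_encode(v):
        return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

    for v in (0.002, 0.05, 0.18, 0.5):
        print(v, round(v ** (1 / 2.2), 3), round(srgb_encode(v), 3))
    # 0.002 -> 0.059 vs 0.026  (noticeably different near black)
    # 0.18  -> 0.459 vs 0.461  (nearly identical through the midtones)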
In post, we get a lot better control by using curves, but that’s the topic of a whole other guide.
Takeaways
All values are linearized prior to rendering
Non-linear-encoded textures can be used in materials (mostly in the Albedo/Diffuse/Base Color channel), but Octane needs to know which color space they were using so it can properly linearize them and we end up seeing the result we were expecting.
Linear-encoded textures should be used for data channels like Roughness, Metallic, Normal, etc. Octane needs to know that they’re already linear-encoded so it doesn’t try to convert them. To do this, we have to set the Image Texture (RGB Image in Standalone) node’s color space dropdown to non-color data. We may end up coming across texture sets comprised of all JPEGs which don’t support linear encoding, in which case we have to set these to sRGB, but if there’s any way we can go back to the source and get them as linear-encoded files, we’d be a lot better off.
Don’t use “gamma” in a color space context.
Annoyingly, In Gradients
As if this wasn’t confusing enough, “linear” also appears in both the Gradient Generator and Octane Gradient/Gradient Map, and of course it means different things in those two different nodes too. Fun times!
Gradient Generator Node
“Linear” in the Gradient Generator means the gradient will just go left-to-right in a straight manner (we can use a Transform node to make it go in other directions, but it’s always a single linear direction). The other options are radial, angular, polygonal, or spiral patterns.
The gray value scale is covered under “Gamma” here. Most of the time it’s best to change this to 1 which “turns it off”. We’ll explore why soon.
Octane Gradient (Gradient map) Node
“Linear” In the Octane Gradient/Gradient Map refers to the knot interpolation. It means there will be a smooth blend of values between gradient knots. The other options are Constant (step), and then Smooth Step and Hermite which are just different smooth blending interpolations.
There’s also a Linear button in the Octane Gradient in C4D which attaches a Saw Wave node. This was needed in earlier versions of Octane, but can safely be ignored now that we have a Gradient Generator node which is a much more user-friendly way of feeding visual color data into the Octane Gradient (remapping) node.
How this node remaps values is found in the interpolation cspace dropdown: Physical (linear 🙄), or Perceptual (non-linear).
All this will hopefully make more sense after reading the next section.
A Quick, Practical Guide to Visual Gradients
The whole gradient debacle is going to need its own guide, but very quickly here:
If we want a black-to-white gradient to appear in the Albedo channel, we need to use a Gradient Generator node. This starts out producing values on a linear scale, but then re-interprets the values on a non-linear one by way of the Gamma slider. This isn’t a great way of doing it, but it’s quick and easy (since it’s the default behavior of the node) and might be good enough.
If we want a black-to-white gradient to appear in the Albedo channel and want it to best match up to our display, we use Gradient Generator and set its Gamma to 1 (therefore making it put out linear values). We then run the Gradient Generator into an Octane Gradient node (or Gradient Map in other DCCs), and set the Gradient Map’s Interpolation cspace option to Perceptual. This will remap the values to a non-linear scale that lines up better with our monitor’s transfer function.
We can see the difference in these two methods in the second and third row of the illustration above. Gamma 2.2 in the gradient generator leans a bit more on scooting darker values toward the left, while the Perceptual in the gradient map node method does a better job of spreading the grays evenly from end to end.
We can then select the Interpolation (smoothness between knots) from the dropdown in the Gradient Map (Linear, Smooth Step, and Hermite) to fine-tune the look.
As we can see above, Smooth Step is a little more contrasty overall. Hermite is close to Linear, but does a little better job evening out the grays on the dark end. At this point, it’s a matter of taste and what we’re using the texture for.
If we want color in our gradient, we should set the Gradient Generator to Gamma=1 (output linear values), then set the Gradient Map to Perceptual.
As we can see above, if we feed non-linear values (Gamma=2.2) into the Octane Gradient, it will remap them to colors using a linear scale and it will appear incorrect. Also if we feed linear values in and leave the remapping linear (Physical), it’ll be different bad.
Similar to the gray example above, the trick here is to let the Octane Gradient do the non-linearizing of the values as it’s recoloring them. That’s done by making sure the Gradient Generator is at Gamma=1, and setting the Gradient map’s Interpolation cspace to Perceptual. Like the grayscale gradient in the last section, this will match up with what our monitor is expecting and give a perceptually smooth value ramp.
Then we can choose which of the knot interpolations look the best using the interpolation dropdown. In this example above, that’s either Linear or Hermite (Hermite smooths out the sharp transition around 50%, so that’d probably be the winner). Smooth Step gives kind of a fluting look that may not be desirable here.
Yes, it’s confusing, but it should be a lot less so now that we understand what’s going on under the hood and what type of scales our values are on.
Wrap Up
If you made it this far, first off, congratulations :) You should have a better understanding of what the terms “linear” and “non-linear” mean and how they’re applied throughout Octane. It takes some practice, but it eventually starts to make sense once you’ve thought through a few workflows as you build them out.