Study Series
Version 1.0, Updated February 2026 using Octane 2026.2 and Cinema 4D 2026.3
~11,000 words, average read time: 60-90 min
About this guide
This is the first in a series of studies that takes many of the concepts learned in other guides and puts them to practical use. In this guide we’re going to look at a lot of the factors that make something look photorealistic. This guide strives to be DCC - or Digital Content Creation (app) - agnostic. C4D was used when making it, but the concepts should transfer over to any flavor of Octane you may be using (Blender, Houdini, etc.).
Downloads
Everything created using this guide can be 💾 found here
This guide is also available in 📄 PDF format here
Introduction
This isn’t a walkthrough. It looks like a walkthrough and quacks like a walkthrough, but it’s a lot more than that: It’s a critical thinking training tool.
We’re not here to reproduce a photo.
We’re here to critically analyze a photo, learn what to look for and which questions to ask, and then think about how we might reproduce the conditions that make the photo look the way it does. Once we have that under our belts, we can use this information to make future renders more plausible.
Investing our Time Wisely
Because we’re not here to reproduce this photo, we shouldn’t waste time tweaking the camera angle, composition, and shape of the objects to line them up exactly. 80% is good enough - we want the light to kind of reflect the same way, the shadows to sort of fall the same way, and the materials to be believable enough to pass the sniff test, but we shouldn’t stress over whether the striations in the wood line up exactly the way they do in the photo, or whether the elephants are perfectly sized to one another, or anything like that.
Overall Tenets
Let’s get a framework going here that we can take with us on this journey.
Tenet #1: Real World Values
We’re making a realistic scene, so we want physically plausible values for everything. This includes lighting values, size of our objects, and material and camera properties. Octane is a physics simulator - the more realistic the values are that we feed into it, the more realistic the output is going to be.
Tenet #2: Avoid Conflation
Conflation is when two or more things combine to affect our results. What we’re after here is setting as many things as possible in a predictable way so that when we change one component, we can see what it does on its own. What we want to avoid is working in conditions where external factors like crazy reflections, distant light sources, nearby objects, tone mapping, off-kilter render settings, or other things are influencing our decisions about material properties or lighting values, only to have to redo them under final conditions.
Tenet #3: Perfection is the Enemy
When things are too perfect looking, they break the suspension of disbelief and they look “off” to us. If reality is the goal, we want to introduce imperfection wherever possible, even if it’s just a small amount. At first blush, this may seem counter to Tenet #1, but when was the last time you saw a mathematically perfect real-world object? Great pains could be taken to get close, but something will always be just a smidge off.
Using Generative AI Intelligently
As of this writing (and probably for the near future as people get used to the idea and learn how to integrate it), AI is a hot button topic, especially in our industry. It’s not going away, and if it’s used right, it can save us hours of time while we’re still in the driver’s seat.
In this exercise, we’re only going to use text-based generative AI models (LLMs like ChatGPT, Claude, Gemini, etc.) to help us research things we otherwise may not know. We’re not going to use image generators to make textures, AI tools to alter our renders, or anything else that takes control away from us.
Important: AI isn’t needed for this exercise, and if you have a strong visceral reaction to it and don’t want to use it, nothing in this guide will change if you just ignore those paragraphs. We’re using it here as a helper tool like Google search, not a replacement for creativity.
Most LLMs have global settings that allow you to change how they interact with you. Most of them are set to “confident, lying sycophant that apologizes profusely when challenged” by default because that’s what companies think people want (maybe they’re right, who knows).
We can give it instructions to be straightforward, give us confidence ratings on answers, say “I don’t know” when we’re asking something outside of its training that it can’t otherwise find, and not change its answer if challenged. This will save us a lot of time going down wrong paths because it’ll no longer be afraid of hurting our feelings. We can also tell it to take on a professional tone so it’s less irritating while we’re learning complex stuff like rendering.
When Should We Use AI?
We need to think about what exactly it is we’re asking for.
If we upload our source photo to ChatGPT or another AI that accepts an image as an input, we can ask it any number of things that might be helpful:
- What are the objects made out of?
- How large are the objects?
- Which colors should I use in the albedo channel of my materials?
- What’s the lighting like?
And then we can dig even deeper:
- What type of wood is the small elephant made from?
- What are the dark markings on the smaller elephant?
- How was the elephant likely made?
- What are some possible camera settings?
What it’s good at
Some questions are directly related to the image itself and we don’t really even need AI for them - we can ask for a color palette, or we can use an eye dropper in an image editor and get color values.
Some of these questions have pretty definitive answers because there are a lot of points of reference on the Internet and they all more or less agree with each other. The odds of the wood on the smaller elephant being beech are very good, since there are tons of photos of closeups of beechwood patterns. Most AIs probably have been trained to recognize patterns for things like wood types. We can do a quick image search to find photos and verify.
What it’s okay at
Some questions can be a little more arbitrary: There isn’t much to go on for scale in the scene, but AI can figure out that we’re looking at children’s toys, find similar ones, and hazard a good guess as to how large they are. It may not be accurate if it can’t find this exact toy set, but it’s probably not going to guess something like 2 meters. If it comes back that the small elephant is 6-8 cm tall, we can probably safely use that value.
What it sucks at
…And some are very difficult for AI. Lighting is one of those things. Sure, there are some standard “best practice” ratios and whatever, but taken as a whole, there are infinite ways to achieve infinite results that are all extremely subjective.
Documentation on standard setups is pretty good, so we could ask “what kind of lights are used in an average studio setup?” and get a reasonable answer, but “break down the lighting in this scene” is a whole other level of complexity, especially given our propensity to doctor images. Was the photographer using two or three lights? or four? or one and an open window? Were the lights different temperatures, or is there something else in the scene we can’t see giving it a color wash, or was the image recolored in post? Were some of the images the AI trained on actually renders where people used unrealistic lighting? There are some clues here and there, but nothing definitive, and no way for AI to have good information.
So, you know, it makes stuff up, confidently.
Don’t trust, verify
We need to take AI’s guesses (and they are just guesses, no matter how confident it sounds) with a grain of salt. It’s a connection-forming engine, not a database, and not a source of truth, so it doesn’t know anything, but it can help us form connections we may not have arrived at on our own.
Always always always double-check. And with that out of the way, let’s start on what we came here for.
Part I
Analysis
Scene Analysis
The first thing we’re going to do is take a close, objective look at our image and pick out some crucial details that should help us make better decisions when setting up our scene.
The reference photo we’re using is this one found on Pexels, shot by Antoni Shkraba Studio.
Scene Composition
We’ve got a simple still life with a few wooden toys on a very basic background. Some of this may seem obvious, but it really helps to give it a good once-over and point out all the elements we see so we don’t gloss over anything important.
- There are seven nested curved wooden shapes that all have the same thickness. It’s probably a rainbow set like this rather than full rings, since it’d be easier to position them the way they are in the photo. Let’s refer to them as ‘arches’ going forward.
- The main subject of the photo is a pair of wooden elephant toys that are perched on the tallest and third-tallest arches. They appear to be made out of different kinds of wood, or at the very least they’ve been treated differently. The small elephant is the “hero” because it’s in focus, while the rear elephant is slightly out of focus.
- There’s a pinkish wall or backdrop, and what looks to be maybe a blue floor or table that we can barely see between the rings.
- These objects were likely designed to be played with by children, so they probably aren’t very large or heavy. The one in the link above is 14” x 7” x 2” (~35.5 x 17.75 x 5 cm) so that’s a good reference point for sizing all the objects in our scene. We can also ask AI to guess at it and it should come back with similar values.
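The inch-to-centimeter conversion above is easy to sanity check (1 in = 2.54 cm exactly); a quick throwaway sketch:

```python
# 1 inch = 2.54 cm exactly; verify the 14" x 7" x 2" toy dimensions,
# which match the ~35.5 x 17.75 x 5 cm rounded values above.
IN_TO_CM = 2.54

for inches in (14, 7, 2):
    print(f'{inches}" = {inches * IN_TO_CM:.2f} cm')
```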
Environment & Lighting
Let’s have a look at the lighting. Often it helps our analysis to desaturate the image and boost the contrast a bit to call out the highlights and shadows.
- This was likely shot in a studio or controlled environment, since the lighting is overall very even. If we were outside or in a toy store or something, there’d be several different random light sources and the image would have a very different look. We can verify this later by dropping in a busy HDRI to simulate that, but for now all signs point at a studio.
- There’s a strong key light source coming from the high front right (specular highlights on the back of the elephants point to this).
- There’s also a soft fill light on the other side - the fronts of the elephants and rings on that side have weak specular highlights (kind of a soft glow, not strong and sharp like the other side). If we look at the elephant’s feet, we can see shadows on both sides, indicating lights coming from both directions.
- There doesn’t seem to be a rim light per se, but there might be some sort of overhead or ambient light in the room.
Camera
Fortunately, this image came with some metadata embedded, and Pexels (the site we’re getting the image from) displays some of this for us which we can see by clicking the “More Info” button at the bottom next to the Share button.
- It was shot with a Sony A7 Mark III, which is a “full frame” (36x24mm) sensor. That’s great - it means we don’t have to deal with any annoying focal length conversions or changing sensor sizes in Octane.
- The focal length is at 75 mm which is good for portraits and isolating smaller subjects like these toys.
- The aperture is f/7.1 which at 75mm produces a relatively shallow depth of field. The smaller elephant, yellow arch, and most of the orange arch are in focus, and it starts going out of focus at the red and green arches.
- ISO is at 640 which is reasonably clean on this sensor, but if we zoom in, we can see some grit. It does not appear that denoising software was used (which would alter the image), or if so it was done with a very light touch.
- Shutter speed doesn’t matter to us right now because we’re not doing animation.
- It was edited with Lightroom (so it likely had at least a little post work done).
AI can give us a pretty good guess based on the characteristics of the photo. Camera gear is typically a lot easier to reverse engineer than lighting because there are a lot fewer variables. That said, ChatGPT thought this was shot with a 50mm @f/2.8, not a 75mm @f/7.1 (which we know it was from the Pexels metadata). Both of these options would be reasonably close though, it’s not like it’s suggesting a fisheye or super telephoto, but again, it doesn’t know, it’s just guessing.
Materials
It helps a lot here that our source file is very large (6000 px) - this allows us to zoom way in and look at some of the details of the materials.
- The arches are made from a light wood, and the caps are painted or stained in a way where we can still see the end grain pretty clearly. The material is very matte.
- The front elephant appears to be carved from a single block of wood that’s darker than the arches (either due to staining or just a different wood). It’s fairly matte, but has some specular highlighting. The eye looks like it was added with some sort of wood burning tool or a marker after the carving was finished.
- The rear elephant is also carved from a single block of wood, but a much different one than the other objects. The eye here looks like it’s a natural knot in the wood, or plays off one. The finish is a little glossier than the other elephant.
- There’s a warm yellowish cast to the overall image which could be the result of the camera’s white balance setting (or altering of it in post), actual light temperatures, natural color of the materials, and/or something else.
In all the analysis done in this section, AI is probably the most helpful with materials. Right off the bat, it guessed that the front elephant and arches are beech, and the rear one is pine. This checks out after a few searches and comparisons. More than that, though, it can be used to deep dive into learning about basic woodworking concepts and how trees even grow so we can get a better idea of which textures to use to best simulate what we’re seeing. We go into this more in the Procedural Wood Deep Dive guide that’s kind of an optional companion to this one.
Moving on
That’s enough analysis. We’ve studied our source image and picked up on a few things (both concrete and more conceptual) that will help inform our choices going forward. Let’s crack open our DCC and get this show a-rollin’.
Part II
Scene Setup
Initial Render Settings
When we’re doing lookdev, our strategy is to get a reasonably accurate image as fast as we can within the confines of our hardware so we can iterate quickly.
Part II of the Custom Defaults Guide has a good set of initial Kernel settings for this, and the Kernel & Render Settings Guide explains what all of them do and why the ones in the Custom Defaults guide were chosen as a starter set.
This particular scene shouldn’t be super heavy since there are no caustics, refractive materials (glass/sss/etc), volumes, or other particularly brutal calcs, but it’s always a good idea to build habits around efficiency.
Important: In this exercise, we’re NOT going to use ACES or AgX processing (“tone mapping”). By using straight-up sRGB, we can train ourselves to stick to Tenet #1 by using low intensity real-world values. It’s going to look great: Real values in a scene like this will not blow out and clip, and we don’t need the extra contrast (or want the color shifting) that’s part and parcel of ACES.
We’re going to want to use the Path Tracing kernel for this since we’re after realism (but not caustics, so no need for the Photon Tracing kernel). If we were to reset the Path Tracing settings to the defaults, we’d only need to tweak a few things:
- Set the Max Samples to 256. This is enough to see results without needlessly running the GPU.
- Leave the GI Clamp at its default.
- Set Parallel samples to 32 (or higher depending on the GPU).
- Turn Adaptive Sampling ON, with the Noise threshold at 0.02 and Expected Exposure at 0.
Set the Output Size
First thing we want to do here is change the pixel dimensions of our render so they’re in the same aspect ratio as the original photo. Fortunately the math is super easy on this one - it’s 4000px wide by 6000px tall, so it’s in a classic 2x3 (or 4x6) aspect ratio. We don’t want to waste render cycles trying to output something that large, so we’ll stick with the 2x3 aspect ratio and make it something more reasonable like 800px wide by 1200px tall. If we’re on a laptop, we can go smaller (keeping the same ratio), or take down the scale in the Live Viewer. If we have some beefy GPU and a large screen with tight resolution, we may want to go a little higher.
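The aspect-ratio math above can be sketched in a few lines of Python (a throwaway helper of our own, not part of any Octane or C4D API):

```python
# Reduce the source dimensions to their simplest ratio, then pick a render
# size that preserves it. Plain math, not tied to any DCC.
from math import gcd

def render_size(src_w, src_h, out_w):
    g = gcd(src_w, src_h)
    aspect = (src_w // g, src_h // g)              # 4000x6000 -> (2, 3)
    return aspect, (out_w, out_w * src_h // src_w)

print(render_size(4000, 6000, 800))  # ((2, 3), (800, 1200))
```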
Gathering Assets
This isn’t a modeling tutorial, so we’re just going to assume that we already have or made the geometry.
If you don’t have time to build the models (or don’t want to), you can download them here. The models are available in .c4d, .orbx, and .obj formats so they can be used in any DCC or Octane Standalone.
The objects in the files aren’t modeled to exact dimensions or sized properly - they’re just eyeballed, and they won’t hold up to an A/B test with the photo. It doesn’t matter - that’s not our goal here. They’re 80% there: smooth enough to be plausible, with no malformed polygons, flipped normals, or other technical issues that will produce artifacts in the materials, but certainly not professionally optimized. They’re also set up so the axis is centered and facing the right way, so default 3D textures should just work on them without having to rotate them.
Roughing in the Composition
Time to make a rough layout of all our pieces.
Size things appropriately
Thinking back to Tenet #1, we want to make sure our objects are at real world scale.
Let’s put a measuring cube in the scene. This is nothing special, it’s just a normal cube. We want to size it to our largest object (the largest rainbow arch) which we figured would be about 35cm wide after a quick Google search. This will give us a hint about whether our objects are appropriately sized.
Turns out, they’re not. Somebody mixed up radius and diameter in their heads and made the arches 17.5 cm wide instead of 35 cm. Derp. Let’s grab all the Rainbow pieces and scale them up 200% (or just wing it and fit them in the sizing cube in a front view).
It’s hard to tell how large the elephants are at this stage. We can ask AI, and it’ll guess that the small elephant is about 5-8 cm tall. That seems reasonable enough, so let’s make the small elephant 5 cm tall and the large one 8 cm tall. We’ll tweak this later once we get the camera in the scene.
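The fix for the arches boils down to one ratio - measured size versus real-world target (the function name here is ours, just for illustration):

```python
# Uniform scale factor from a measured width vs. the real-world target width.
def scale_percent(measured_cm, target_cm):
    return 100.0 * target_cm / measured_cm

# The arches came in at 17.5 cm instead of 35 cm (radius/diameter mix-up):
print(scale_percent(17.5, 35.0))  # 200.0 -> scale everything up 200%
```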
Laying out the Nested Arches
Keeping Tenet #3: “Perfection is the enemy” in the backs of our heads, we want to grab each rainbow piece and arrange them kind of like they are in the photo.
If we look at the photo closely, we’ll notice that the pieces aren’t perfectly spaced. We can really see this in the gaps between the arches. While this could be a trick of the camera or shoddy manufacturing tolerances, it’s more likely that they’re not spaced out or aligned exactly the same because someone wasn’t sitting there with calipers when setting up their toy scene for a stock photo (lazy!!).
Important: When thinking about why something is a little off, it’s probably because of the human involved in the chain - lean into that :)
Hand-placing the position of our arches (without snapping) and giving each a different little rotation nudge on R.H (probably up to a degree or so in either direction) will give us the randomization we need to make it look like someone arranged the objects instead of a robot. This will help our brains buy the final render just a bit more.
Protip: We’re perching the elephants on the largest and third-largest arch. It’ll be easier to place them if those particular arches don’t rotate, so we can just leave those two straight and nudge the others off-kilter to get the effect we’re after.
It’s worth repeating: We are not trying to match the photo exactly; it doesn’t matter if the gaps are different from the reference image. The point is that our gaps are different from one another. That’s what’s going to make it seem more realistic. If we want to get closer, we can wait until we get the camera set up and then nudge the values.
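The nudging idea can be sketched in plain Python; in C4D we’d feed values like these into each arch’s R.H instead of printing them (the object names, seed, and which arches stay straight are our assumptions):

```python
import random

random.seed(7)  # fixed seed so the scatter stays the same between sessions

arches = [f"arch_{i}" for i in range(1, 8)]   # arch_1 = largest
keep_straight = {"arch_1", "arch_3"}          # the two the elephants perch on

# Up to about a degree of heading (R.H) either way; zero for the perches.
nudges = {
    name: 0.0 if name in keep_straight else random.uniform(-1.0, 1.0)
    for name in arches
}
for name, deg in nudges.items():
    print(f"{name}: {deg:+.2f} deg")
```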
Adding the Elephants
Next, let’s stick the elephants on the arches.
Important: We want to zoom in and make sure the geometry of the elephant’s feet isn’t intersecting the arch it’s standing on, and that it looks like they’re connecting in a plausible way. Floating or intersecting objects without a reason causes a bit of dissonance in the brain. This is one of those places that precision actually does matter because it’s physics at work here, not human inaccuracy.
Elephant Axes
Precise placement of an object on another object (particularly a curved one) can be tricky because there are a few different ways we need to rotate the model to fit right. This calls for Tenet #2: Avoid Conflation.
First off, the geometry axis of the elephant needs to be dead centered to all of the polygons of the elephant (and facing the right way with Z pointing back) for our 3D wood texture to make sense to us (more on this in the Procedural Wood Deep Dive guide). This is already set up like this in the starter file if you’re using C4D, but if you’re using another DCC or built the assets yourself, it’s important that the axis is centered.
With this axis in the center, it’d be difficult to rotate the elephant and get the feet to line up with the arch, so having a second axis widget at the point of contact will help us a lot. In C4D, we can use the Subdivision Surface object’s axis for this by snapping it to the bottom of the front foot. Now we can grab the SubD and snap that to the arch and have a nice pivot point to seat the rear foot on the arch as well.
Finally, we’re not sure where on the arches the elephants are going to land. Fortunately our arches are halves of perfect circles, so if we place a null (or some sort of parent object) on the floor at the center of where the arch’s full circle would be, we can rotate the null and the elephant will travel along the arch.
The large elephant looks like it’s forward a bit from the center of the arch, and the small elephant looks like it’s back a bit. Let’s try about +5° for the large elephant and -5° for the small one. Great.
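The math behind the rotating null: anything parented to a pivot at the circle’s center stays exactly on the arc as the pivot rotates. A small sketch, with an assumed 17.5 cm radius for the largest arch:

```python
from math import hypot, radians, sin, cos

def point_on_arch(radius_cm, theta_deg):
    """Position on a half-circle arch; 0 deg is the very top of the arc,
    positive angles move forward along it, negative angles move back."""
    t = radians(theta_deg)
    return (radius_cm * sin(t), radius_cm * cos(t))  # (forward, up)

fwd, up = point_on_arch(17.5, +5.0)   # large elephant, slightly forward
print(round(fwd, 2), round(up, 2))
assert abs(hypot(fwd, up) - 17.5) < 1e-9  # always exactly on the arch
```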
Composition
With our scene mostly laid out, we can now get our composition going.
Temporary Environment
Depending on how we have our settings, we may or may not have a “Default light” in the scene. The starter file has a stock Texture Environment to get a flat, even, white light going (either pure white or C4D’s 95% white will do fine). At this stage, adding an HDRI would conflate things because it might produce weird reflections and shadows that will throw us off, and according to Tenet #2, we don’t want to be doing that.
Temporary materials
When we’re using a scene as a reference, one of the most important things is to get the lighting reasonably close. That’s difficult to do if our material properties aren’t at least somewhat close to what the finals would be. A high gloss object looks very different from a very matte object when put under the same light, and having a material that’s very different can be confusing and cause us to go down the wrong path.
We’ll need a material for the large elephant, small elephant, floor, wall, arch sides, and then one for the front-facing portion of each arch (the caps). That’s 12. Either the Universal or Standard Surface material is generally the best choice because they give us the greatest flexibility.
The C4D version of the source files comes with polygon selections for the wood caps so we can add the colors in that way rather than having to break the geometry into separate pieces. Check out the scene files to see how this is done.
Albedo
To quickly get some temp colors for the Albedo channel, we can bring up the photo and use a color picker to grab an average color from each object. This can be done within the material settings in Octane (depending on the DCC), or in an app like Affinity or Photoshop, and then we can just copy the values over to the Albedo channels of each material.
Important: These are temporary base colors. The lighting and material properties have a large impact on what we see, so very often a true albedo of a material is difficult to figure out, especially when looking at a processed photograph with questionable white balance or light colors.
If we’re using an image editor, one trick we can employ is to apply a pretty strong gaussian blur to the entire image prior to picking colors. This will blend the colors of each object together and give a pretty nice average that’s easier to hit with the eyedropper.
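Under the hood, blur-then-pick is just averaging a neighborhood of pixels. A pure-Python sketch of that idea (in practice we’d do this in Affinity/Photoshop or with an image library; the toy pixel grid here is made up):

```python
def pick_average(pixels, x, y, radius):
    """Mean RGB of the neighborhood around (x, y) in a 2D grid of
    (r, g, b) tuples -- roughly what blur-then-eyedropper gives us."""
    h, w = len(pixels), len(pixels[0])
    samples = [
        pixels[j][i]
        for j in range(max(0, y - radius), min(h, y + radius + 1))
        for i in range(max(0, x - radius), min(w, x + radius + 1))
    ]
    n = len(samples)
    return tuple(sum(px[k] for px in samples) // n for k in range(3))

# Toy 2x2 "photo": averaging smooths out single-pixel extremes.
toy = [[(200, 160, 120), (90, 60, 40)],
       [(200, 160, 120), (90, 60, 40)]]
print(pick_average(toy, 0, 0, 1))  # (145, 110, 80)
```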
So now we have some base colors in, but our materials are either very glossy if we used a Universal Material, or semi-gloss if we used Standard Surface. This doesn’t match up with any of the materials we see in the scene.
Roughness
We’ll do a deeper analysis of the material later, but in the spirit of Tenet #2: Avoid Conflation, let’s make one quick adjustment now to all the materials to make them easy to work with.
Nothing in the scene looks very glossy, so let’s grab all the materials and set the roughness to 1. Obviously far from perfect, but we won’t get reflections distracting us when placing our objects.
Add a Camera
The next thing we want to do is drop in an Octane Camera. Before we do anything else, we want to decide on a starting focal length. Camera and lens properties go a long way toward achieving different looks. This is an entire topic in and of itself, and there’s a whole guide on it: Photographic Concepts for 3D Artists.
We don’t have time to stop this and read that now (but we will get to it later), so let’s just get some quick takeaways from it to help us here.
In this case, we have the luxury of having some metadata from Pexels, so we know the photographer of our reference image used a 75mm focal length.
If we didn’t know that, we could ask AI. Claude guessed 50-85mm, ChatGPT guessed 35-85mm, and Gemini guessed 50-85mm. A photographer would probably guess somewhere around there too.
Anything in that range would work for our purposes, and really we could probably get away with up to 120mm or so - it doesn’t have to be 75mm to get that look, but in this case it’ll help.
If we’re really new to the whole photography concept thing, just knowing to keep within the 35-90mm range (always starting at 50mm) for most renders will help us get more of a realistic look. Anything past that on either end introduces distortion which may break the suspension of disbelief. Once we know how to use that distortion, we can make our renders more interesting, but until then, it’s one more variable we can take out of the equation using a reasonable default.
Anyway, let’s make our focal length 75mm.
Matching the Camera Shot
This guide was built using Cinema 4D, which uses a “Y-up” coordinate system. If you’re using Blender or a different DCC that has Z-up, then mentally swap Y and Z.
This is all about Tenet 2: Avoid Conflation.
For a still life like this, it’s always easiest to start if our models are resting on a ground plane that’s at world zero, and facing in a way that the most interesting side is parallel with the camera plane. The more the whole scene is aligned along one of the world axes, usually the easier it is to frame the shot.
The camera itself should start out straight at world zero, and then back up a bunch away from the objects so it can see them. Let’s zero out all the position, scale, and rotation coordinates and then move the camera back in Z until we can see all the objects.
Looking through the camera, we want to position it so “eye level” is somewhere around the height of the center of the smaller elephant.
From here, we can either wing it and get reasonably close, or if we want to get more precise, we can use the photo as a reference by assigning it as a backplate in the viewport (if our DCC allows for this), or adding it as a texture to a plane and framing the plane up.
If we’re using C4D, we can make a native Background object, and then build a material (even an Octane material) with the photo in the color/albedo channel, assign it to the background, and it’ll show up in the viewport. We can then set the viewport to “Lines” shading so we can see through the geo and get pretty close. We can also use this opportunity to shift our objects around if needed to better line up.
Important: Don’t go crazy here. There’s no way of knowing the exact dimensions of the toys used in the original or how they’re placed in the scene. Without precise measurements, the pieces aren’t going to line up to the reference exactly (we can see above that the curves of the arches aren’t the same, etc.). Remember the point of the exercise: To figure out what makes this appear real to us, not to recreate the photo pixel-perfectly.
Depth of Field
Now is also a good time to set our aperture via the f-stop control. In the C4D plugin, that’s in the Octane Camera tag in the Thin Lens tab. We happen to know that it’s at f/7.1 because Pexels told us.
If we didn’t know, then we could look for the sharpest pixels in the photo, put our focus point there, and then adjust the f-stop until it looks similar to what’s going on in the photo. Usually a photographer tries to focus on the eye of something that has one; otherwise it’s typically the most important part of the subject. Then there’s an area in front of and behind the focal plane that’s sharp, and after that it starts to go out of focus in both directions.
It’s best to start at about f/2.8 or so because that will exaggerate the effect and let us know we have the depth of field effect active. From there, a lower number increases the strength of the effect, and a higher number lessens it. Once we’re in the 5.6-8 range (assuming a 75 mm lens), it should look similar to our reference. The strength would be different if we were using a different focal length since these physical attributes are linked in the real world.
Speaking of the real world, depth of field is one of those things where we can REALLY see the difference if our geometry is at the wrong scale. DoF is calculated using actual distance from the camera to the subject, so if our elephant was 20 km or 20 nm tall, the same f/7.1 would be drastically different even if we were able to frame our camera up the same.
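To put numbers on that, here’s the standard thin-lens depth-of-field math (textbook formulas, not Octane’s internals; the ~0.6 m subject distance is our guess for this tabletop shot, and 0.029 mm is a common full-frame circle of confusion):

```python
def dof_limits(f_mm, n_stop, subject_mm, coc_mm=0.029):
    """Near/far limits of acceptable sharpness (thin-lens approximation,
    valid while the subject is closer than the hyperfocal distance)."""
    h = f_mm * f_mm / (n_stop * coc_mm) + f_mm            # hyperfocal distance
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
    far = subject_mm * (h - f_mm) / (h - subject_mm)
    return near, far

near, far = dof_limits(75, 7.1, 600)    # 75mm, f/7.1, focused ~60 cm away
print(f"in-focus zone: {far - near:.1f} mm")   # only a couple of centimeters

# The same framing at the wrong scale (everything built 10x too big, so the
# camera sits ~6 m back) gives a far deeper zone at the same f-stop:
near10, far10 = dof_limits(75, 7.1, 6000)
print(f"at 10x scale: {far10 - near10:.0f} mm")
```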
Important: If we want to completely turn the Depth of Field effect off, we need to find the Aperture control and set it to zero. We can’t turn it off using the F-stop control (at least in C4D).
Also Important: Because this isn’t an easy on/off switch, a good approach is to duplicate the camera and set the second one to have a zero aperture (and name it appropriately). Depth of field can cause conflation because it obscures material properties, but we do need to know what our materials look like both in and out of focus, so being able to quickly bounce back and forth is helpful.
Moving on
We now have a good sketch of what our scene looks like. When we go to work on the lighting and materials, we’ll know we have a good starting point and there won’t be any weird shadows or scaling issues when adjusting the settings.
Part III
Lighting
Lighting: First Pass
Reference: Lighting and Emission Guide in Octane
Analysis
Like most product and still life photos, we can safely assume this was shot indoors with controlled lighting. This is often referred to as “studio lighting” or a “studio environment”.
A very common lighting method in a studio environment is a classic 3-light (or 3-point) setup. This involves a key light which is the strongest light source, a fill light which is placed opposite the key light to fill in harsh shadows, and a rim light to add a little extra dimensionality and separate the subject from the background more. Having multiple lights at different intensities like this gives a photo or render a sense of depth and lets us (literally) highlight important parts of the scene.
Looking closely at hotspots, shadows, and reflections will give us clues as to where the lights are placed. We covered this a few thousand words ago, so a refresher is probably needed:
The most obvious thing is that there’s a stronger light source positioned in the upper right-hand side of the scene based on the hotter spots on the tops of the elephants and the darker shadow on the inside of the smaller elephant’s back leg.
There’s also at least one other light source that’s causing the rest of the scene to be fairly evenly lit and countering any harsh shadows.
There isn’t a neutral gray in the scene that we can use to determine the coloring of the lights, and the wood and pink background make it kind of tricky, but it doesn’t appear that there’s any artificially-colored lighting going on (like hot magenta or blue gels we’ll sometimes see). The different light sources also appear to be reasonably close in temperature, and fairly neutral if we really look at the hotspots on the elephant and front left edges of the arches.
Strategy
We want to lean on Tenets 1 and 2 here: Use realistic values and avoid conflation.
Realistic values take a little understanding here, because it’s not straightforward.
Octane lights work on a power/efficiency model. The ins and outs of this are explained in detail in the Lighting and Emission Guide in Octane, so for now, we just want to make sure we understand that the power control is where we’d put our wattage, and efficiency happens in the texture field/pin.
The default texture in the texture field is a float value of 0.7 (70%). This is a good average for an LED, so the power (wattage) should be a lot lower than it would be for an incandescent bulb which is more like 10% efficient.
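The power/efficiency model boils down to simple arithmetic. A quick sketch using the rough averages quoted above (70% for an LED, ~10% for an incandescent bulb):

```python
# Octane treats a light's output as power (watts) x efficiency (the float
# in the texture pin). The same 20 W rating gives very different output:
led_output = 20 * 0.70           # LED panel at ~70% efficiency
incandescent_output = 20 * 0.10  # incandescent bulb at ~10% efficiency

# To match a 20 W LED panel, an incandescent would need roughly 140 W:
equivalent_watts = led_output / 0.10
print(led_output, incandescent_output, equivalent_watts)
```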
Studio Lighting Equipment
In the real world, light panels in a studio setup are usually physically larger (but not too much larger) than the subject to produce softer, more pleasing highlights and shadows. If we look at professional studio light setups, we’ll see that the average size of a panel falls between 40cm for smaller subjects (but not macro) and 150cm for larger ones. 200cm+ panels are usually only used for full-body shots of people or large products.
In real studio setups, there are two types of lights that are generally used. There are very high powered strobe lights that only stay on for fractions of a second when fired, and there are lower-powered constant lights that stay on all the time. In 3D, we don’t have to worry about strobes because our virtual sensors do not have the limitations of a real-world sensor, so we’re all about lower-powered constant lights. That’s what we want to search for if we’re looking to get equivalent real-world values for Octane.
Asking an AI suggests that constant LED studio lights range from around 5 watts for a small handheld unit up to 100-200 watts for a massive panel. Checking a lighting website or two confirms this.
Tenet #3: Keep it real.
Our lights will probably be in the 20-60 watt range since we’re lighting a relatively small scene. It’s always best to start on the low end and then increase as needed - one of the easiest mistakes to make in 3D is adding way too much light and then relying on tone mapping to sort it out. It will, but at the expense of realism and adding other visual issues that take a lot of time or post to fix. Better to set ourselves up for success early.
Starting from zero
Now let’s look at avoiding conflation. We’re going to want to add lights one at a time to see what their effects are.
Important: Octane requires an environment. In most DCCs, a new scene comes with a default texture environment set to white (or close enough). This will stay active until another environment is added, which overrides it. Just adding an area light or emissive material will not turn it off.
Since we’re going to try to light this whole scene only using area lights, this means we’re not seeing the true effect of each light because the environment is pumping in a diffuse white wash over the whole scene. This will cause us to crank values and make bad decisions. Tenet #2 doesn’t like that one bit.
We have two choices here. The easiest fix is to drop in a new texture environment and set the color to black (or if you’re following along with the files, just change the texture in the provided environment to black). This will override the default environment.
The second is to actually change the properties of the default environment. This is located in different places in different DCCs. In C4D it’s in the Octane Settings > Settings > Env. tab. All we need to do is change that to black and we’re good to go.
Setting this up by default to be black is a good way to work (the Custom Defaults guide goes into detail about this for C4D).
Conflation out of the way and realism on the brain, let’s get lighting.
Key Light
The first light we want is our key light. Area lights (or Quad lights depending on the DCC) are the standard go-to for a studio environment.
When we drop a default area light into C4D, it’s like dropping a stadium light in. It’s far too large at 200 cm x 200 cm (about 6.5 ft), and far too powerful for our indoor scene at 100 watts with 70% efficiency. It’s amplified even more with surface brightness on because of the large size. Let’s get this under control before we end up catching our models on fire (we don’t have time for pyro sims right now).
Our entire scene is about 25 cm high, so we’ll want an area light that’s around 40-50 cm in both dimensions. Let’s start with 40 x 40 cm.
To make this more like a real 40 cm LED panel, we only need ~20 watts (in the Power field) for our key light. We can bring it up or down as needed, but we want to start with a nice low realistic value.
We’re sticking with LED panels for this, so we want to keep the efficiency at ~70%, but replacing the default float texture with a Gaussian Spectrum texture (set to 1/1/0.7) will produce a slightly cleaner render and retain the 70% efficiency we’re after. It’s not a big hassle to do and gives Octane spectral values that it wants without having to convert them from RGB, so we’re just being a little nicer to the engine.
Now that our light is more realistic, we also want to make sure surface brightness is kept ON, otherwise it’s going to blow out our scene. This does mean that we’ll have to adjust the power of our light if we make it larger or smaller.
Now we can place the light. This is where some trial and error comes in. If our DCC supports it, setting up a target object for the light is a good idea. In C4D this involves a target tag and target object (a null is a good choice), in others it’s probably different. We want to light the whole scene, so let’s put our target at world zero. Now we just have to worry about moving the light, not aiming it as well.
Our key light is the brightest one in the scene, and the brightest highlights in the photo are on the backs of the elephants on the right side. It’s also pretty high up, so let’s move the light about 45 degrees to the right from world center and raise it up a bunch until the highlights read about the same on our elephants. We want to make sure the light isn’t too close to the objects or the wall or splashing too much light on the wall.
Fill Light
Let’s duplicate our key light and the target, rename them, and move the new light to the other side, down on the bottom left. In most studio lighting setups, the fill light starts at half or a third of the intensity of the key. Let’s start at half and see what that gets us, so a power of 10 watts.
If we look at the reference photo, we’ll see that there aren’t sharp specular highlights on the left side, so we need the light to be softer. This is achieved by some combination of making the light larger, moving it further from the object, and/or reducing its power.
The lighting is very even around the left side with no noticeable hotspots, so making the panel larger is probably the right move. Let’s try 60x60 cm.
Because we have surface brightness on, this means we’re essentially adding more LEDs to the panel, which means more light output. We can counter this by reducing the power, or by pulling the panel away. Let’s move the panel back a bunch, and we can reduce the power later if we need to.
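Since total output scales with panel area when surface brightness is on, the compensation is straightforward arithmetic. A sketch assuming square panels and output proportional to area, as described above:

```python
# With surface brightness ON, total output scales with panel area.
# To keep the same total light when resizing, divide power by the area ratio.
def compensated_power(old_power, old_size_cm, new_size_cm):
    area_ratio = (new_size_cm / old_size_cm) ** 2  # square panels
    return old_power / area_ratio

# Growing the 40x40 cm fill (10 W) to 60x60 cm means 2.25x the area,
# so ~4.4 W keeps the total output roughly the same:
print(compensated_power(10, 40, 60))
```

Pulling the panel further away instead (as we're doing here) achieves a similar exposure on the subject while also softening the light, which is the whole point of the larger fill.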
Overhead Light
We may or may not need this, but let’s set it up anyway. An overhead light puts another wash over the whole scene, further softening and evening it out. We want to have a VERY light touch with this though - let’s duplicate our fill light, bring it up overhead (but still in front a bit) and reduce the power to 1.
This just kind of rounds out the lighting a little bit and adds a smidge more realism to the scene. It should be more apparent when we develop our materials and they start catching the light in more interesting ways.
Where we’re at
Our scene is a little overexposed (too bright), but once we start in on our materials, we’ll probably have to make some adjustments anyway. The important part is that the lighting is now in the same ballpark as the photo’s.
Part IV
Materials
We’re at kind of a crossroads in this guide. Building and refining a complex material like wood is a fascinating project all on its own. There are a lot of steps involved and things to understand, all of which would easily double the length of this guide.
For that reason, the material building portion of this was split out into its own guide called Procedural Wood in Octane: A Deep Dive
If you’d like, you can pause this one and jump over to that one and learn how the wood was built, and then pick up where you left off here with your own materials, or if you’re pressed for time or just not interested in material building and would rather just skip that step, you can download the materials (.c4d or .orbx file) made during the course of writing this guide here, apply them, and continue on.
All three wood materials use the same setup based on the beech wood that’s being used on the small elephant, but they’ve been modified in various ways for each object.
Important: Large, multi-node materials (especially ones with heavier effects like UVW distortion) can be cumbersome. Depending on the hardware (and DCC) used, this may or may not cause some lag in pre-processing before Octane starts rendering. These materials can eventually be baked, but while we’re doing lookdev we’ll likely want to go back and alter parts of them, so it’s best to keep them procedural for now.
When we apply the materials, the sides of the arches and elephants appear correct, but the caps don’t. Let’s look at why this is.
Coloring the arch caps
We have seven arches which are all made out of the same material (and probably even cut from the same block of wood in real life). The difference is that the caps are all unique colors.
There are a few ways to approach this depending on the DCC, but what we’re after is a way to keep the same base material, but assign different colors to each face (which are polygon selections in our C4D model).
In the most recent version of the C4D plugin as of this writing (Octane 2026 1.8.4), we can use custom user data tags which means we can have one single material for all of the arches, which is very cool.
This material utilizes a 3D wood texture (so the caps and sides look the way real carved wood does), triplanar projection so that only the front faces are stained with the different colors, and user data so we can assign colors to each cap.
Steps for C4D:
The material is already set up with an Attribute Texture node feeding into the composite material texture. If we open the texture in the node editor, we’ll see that the Parameter Name is “archcolor”.
Important: We can name the parameter whatever we want, but we just need to make sure that it matches the User Data attribute’s Name field exactly (not the shortname).
- Make and place a User Data tag on the first arch - this can either be done by hitting Shift-C and searching for User Data, or going into the Tags menu and finding it under Programming Tags
- Select the User Data tag and in the User Data menu in the Attributes Manager, choose Add User Data…
- In the popup, put in “archcolor” (or if you changed the name in the material, put that name here instead). Make sure both the Name and Shortname update with that.
- Change the Data Type to Color
- Either pick a default color or leave it black - it doesn’t matter, we’re going to be changing them in the next step anyway.
- Hit OK
- In the User Data tab of the User Data tag, change archcolor to #FF210E or whatever red seems closest.
- Duplicate the tag to every arch, and change the rest of the colors. The colors picked in part II were: #FF210E, #E9500C, #FF9A06, #5E7A09, #72939C, #03758F, and #9D3D4E
- Hit the Octane button (send scene and restart render) to see the updated colors in the arches.
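If you're setting these colors up programmatically, or in a DCC whose color fields want float values, remember that Octane colors are 0-1 floats rather than 0-255 hex. A small generic helper - `hex_to_rgb` is a hypothetical utility for illustration, not part of any Octane or C4D API:

```python
def hex_to_rgb(hex_color):
    """Convert '#RRGGBB' to the 0-1 float RGB triple Octane color fields use."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))

# The seven cap colors picked in Part II:
arch_colors = ["#FF210E", "#E9500C", "#FF9A06", "#5E7A09",
               "#72939C", "#03758F", "#9D3D4E"]
for c in arch_colors:
    print(c, tuple(round(v, 3) for v in hex_to_rgb(c)))
```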
Light temperatures
The reference photo still has a yellowish cast to it that we’re not achieving, even when tweaking our material colors. This likely comes from the temperature of the lights that were used, so we can try changing ours - one at a time (tenet #2) - to see if we can get closer.
Setting the key light at 5500 K, fill light at 4500 K, and keeping the overhead at 6500 K gets us reasonably close. Studio photographers do tend to use different temperature lights to get a bit more depth in their photos, so it wouldn’t be unreasonable to do this in our render.
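A quick way to build intuition for why the lower-kelvin lights read warmer is Wien's displacement law, which gives the peak wavelength of an ideal blackbody at a given temperature. Real studio LEDs aren't true blackbodies, so treat this as a directional sanity check rather than a color-accurate model:

```python
# Wien's displacement law: peak wavelength (nm) of an ideal blackbody.
# Lower color temperatures peak at longer (redder) wavelengths, which is
# why the 4500 K fill reads warmer than the 6500 K overhead.
WIEN_NM_K = 2.898e6  # Wien's constant in nm*K

for kelvin in (4500, 5500, 6500):
    print(f"{kelvin} K peaks near {WIEN_NM_K / kelvin:.0f} nm")
```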
This is feeling more in line with the reference photo. If we want to get closer, we may have to adjust the material colors again (especially the back wall), but at this point we should wait until we do all the rest of our tweaks first in case we get another color shift.
Where we’re at
80% down, 80% to go. We have a solid foundation now, and all that’s left is tweaking things until they look right.
Part V
Tweaking Towards Realism
Setting Ourselves Up For Success
All three of our tenets are employed in this part.
Up until now, we’ve been using real-world values in our lights and materials, but we can also apply Tenet #1 to how we’re going to actually _view_ our render. At this stage we want to set our resolution to the size it’s going to be when we deliver it. Not only is it a waste of processing power and time to do lookdev in a much higher resolution than we’re going to actually need, but the act of scaling the pixels down will alter the look of the final piece, which is exactly what Tenet #2 wants us to avoid. We’ll look at Tenet #3 in a moment.
To start out, we want to turn off depth of field in the camera (or use the no-DoF camera we set up earlier) so we can see the materials clearly instead of making bad decisions based on a blurred view.
Let’s just say our target resolution here is 1280x1920 because it fits nicely on a 4K screen.
We already set the resolution where we want it, but we want to make sure we’re not seeing a scaled version in the Live Viewer. This is what the Lock Resolution button is for at the top of the LV window (looks like a lock). It makes sure that Octane is only rendering the final pixels at 1:1 scale. Let’s also make sure both numbers to the right of the kernel dropdown (set to PT) show as 1. This means we’re not changing the resolution or zooming.
Custom Layout
For this part, building a custom layout is pretty helpful. We’re going to want a lot of real estate for the live viewer and the node editor, and then make sure we have access to the object manager, materials manager and attribute manager (so we have two places we can make quick adjustments to the material). Your DCC will probably differ a bunch if you’re not using C4D, but it probably allows for custom layouts.
Other Lookdev Tools
Setting a render region up around the area we’re working in is a great way to quickly iterate on one particular area without having to render every pixel. If you’re fortunate enough to have an A/B comparison tool in your plugin like C4D’s (in the Compare menu of the Live Viewer), this is a great way to test subtle changes and make sure you’re improving things.
All In on Tenet #3
Perfection is the Enemy
Now that we have our basic materials in, our render looks pretty good, but it’s lacking that realistic feel we’re after. This is mostly because our materials are too perfect looking, so it’s causing dissonance in our brains.
Each thing that we do in this chapter is going to be very subtle, but combined, it’ll create enough varied imperfection to look natural.
If you have a bunch of scratch or grunge maps at your disposal, this is the step in the process where they’re really useful. The downside is that they’re 2D textures, so the mapping can become an issue (particularly if the UVs aren’t good on the models), but triplanar mapping can be an effective way to get around this.
In this guide, we’re going to come at it from a purely procedural standpoint and rely heavily on different noises instead of baked maps.
Important: When using a 3D texture in Octane like the various Noise nodes, we want to make sure our projection is set to XYZ to UVW. This is especially true in C4D where the default projections for most textures are MeshUV.
The Big Elephant
Let’s start with the elephants, since their materials are going to be easier to alter than the arches. We’re not going to go into every setting here (if you’re interested in those, you can poke around in the source files). This is really to get you thinking about the steps needed to make things look more realistic.
Important: The source photo is very contrasty and a little oversaturated, both of which are set in post. Right now we’re looking for material properties (how is the light interacting with them?), so adjusting base colors to try to match the style of the photo would be wasted time. We’ll get to that in the next chapter.
Bring on the Noise
The largest difference we can make here is adding a noise overlay on top of our texture. That will vary it up and make it more like natural wood. The example above uses an Octane Noise set to Turbulence, shrunk to 0.25, and recolored to have some brown in it to match the wood. It’s set to the Overlay blend mode and ~0.5 opacity in the composite texture layer node to visually blend it into the base texture. We don’t want to be too heavy-handed with distressing the textures because the objects in the scene are relatively clean - we’re just looking for some variation to make it feel more natural.
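Under the hood, an Overlay blend at partial opacity is simple per-channel math. Here's a minimal sketch assuming the conventional Overlay formula and a linear opacity mix - the guide doesn't confirm Octane's exact internals, so this is illustrative rather than a claim about the composite texture node's implementation:

```python
def overlay(base, blend):
    """Conventional per-channel Overlay blend (values in 0-1):
    darkens where the base is dark, brightens where it's bright."""
    return 2 * base * blend if base < 0.5 else 1 - 2 * (1 - base) * (1 - blend)

def composite(base, noise, opacity=0.5):
    """Blend result mixed back toward the base by opacity."""
    return base + opacity * (overlay(base, noise) - base)

# A mid-brown wood value nudged darker by a darker noise sample:
print(composite(0.45, 0.35))
```

The ~0.5 opacity is doing the "light touch" work here - at full opacity the noise would dominate the wood rather than just varying it.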
Specular/Roughness
Now that our lighting is better set up, we can see that the elephant has kind of a varnish over the top that’s a bit more reflective than what we currently have. Switching from our float value of 1 to a slightly yellowish #FFFAE0 for the specular color will warm it up a little. In C4D, if we’re using a color in a data channel like Specular, the strength of the effect is determined by the Value (the V in HSV) of the color, and this overrides the built-in float value. In other DCCs it might work differently depending on whether there are built-in controls or not. Dialing back the roughness to 0.5 will make it a little glossier to match the reference.
The Small Elephant
We’re going to do the same thing here and apply a large noise across the whole thing to make it more organic. The small elephant’s wood is also a bit rougher and more natural, so adding some bump here will help sell it.
This bump needs to be SUPER subtle, and we’ll need to experiment with where to even take the bump from. In the illustration above, it’s a combination of the medullary rays and the rings, but not the grain, and both have been dialed WAY back using gradient maps so they’re almost not even there. That tiny bit makes a big difference though, and the texture no longer looks flat.
Arches
We have a few issues going on here. Probably the most common problem in 3D when it comes to geometry and realism is this: Nothing in the real world has perfectly sharp edges or corners. When light hits a corner or an edge of something, there’s always at least a tiny bit of wear or bevel that catches the light just so. When we don’t have that in our 3D models, it breaks the suspension of disbelief.
We can fix this very quickly in Cinema 4D by putting a Bevel Deformer object in the null with all the arches and setting it to a 0.2 cm offset with 2 subdivisions (so it’s rounded, not chiseled). Because we’re using a 3D texture for the wood, the bevels won’t produce any problems with the rays not matching up.
Just like there’s no such thing as a perfectly sharp piece of geometry, there’s also no such thing as a perfectly sharp stain or paint line. The bevel helped us out a little bit, but since we’re set up with Triplanar projection to stain the fronts of the arches, we can just change the blend angle to 60° in the triplanar node, and that will soften the transition between color and natural wood.
The last thing we’re going to do is put a large noise over the top of the whole thing to add some variation just like we did with the elephants.
The Wall
The wall in the background is mostly going to be blurred out by the DoF, but it still needs just a tiny bit of something so it’s not a perfectly flat color. Even if the wall itself is painted an even color, adding a bit of bump to simulate the orange peel texture found on most interior walls sends a signal to our brain that says “this is real, don’t fixate on it.”
We don’t need to spend a lot of time on this, just tossing a Buya noise into the bump channel, setting the global scale to something like 10%, and inverting it will do it. We don’t even need to adjust the color - the variation in the bump will take care of that for us.
Important: This is something we should adjust while DoF is on, since we may have to go a little stronger than we would without it just to see something.
Where we’re at
Our values are all real and we’ve satisfied all our tenets, but we’re just not quite at the look we’re after yet. There’s one more secret ingredient missing…
Part VI
Fix it in Post
This is the “photo” part of photorealism. We don’t use cameras to perfectly capture every aspect of a scene (we couldn’t even if we tried) - we use them to tell a story and call attention to things.
The kicker of this whole thing is that we spent all this time making sure all our values are realistic, and our monitors can’t even display enough information to make an image appear to our brains as really truly real.
In the photography world, even if our very expensive camera sensor has a high dynamic range and our very expensive lens is manufactured to be as pure, neutral, and undistorted as possible, we’re still limited by the number of values our screens (or photo printers/inks) can display. It’s up to us to steer our post processing techniques to produce something that gets the most important info across.
This concept is also true in the render engine realm. Octane does a truly amazing job of simulating light physics to get the most accurate representation of realism possible… BUT… we still can’t see it all on an 8-bit (or even 10-bit) display.
After Octane hands us a massive set of data (32-bit, linear encoded), we have two main sets of tools at our disposal to mold it into our final image:
“Tone Mapping”
The first tool is what’s commonly known as “tone mapping” (this term is problematic, but it’s all we have right now to communicate this concept). We set up our initial scene to have a straight sRGB transfer function. This means any value that’s not able to be displayed by an 8-bit sRGB display (which is what most of our audience is using), just gets clipped off and discarded.
This is a pretty ruthless and unforgiving way of handling the data, but we were very careful in our setup to keep our lights and colors at physically plausible low-enough intensity settings so that we wouldn’t produce many (if any) values that would get clipped. This possibly limited us a little, but the thing we were trying to reproduce was just a simple scene in normal lighting - we weren’t trying to look into a laser in a pitch black room or anything. As a result, straight sRGB (or no tone mapping) worked out fine.
If we had more extreme values, we could have used something like ACES, AgX, or Octane’s new Smooth tone mapping to corral the out-of-range values that straight sRGB would have discarded, and instead shift them back into the sRGB gamut so that we didn’t get blown highlights or other artifacts. This shifting makes picking colors more difficult though, because they - y’know - shift. :)
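To make the clipping concrete, here's the standard sRGB transfer function with a hard clamp - the "ruthless and unforgiving" behavior described above, where every linear value past 1.0 lands on the same white:

```python
def linear_to_srgb(x):
    """Standard sRGB transfer function (IEC 61966-2-1) with a hard clamp."""
    x = min(max(x, 0.0), 1.0)  # straight sRGB: anything above 1.0 is discarded
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

# A highlight at 1.4 and one at 4.0 both clip to the same white -
# all detail between them is gone:
print(linear_to_srgb(1.4), linear_to_srgb(4.0))
```

A tone mapper like ACES or AgX would instead compress those out-of-range values down toward (but below) white, preserving some highlight detail at the cost of shifting colors.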
Tone mapping is generally more of a blunt force approach that affects everything all at once. The time to have done that was when we were setting up our lighting so we could adjust our light values and colors accordingly. This is especially true with ACES which is extremely aggressive when shifting colors, and it adds extra contrast (which we can see in the illustration above).
If we were to switch now, we’d have to go back and adjust every light and material to compensate. There’s no reason to, so we’re just not going to.
General Post Processing
The second set of tools are things like contrast, saturation, levels, and all that. These tools also shift values, but in a FAR more subtle, adjustable, and targeted way that doesn’t affect the overall intent of the image unless we push them too far. This is where we can nudge the knobs to “sweeten” the image a bit and make up for the lack of displayable information by calling attention to the important areas. As a result, if handled with a light touch, it appears to our brains as more real, or at very least more photographic.
Apps like Nuke, Fusion, and After Effects specialize in post, but Octane has a pretty robust set of post tools that we can use without having to re-render and round trip. This requires the Output AOV system which we’ll quickly set up now.
Setting up AOVs
This differs from DCC to DCC. In C4D, we need to:
- Open C4D’s Render Settings (Ctrl-B/Cmd-B)
- Go into the Octane Renderer section on the left (make sure Octane is the active render engine at the top if you don’t see this)
- Go to the Output AOV Compositor tab, hit the Add Output AOV button
- Open the node editor with the Node Editor button two over from the Add Output AOV button.
- Select the Output AOV Group node, and in the attribute manager on the right, hit the little pencil icon button (edit) next to the red X, choose Render Output AOV, and then pick Beauty
- Select the Effects layer node, hit the little pencil icon in the Layer 1 area, and add an Adjust Saturation node
- Go back to the Effects node, add another layer, hit the little pencil icon in the Layer 2 section, and then add a Sharpen node to it.
From here we can play with these two effects to make it just a little punchier - this helps catch the brain’s attention, but if we go too far it will read as unrealistic, so we have to toe that line carefully.
120% Saturation and 0.25 Sharpen seemed to get us pretty close and give it just the little punch it needed. Some post nodes should be applied prior to tone mapping (where the values are still linear), and some (labeled “SDR-only”) should only be applied after because they’re designed to work with non-linear values (“linear” is complicated - there’s a whole guide on it). Post is a whole other topic, but it’s worth playing around with some different ones and seeing what you can come up with.
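As a rough illustration of what a saturation boost does to a pixel, saturation can be modeled as a lerp away from luma. This sketch assumes Rec. 709 luma weights and is only a conceptual model - not a claim about how Octane's Adjust Saturation node is actually implemented:

```python
def adjust_saturation(rgb, amount):
    """Scale saturation by lerping each channel away from Rec. 709 luma.
    amount=1.0 leaves the color unchanged; 1.2 is the ~120% boost above."""
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return tuple(luma + amount * (c - luma) for c in rgb)

# A warm wood tone pushed 20% further from gray:
warm_wood = (0.55, 0.38, 0.22)
print(adjust_saturation(warm_wood, 1.2))
```

Note that grays are unaffected (every channel already equals luma), which is why a gentle saturation boost punches up the wood and the pink wall without shifting neutral shadows.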
Final Tweaks
Our post effects will shift the colors a bit, so we’ll probably want to go in and tweak material colors, noise patterns, and other things to either try to get closer to the photo or make something we like better.
Important: Tenet #2: Avoiding conflation is the key here - we want to find something we don’t like, narrow it down to the one or two settings that are causing the dissonance, and tweak just that. We’re working with complex materials here. If the wood is too dark, it might be the base colors, but it also might be the noise overlay or the medullary rays, or maybe we just added too much contrast in post. We can disable and re-enable things as needed to narrow it down so we’re not overcompensating with a different setting rather than fixing the actual problem.
In the illustration above, the colors in the user data tags were revisited, the noise patterns were mucked with a little, and a few other values were shifted around to try to get just a little closer to the reference photo.
Wrap Up
If you made it this far, congratulations! - this was a long one :) You should now have a pretty good understanding of how to critically analyze a photo (with or without AI’s help) and have an idea of some of the important factors that go into making a render appear photorealistic.