Requests from a lighting artist

Heyo,

Still brand new to the engine but couldn’t wait to get my mitts on it and try it out. I have a few requests, and while I am capable of implementing some of this stuff myself in C plugins, it’ll take me a bit to get up to speed with the API, so I figured it would be useful to share feedback in the meantime.

Light units

Light intensity is in EV right now for both analytic and directional lights, which is a logarithmic scale. I confirmed this by locking my in-camera exposure to 16 and matching it with a “16”-intensity directional light.

It would be nice to have a way to adjust lights linearly as well. Most real-time engines seem to default to linear units, and most offline renderers offer both logarithmic (stop-based) and linear lighting adjustments.

Also, having a physical basis for the units would be nice. You already have EV100, and I see in the code you have gone some way toward making your exposure physically plausible, so it’s only a small jump to get photometric or at least radiometric units in the engine. This way I can set lights like I would in the real world and validate them in-engine.
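To make the EV/linear relationship concrete, here’s a tiny sketch of how a stop-based intensity could map to a linear multiplier. This assumes the engine simply treats EV as log2 of a linear intensity (which is what my exposure-matching test above suggested); it’s an illustration, not engine code.

```python
import math

def ev_to_linear(ev: float) -> float:
    """Convert a stop-based (EV) intensity to a linear multiplier.

    Assumes EV is log2 of linear intensity, i.e. +1 EV doubles
    the light's output. Illustrative, not the engine's actual mapping.
    """
    return 2.0 ** ev

def linear_to_ev(linear: float) -> float:
    """Inverse mapping: linear multiplier back to stops."""
    return math.log2(linear)

# A "16" EV light corresponds to a 65536x linear multiplier:
print(ev_to_linear(16))      # 65536.0
print(linear_to_ev(2.0))     # 1.0 -- doubling the light is +1 stop
```

A linear slider in the UI could just round-trip through these two functions while storing the EV value internally.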

Speaking of validation…

Lighting debug

I’m sure you’re aware, but the lighting debug modes are very bare right now. Again, nothing a plugin can’t fix, but just putting it out there. The EV100 visualization mode isn’t much use without at least a readout of scene-linear RGB values at the center of the screen or at my mouse cursor. The histogram is similarly hampered by a general lack of readability, although I can kinda see what it’s going for.

On top of those, I would love to see a proper waveform monitor, a color scope (measuring saturation), and a false color mode which shows areas of critical over- and under-exposure.

Exposure

Is pre-exposure implemented? I seem to get lighting anomalies with lights at intensities approaching EV 16 or so, and I assume this is hitting the wall of typical floating-point range. It’s common to pre-expose lights based on the current engine exposure to get around this out-of-range problem.

Camera

Like the light units, I would love having some physical basis for camera controls: actual focal length and image sensor size setups, at least. In addition, tying the whole post-process stack together with a physical camera, so that the physical camera controls actually drive post-process effects like DoF, lens flare, etc., feels like a more modern approach than “gamey” arbitrary controls for DoF and so on.

Colorspace

Having more transparency around what tonemapper and what colorspace is being used is critical for having workable solutions for colorgrading, specifically LUT-based HDR colorgrading. Some more info here would be great.

Thanks for listening! I’ll continue to dig in and hopefully contribute in some way.


Thanks for the very valuable input. I agree with all your points and we will be working towards adding the missing stuff:

  • Toggle between linear and EV/Stop based light intensity
  • Possibility to specify light intensity in physical units (and Kelvin for color)
  • Improve the EV100 visualization mode with readout
  • Proper graduation of the X axis in the histogram. If there’s more stuff you feel is missing in the histogram view, please let us know.
  • Waveform monitor and color scope would be nice. Doesn’t feel too hard to add either.
  • False color mode – I’m not 100% sure what this is, but I assume it’s basically just highlighting over- and under-exposed areas somehow…
  • Pre-exposure sounds like a nice addition; I would need to investigate what the best strategies are here, as I haven’t implemented it before. If you have any reference documentation it would be much appreciated.
  • Physically based camera parameters
  • Better control over the tone mapper. Right now we are using a curve fitted approximation of ACES. See post_processing/composite.shader. If you have suggestions of what you’d like to see, let us know.

Again, awesome feedback, thanks so much for taking the time to write it down. If there’s anything you feel I missed in the bullet point list above, please let me know.

Cheers,
– tobiasp


False color mode – I’m not 100% sure what this is, but I assume it’s basically just highlighting over- and under-exposed areas somehow…

Yeah, this is exactly it. This link has more details on the common post-process visualizations for color-grading. It’s not super concretely defined and varies between camera manufacturers, but the idea is to show areas approaching or at under/over-exposure at a glance, and usually 18% (middle) gray as well.

The reason the middle gray readout is useful is that, when setting exposure, jamming a Macbeth chart into the scene and tuning until middle gray reads as true middle gray is a super fast way to dial in a scene.
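To make the idea concrete, here’s a toy sketch of a false-color mapping from scene-linear luminance to a handful of bands. All the thresholds here are made up for illustration; real false-color scales vary between manufacturers, as noted above:

```python
def false_color(luminance: float, middle_gray: float = 0.18) -> str:
    """Map a scene-linear luminance to a false-color band.

    Thresholds are illustrative only: ~6 stops under middle gray is
    flagged as crushed, ~5 stops over as clipped, and values within
    10% of middle gray are highlighted green.
    """
    if luminance < middle_gray / 64.0:
        return "purple"   # critical under-exposure / crushed blacks
    if luminance > middle_gray * 32.0:
        return "red"      # critical over-exposure / clipped highlights
    if abs(luminance - middle_gray) < middle_gray * 0.1:
        return "green"    # ~18% middle gray
    return "gray"         # normally exposed

print(false_color(0.18))   # green
print(false_color(10.0))   # red
```

In a real implementation this would run per-pixel in a debug shader, but the banding logic is this simple.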

Pre-exposure sounds like a nice addition; I would need to investigate what the best strategies are here, as I haven’t implemented it before. If you have any reference documentation it would be much appreciated.

This documentation provided by Google Filament should elucidate nicely. The whole lighting section is a joy to read, really.

Better control over the tone mapper. Right now we are using a curve fitted approximation of ACES. See post_processing/composite.shader. If you have suggestions of what you’d like to see, let us know.

In general, I try to keep the tonemapper static (as it is now) and rely on having powerful grading tools to get the look I’m after. Having a static tonemapper has a lot of benefits that will become apparent below.

HDR colorspaces and color-grading are a huge topic but I’m happy to discuss it a bit. I really like ACES because it’s a known standard and if you implement it properly, you can automatically support a lot of different HDR standards while keeping everything in the same colorspace.

Using the ACES RRT/ODT for tonemapping and device output seems to be a nice way to do it. ACES also has a whole ingestion part of its pipeline - the IDT - but this probably isn’t as useful or necessary for a game engine, since the benefits of bringing everything into the ACES gamut during rendering are minimal compared to the benefits that the end of the pipeline brings. That said, rendering in the ACEScg gamut allows for more saturation than linear RGB.

OpenColorIO is a great library that can manage the colorspace transforms, and it currently supports ACES. A huge amount of VFX software also supports it, which means you get automatic compatibility with the engine if you choose to support it.

For the colorgrading itself, you can either keep all controls in-engine or provide an ability to upload a LUT in HDR space. The trick if you go with the LUT approach is that:

  • The engine needs to output an image to be color-corrected in pre-tonemap HDR space
  • The colorist needs to know what tonemapper the engine is using so they can recreate it in their colorgrading app to see the final picture (here is where conforming to ACES RRT/ODT comes in handy!)
  • The engine needs to be able to take the graded image back in, probably in 16-bit EXR format
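To illustrate what sampling an HDR LUT in pre-tonemap space involves, here’s a tiny sketch: HDR values first go through a log shaper so a finite LUT can index them, then the LUT is looked up. Nearest-neighbor keeps the sketch short; real implementations use trilinear or tetrahedral interpolation, and the shaper range here is made up:

```python
import math

def shaper_log2(x: float, min_ev: float = -8.0, max_ev: float = 8.0) -> float:
    """Map a scene-linear HDR value into [0, 1] via a log2 shaper,
    so a finite LUT can index it. min/max stops are illustrative."""
    ev = math.log2(max(x, 1e-8))
    return min(max((ev - min_ev) / (max_ev - min_ev), 0.0), 1.0)

def sample_lut(lut, r: float, g: float, b: float):
    """Nearest-neighbor lookup in a cubic LUT stored as
    lut[ri][gi][bi] -> (r', g', b'). Real graders interpolate
    (trilinear/tetrahedral); nearest keeps the sketch short."""
    n = len(lut) - 1
    idx = lambda v: round(shaper_log2(v) * n)
    return lut[idx(r)][idx(g)][idx(b)]

# A 2x2x2 "identity" LUT whose entries are their own indices:
lut = [[[(i, j, k) for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
print(sample_lut(lut, 256.0, 0.001, 256.0))  # bright R/B, dark G
```

The shaper (and its range) is exactly the kind of detail the colorist needs documented so the grading app can replicate it.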

Personally, while I’m okay with the in-engine controls that UE4, etc., provide, nothing beats the flexibility of being able to use HDR LUTs…but it’s kind of a niche requirement that demands a good amount of knowledge from users, so maybe the litany of tasks I outline above is reason enough to start by shipping a simple in-engine grading toolset. :slight_smile:

There’s a lot more to say on color (specifically on why LUTs can be so great) but I think that’s a good place to start…

Oh, one last thing I forgot to mention:
In addition to a grading pass, a post-process fullscreen pass with access to the G-buffer/depth is incredibly useful for achieving certain looks. For example, film halation.

Because this look is spatially-varying, you can’t do it with simple grading controls or even a LUT - you need a post-process fullscreen pass. Often I find that to achieve difficult looks, I’m combining grading with some kind of spatially-varying post-process technique, so it’s a great thing to have as well.
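For the curious, the halation recipe is roughly: bright-pass, blur, tint red-orange, add back. Here’s a toy 1D sketch of that; a real implementation would be a fullscreen shader with a separable 2D blur, and every constant below is made up for illustration:

```python
def halation(pixels, threshold=1.0, spread=2, tint=(1.0, 0.35, 0.1), strength=0.3):
    """Toy 1D halation pass over a scanline of linear RGB tuples.

    1. bright-pass: keep only energy above a linear threshold
    2. box-blur the bright pixels to spread the glow
    3. tint the glow red-orange and add it back onto the image
    All constants are illustrative.
    """
    bright = [tuple(max(c - threshold, 0.0) for c in p) for p in pixels]
    n = len(bright)
    blurred = []
    for i in range(n):
        window = bright[max(0, i - spread): i + spread + 1]
        blurred.append(tuple(sum(p[c] for p in window) / len(window)
                             for c in range(3)))
    return [tuple(p[c] + strength * tint[c] * blurred[i][c] for c in range(3))
            for i, p in enumerate(pixels)]

# A single hot pixel bleeds a red-tinted glow onto its dark neighbors:
scanline = [(0.0, 0.0, 0.0)] * 5 + [(10.0, 10.0, 10.0)] + [(0.0, 0.0, 0.0)] * 5
print(halation(scanline)[4])  # nonzero, red-dominant
```

The key point for the engine is just that this pass needs the HDR buffer (and ideally depth) before tonemapping, which a grading LUT never sees.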


For the 2021.1 release we’re adding a few of these features: false-color mode and physically based camera parameters. I think we’ll end up extending both of them in a later release after we’ve had a chance to properly play around with them, but they should be pretty usable already.

I’m trying to implement these features when I have a bit of time in between bigger tasks so hopefully I can add more of them soon. Thanks for the great resources!

– Frank
