Still brand new to the engine, but I couldn’t wait to get my mitts on it. I have a few requests, and while I’m capable of implementing some of this myself in C plugins, it’ll take me a while to get up to speed with the API, so I figured it would be useful to share feedback in the meantime.
Light intensity is currently specified in EV for both analytic and directional lights, which is a logarithmic scale. I confirmed this by constraining my in-camera exposure to 16 and matching it against a directional light with an intensity of 16.
It would be nice to have a way to adjust lights linearly as well. Most real-time engines seem to default to linear units, while most offline renderers offer both logarithmic (stop-based) and linear lighting adjustments.
Having a physical basis for the units would also be nice. You already have EV100, and I see in the code you have gone some way toward making your exposure physically plausible, so it’s only a small jump to supporting photometric (or at least radiometric) units in the engine. That way I can set lights the way I would in the real world and validate them in-engine.
Speaking of validation…
I’m sure you’re aware, but the lighting debug modes are very bare right now. Again, nothing a plugin can’t fix, but I’m putting it out there. The EV100 visualization mode isn’t much use without at least a readout of scene-linear RGB values at the center of the screen or under my mouse cursor. The histogram is similarly hampered by a general lack of readability, although I can kinda see what it’s going for.
On top of those, I would love to see a proper waveform monitor, a vectorscope (for hue and saturation), and a false color mode that flags areas of critical over- and under-exposure.
Is pre-exposure implemented? I seem to get lighting anomalies with lights at intensities approaching EV 16 or so, and I assume this is hitting the wall of typical floating-point range (half precision tops out at 65504, just under 2^16). It’s common to pre-expose lights based on the current engine exposure to get around this out-of-range problem.
Like the light units, I would love a physical basis for camera controls: actual focal length and image-sensor size settings, at least. In addition, tying the whole post-process stack to a physical camera, so that the physical camera controls actually drive effects like DoF and lens flare, feels like a more modern approach than arbitrary “gamey” controls for each effect.
More transparency around which tonemapper and which color space are being used is critical for workable color-grading solutions, specifically LUT-based HDR color grading. Some more info here would be great.
Thanks for listening! I’ll continue to dig in and hopefully contribute in some way.