
Original article: http://mynameismjp.wordpress.com/2010/04/30/a-closer-look-at-tone-mapping/


A CLOSER LOOK AT TONE MAPPING

A few months ago my coworker showed me some slides from a presentation by tri-Ace regarding their game “Star Ocean 4”. The slides that really caught my eye were pages 90 to 96, where they discussed their approach to tone mapping. Instead of using the standard Reinhard tone mapping operator that everybody is so fond of, they decided to instead use curves based on actual specifications from different film types and CMOS sensors. This not only produced some really nice results (the screenshots in the slides speak for themselves), but it fit very nicely into their “virtual camera” approach towards post processing. While I was intrigued by their approach, it wasn’t until I read through John Hable’s recent presentation on gamma and HDR lighting that I decided to start doing my own research. His presentation gave an overview of Uncharted 2’s approach to tone mapping, which (like Star Ocean 4) eschews Reinhard’s operator in favor of mimicking a filmic response curve. Once again the images in the slides speak for themselves, and they intrigued me enough to make me dig deeper.

Like always, I started off by making a test application that would let me try out different approaches and observe their results. Initially my app started out with the approach taken by pretty much every other HDR sample out there: render a model and a skybox to a floating-point texture, calculate the log luminance of the scene and repeatedly downsample to determine a single log-average luminance value, and then use that value in Reinhard’s tone mapping equations to scale pixel values down to the visible range (if you’re not familiar, this “standard” approach is outlined in detail here). At this point I thought I would just copy over Hable’s equations and I would have something nice…but after some ugly results I realized I needed to take a step back and rethink the process a bit. After some experimentation and a bit of reading through High Dynamic Range Imaging, I started to think of the whole process in terms of a more generalized approach:

1. Run a (simplified) light transport simulation, and determine the amount of incoming light energy for each pixel. This is done by rendering all objects in the scene, and determining the energy reflected off an object’s surface towards the camera/eye. Ideally for this step we would use radiometric units (radiance/irradiance) to represent light intensity and we would also maintain the distribution of that energy across the entire visible spectrum, but to actually make this feasible on graphics hardware we run the simulation for 3 discrete wavelengths (red, green, and blue). In my app, this step is performed by rendering a single mesh and sampling an HDR environment map to determine the amount of light reflected off the surface. For the background, the environment map is sampled directly by a skybox.
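In shader terms this step boils down to something like the following. This is just a minimal sketch assuming a mirror-like surface; the resource and function names here are illustrative, not the actual code from the app:

TextureCube EnvironmentMap;
SamplerState LinearSampler;

// Returns the HDR radiance arriving at the surface from the reflected
// view direction (RGB stands in for the full visible spectrum)
float3 IncomingRadiance(float3 normal, float3 viewDir)
{
    float3 r = reflect(-viewDir, normal);
    return EnvironmentMap.Sample(LinearSampler, r).rgb;
}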

2. Scale the incoming light to determine the amount that would hit the film/sensor/retina. This step is referred to as “calibration.” One possible way to implement this step is to model a camera, where the total amount of light that hits the film is affected by the focal length of the lens, the aperture size (f-number), and the shutter speed. Together they can be manipulated to scale the range of incoming light intensities such that the important parts of the scene are neither under-exposed nor over-exposed. In my app I kept things simple, and exposed three different methods for calibration:

  • Manual exposure: a slider lets you choose values between -10 and 10. The HDR pixel value is then scaled by 2^exposure.
  • Geometric mean of luminance: this is pretty much the exact approach outlined in Reinhard’s paper, where the geometric mean (log average) of scene luminance is calculated and used to scale the luminance of each pixel. With this approach a “key value” is user-controlled, and is meant to be chosen based on whether the scene is “high-key” (bright, low contrast) or “low-key” (dark, high contrast).
  • Geometric mean, auto key value: same as above, except that the key value is automatically chosen using Equation 10 from this page. (A sketch of all three modes follows this list.)
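Roughly, the three modes look like this in HLSL. Treat this as a sketch rather than the exact shader code from the app, and note that the auto-key formula shown is the Krawczyk et al. equation, which I believe is the “Equation 10” referenced above:

float3 Calibrate(float3 color, int mode, float exposure,
                 float keyValue, float avgLuminance)
{
    // Mode 0: manual exposure, scale the HDR value by 2^exposure
    if (mode == 0)
        return color * exp2(exposure);

    // Mode 2: pick the key value automatically from the log-average
    float key = keyValue;
    if (mode == 2)
        key = 1.03f - 2.0f / (2.0f + log10(avgLuminance + 1.0f));

    // Modes 1 and 2: Reinhard-style scaling by key / log-average
    return color * (key / avgLuminance);
}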

To calculate the geometric mean, I simply calculate the log of luminance and write the results to a 1024×1024 texture. I then call GenerateMips to automatically generate the full mip-map chain. At that point I can apply exp() to the last mip level to get a full log-average of the scene. One extra trick I added to my app was a slider that let you choose the mip level that would be sampled when scaling the pixel intensities. Doing this allows you to essentially use local averages rather than a global average, which lets you have different exposure values for different parts of the image.  In my app, there’s a display below the tone curve that shows the average luminance value being used for each part of the image.
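In shader terms the chain looks something like this (a sketch under the assumptions above: a 1024×1024 log-luminance render target, with GenerateMips called on the application side between the two passes):

// Pass 1: write log(luminance) to the 1024x1024 texture
float LogLuminance(float3 hdrColor)
{
    float lum = dot(hdrColor, float3(0.2126f, 0.7152f, 0.0722f));
    return log(max(lum, 0.00001f));  // epsilon avoids log(0)
}

// Pass 2: fetch the average from a chosen mip level and undo the log.
// Mip 10 of a 1024x1024 texture is 1x1 (the global log-average);
// lower levels give local averages over smaller screen regions.
Texture2D LogLumTexture;
SamplerState LinearSampler;

float AverageLuminance(float2 uv, float mipLevel)
{
    return exp(LogLumTexture.SampleLevel(LinearSampler, uv, mipLevel).x);
}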

3. Map calibrated light intensities to display values by applying a tone curve to either RGB values or luminance values. This curve can have a significant impact on not only which details are visible in the final image, but also the overall visual characteristics. Because of this I find it difficult to select the right curve for a particular scene…in some cases you can pretty objectively determine that one curve is better than another at making details visible, but at the same time some curves will subjectively look better to my eyes due to their resulting levels of contrast and saturation. My app offers a variety of curves to choose from, including Reinhard’s operator, Drago’s adaptive logarithmic mapping, a filmic curve, and the Uncharted 2 curve, all of which show up in the comparisons below.
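For reference, here are sketches of those curves in HLSL (I’m omitting app-specific details, so treat these as sketches rather than the exact code). Reinhard and Drago map luminance, with the final color then scaled by Ld/Lw; the filmic curves are applied directly to the RGB channels. The filmic approximation is the Hejl/Burgess-Dawson fit, and the Uncharted 2 constants are the ones from Hable’s presentation:

// Reinhard: Ld = L / (1 + L), applied to calibrated luminance
float ToneMapReinhard(float L)
{
    return L / (1.0f + L);
}

// Drago adaptive logarithmic mapping, assuming a max display luminance
// of 100 cd/m^2; Lmax is the maximum scene luminance, bias is ~0.85
float ToneMapDrago(float L, float Lmax, float bias)
{
    float exponent = log(bias) / log(0.5f);
    float scale = 1.0f / log10(Lmax + 1.0f);
    return scale * log(L + 1.0f) / log(2.0f + 8.0f * pow(L / Lmax, exponent));
}

// Filmic approximation (Hejl/Burgess-Dawson); gamma correction is baked
// into the curve, so don't apply it again afterwards
float3 ToneMapFilmic(float3 color)
{
    float3 x = max(0.0f, color - 0.004f);
    return (x * (6.2f * x + 0.5f)) / (x * (6.2f * x + 1.7f) + 0.06f);
}

// Uncharted 2 curve (Hable's constants), applied per-channel and
// normalized by the curve's value at the linear white point
float3 Uncharted2Curve(float3 x)
{
    const float A = 0.15f;  // shoulder strength
    const float B = 0.50f;  // linear strength
    const float C = 0.10f;  // linear angle
    const float D = 0.20f;  // toe strength
    const float E = 0.02f;  // toe numerator
    const float F = 0.30f;  // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float3 ToneMapUncharted2(float3 color)
{
    const float W = 11.2f;  // linear white point
    return Uncharted2Curve(color) / Uncharted2Curve(float3(W, W, W));
}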

Now for the exciting part: pictures! For this first set of shots, I used an HDR environment map taken from the Ennis House. I liked this map because it gave a great test case for detail preservation: a mostly-dark room, with an extremely bright window through which a landscape is visible. For reference, this is what the shot looks like with no exposure or tone mapping applied:

Here’s what the shot looks like for each tone mapping curve, with “auto-exposure” applied using a global geometric mean:

[Screenshots: each tone mapping curve with global auto-exposure]

Both Drago and Reinhard look pretty decent in this case, while with filmic you pretty much lose everything in the darks and in the brights. The Uncharted 2 curve doesn’t have such a strong toe so the blacks aren’t crushed, and the contrast is a bit better than in Reinhard. But you do lose the coloring in the sky with both filmic curves, since those curves are applied to the RGB channels which means color ratios aren’t preserved like they are when you tone map luminance.  However I think the sky looks rather unnatural in Drago and Reinhard, despite the colors being preserved.

For this next set, I sampled the 9th mip level, which gives you a 2×2 grid of local luminance averages. This effectively applies a higher exposure to the left portion of the image, and a lower exposure to the right portion.
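In terms of the earlier sketches, the only change is the mip level passed to the luminance fetch:

// Sampling mip 9 instead of mip 10 (for a 1024x1024 chain) gives a 2x2
// grid of local averages, so each quadrant of the screen is calibrated
// against its own average; bilinear filtering blends between them
float3 CalibrateLocal(float3 color, float2 uv, float mipLevel, float key)
{
    float avgLum = exp(LogLumTexture.SampleLevel(LinearSampler, uv, mipLevel).x);
    return color * (key / avgLum);
}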

[Screenshots: each tone mapping curve using local (2×2) luminance averages]

Using local averages works pretty well for the filmic techniques. Areas that used to be underexposed or overexposed now clearly show more detail, and overall the image has a nice level of contrast and saturation. Reinhard and Drago, on the other hand, look more washed-out than they did previously.

Here’s some other assorted screenshots I took using other environment maps, and with bloom enabled:

[Assorted screenshots from other environment maps, with bloom enabled]

Overall I like the look of the filmic curves. It might just be that I watch too many movies and I’m used to that kind of look, but I just think the image looks more natural. I’m sure plenty of people would disagree with me though, especially since Reinhard and Drago are much better at preserving details across a wide range of intensities.

If you’d like to play around with the app itself, I’ve uploaded the code, content, binaries, and VS2010 project here:

ToneMapping.part01.rar
ToneMapping.part02.rar
ToneMapping.part03.rar

Sorry about it being in 3 parts…together they total 174MB and SkyDrive has a 50MB limit per file. If you’re wondering why the app is so big, it’s because I ran the HDR environment maps through ATI’s CubeMapGen to generate some really nice mipmaps (it does proper angular extent filtering so that there are no seams in lower mip levels) and that app can only save HDR cube maps in uncompressed floating point formats. But on the upside they have really nice mips…in fact I use a low mip level for faking diffuse lighting on the mesh.


