Kajiya-Kay hair

https://github.com/ak-TechArtist/AnisotropicHair anisotropic hair
https://www.jianshu.com/p/7dc980ea4c51
http://www.artisticexperiments.com/cg-shaders/cg-shaders-kajiya-kay shader at the end
https://blog.csdn.net/noahzuo/article/details/51162472
https://blog.csdn.net/yangxuan0261/article/details/89027809
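the links above all build on the same Kajiya-Kay strand lighting model, so as a quick reference, a minimal sketch of its diffuse and specular terms follows, written in Python for readability (the linked implementations are shader code). the coefficients kd, ks and the exponent p are conventional names and placeholder values, not taken from any of the links.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kajiya_kay(T, L, V, kd=1.0, ks=0.5, p=32.0):
    """Kajiya-Kay strand shading.
    T: unit tangent of the hair strand, L: unit direction to the light,
    V: unit direction to the eye.  Returns (diffuse, specular) scalars."""
    t_dot_l = dot(T, L)
    t_dot_v = dot(T, V)
    sin_tl = math.sqrt(max(0.0, 1.0 - t_dot_l * t_dot_l))  # sin of angle between T and L
    sin_tv = math.sqrt(max(0.0, 1.0 - t_dot_v * t_dot_v))  # sin of angle between T and V
    diffuse = kd * sin_tl  # strongest when the light hits the strand broadside
    # specular peaks when the view direction lies on the mirror cone around the tangent
    specular = ks * max(0.0, t_dot_l * t_dot_v + sin_tl * sin_tv) ** p
    return diffuse, specular

# example: strand along x, light from +y, camera along +z
# kajiya_kay((1, 0, 0), (0, 1, 0), (0, 0, 1)) -> (1.0, 0.5)
```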

abstract
we present a method for rendering scenes with fine detail via an object called a texel, a rendering primitive inspired by volume densities mixed with anisotropic lighting models. this technique solves a long-standing problem in image synthesis: the rendering of furry surfaces.

introduction
rendering scenes with very high complexity and a wide range of detail has long been an important goal for image synthesis. one idea is to introduce a hierarchy of scale, and at each level of scale have a corresponding level of detail in a hierarchy of geometric models (Crow 1982). thus very complex small objects may have a hierarchy of progressively simplified geometric representations.

however, for very fine detail, a significant problem has so far prevented the inclusion of furry surfaces into synthetic images. the conventional approach gives rise to a severe, intractable aliasing problem. we feel that this aliasing problem arises because geometry is used to define surfaces at an inappropriate scale. an alternative approach is to treat fine geometry as texture rather than geometry. we explore that approach here.

this paper presents a new type of texture map, called a texel, inspired by the volume density (Blinn 1982). a texel is a 3-dimensional texture map in which both a surface frame (normal, tangent, and binormal) and the parameters of a lighting model are distributed freely throughout a volume. a texel is not tied to the geometry of any particular surface. indeed, it is intended to represent a highly complex collection of surfaces contained within a defined volume. because of this, the rendering time of a texel is independent of the geometric complexity of the surfaces that it represents. in fact, with texels, one can dispense with the usual notion of geometric surface models altogether. that is, it is possible to render texels directly, forgoing references to any defined surface geometry.
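to make the description above concrete, here is a minimal sketch of what a texel's storage could look like; the field layout and the parameter count k are our assumptions for illustration, not the paper's data structure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Texel:
    """3-d texture as described above: every cell of the volume carries a
    scalar density, a local frame, and lighting-model parameters.
    The layout here is illustrative, not taken from the paper."""
    density: np.ndarray   # shape (nx, ny, nz): how much "surface" occupies each cell
    frame: np.ndarray     # shape (nx, ny, nz, 3, 3): normal, tangent, binormal per cell
    params: np.ndarray    # shape (nx, ny, nz, k): e.g. diffuse/specular coefficients

    def sample(self, i, j, k):
        """Nearest-cell lookup of (density, frame, lighting parameters)."""
        return self.density[i, j, k], self.frame[i, j, k], self.params[i, j, k]
```

note that the storage, and hence the per-sample lookup cost, depends only on the grid resolution, which is why rendering time is independent of how many surfaces were baked into the volume.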

we will use the idea of texels to represent fuzzy surfaces and present an algorithm for rendering such surfaces.

review of high complexity rendering

many attempts to model scenes with very high complexity have been made. one method is to attack the problem by brute-force computation. a very early effort by Csuri et al. (1979) generated images of smoke and fur with thousands of polygons. more recently, Weil (1989) rendered cloth with thousands of Lambert cylinders. unfortunately, at a fairly large scale, microscopic geometric surfaces give rise to severe aliasing artifacts that overload traditional antialiasing methods. these images tend to look brittle: that is, hairs tend to look like spines.

the brute force method fails because the desired detail should be rendered through textures and lighting models rather than geometry. what is desired is the painter’s illusion: a suggestion that there is detail in the scene far beyond the resolution of the image. when one examines a painting closely, the painter’s illusion falls apart: zooming in on a finely detailed object in a painting reveals only meaningless blotches of color.

the most successful efforts to render high-complexity scenes are those based on particle systems (Reeves 1983, Reeves and Blau 1985). we believe their success is due in part to the fact that particle systems embody the idea of rendering without geometry. along the path of the particle system, a lighting model and a frame are used to render pixels directly rather than through a notion of detailed microgeometry. in some sense, this paper represents the extension of particle systems to ray tracing. as the reader will readily discern, even though our rendering algorithm is radically different, particle systems and texels are complementary, e.g. particle systems could be used to generate texel models. indeed, the method in this paper can be modified to render particle systems in a manner that is independent of the number of particles rendered.

Gavin Miller (Miller 1988) advanced a solution that uses a combination of geometry and a sophisticated lighting model, much in the spirit of this paper, to make images of furry animals. however, like particle systems, the complexity of the geometric part of his algorithm is dependent on the number of hairs.

the idea of texels is inspired by Blinn’s idea for rendering volume densities (Blinn 1982). Blinn presented an algorithm to calculate the appearance of a large collection of microscopic spherical particles uniformly distributed in a plane. this enabled him to synthesize images of clouds and dust and the rings of Saturn. because Blinn was interested in directionally homogeneous atmospheres, he analytically integrated his equations to yield a simple lighting model.

in Kajiya and Von Herzen (1984), Blinn’s equations were solved for nonhomogeneous media by direct computation. it was essentially a volume rendering technique for ray tracing. because our work is based on that earlier effort, we now briefly discuss the relevant equations from Kajiya and Von Herzen (1984).

as a beam of light travels through a volume of spherical particles, it is scattered and attenuated. the attenuation depends on the local density of the volume along the ray. the scattering depends on the density of the particles scattering the light and on the albedo of each particle. the amount of scattering varies with direction, because each particle partially occludes scattering in certain directions. this scattered light is then attenuated and rescattered by other particles.
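written out, the dependence this paragraph describes takes the following form; the symbols below are our own shorthand, consistent with Kajiya and Von Herzen (1984) but not quoted from it. the fraction of a beam surviving the stretch of a ray between points $x(a)$ and $x(b)$ is

$$
T(a, b) = \exp\!\Big( -\tau \int_a^b \rho\big(x(s)\big)\, ds \Big),
$$

and the light scattered out of the beam per unit length at $x(t)$, toward a direction making angle $\theta$ with the beam, is proportional to $\omega\,\rho\big(x(t)\big)\,p(\theta)$, where $\rho$ is the particle density, $\tau$ converts density into attenuation, $\omega$ is the particle albedo, and $p$ is the phase function.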

this model ignores diffraction around scattering particles.
in ray tracing, we follow light rays from the eye backwards toward the light sources (figure 1). the progressive attenuation along the way due to occluding particles is computed for each point along a ray emanating from the eye. at each point on the ray through the volume, we measure the amount of light that scatters into the direction towards the eye. this light is then integrated to yield the total light reaching the eye. in this work we use Blinn’s low albedo single scattering approximation. that is, we assume that any contribution from multiple scattering is negligible. we assume that the light is scattered just once from the light source to the eye. the accuracy of this assumption is relatively good for low albedo particles and suffers as the albedo increases (Blinn 1982, Rushmeier and Torrance 1987).
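a minimal sketch of this single-scattering integration, assuming the attenuation and scattering terms above and ignoring multiple scattering exactly as the paragraph states; the callables density, light_transmittance and phase stand in for the texel lookup and lighting model and are our own names, not the paper's.

```python
import math

def single_scatter(density, light_transmittance, phase,
                   ray_points, dt, tau=1.0, albedo=0.3):
    """March along an eye ray and accumulate singly-scattered light.

    density(p)             -> particle density at point p
    light_transmittance(p) -> attenuation from the light source to p
    phase(p)               -> fraction of light scattered from the light
                              direction toward the eye at p
    ray_points             -> points sampled along the eye ray, spaced dt apart
    """
    radiance = 0.0
    optical_depth = 0.0
    for p in ray_points:
        rho = density(p)
        eye_transmittance = math.exp(-tau * optical_depth)  # attenuation back to the eye
        # light arriving from the source at p, scattered once toward the eye
        scattered = albedo * rho * phase(p) * light_transmittance(p)
        radiance += eye_transmittance * scattered * dt
        optical_depth += rho * dt  # occlusion accumulated for the next sample
    return radiance
```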
