PBRT_V2 Notes <3>: SamplerRendererTask and SamplerRenderer.Li

The SamplerRendererTask class

class SamplerRendererTask : public Task {
public:
	// SamplerRendererTask Public Methods
	SamplerRendererTask(const Scene *sc, Renderer *ren, Camera *c,
		ProgressReporter &pr, Sampler *ms, Sample *sam,
		bool visIds, int tn, int tc)
		: reporter(pr)
	{
		scene = sc; renderer = ren; camera = c; mainSampler = ms;
		origSample = sam; visualizeObjectIds = visIds; taskNum = tn; taskCount = tc;
	}
	void Run();
private:
	// SamplerRendererTask Private Data
	const Scene *scene;
	const Renderer *renderer;
	Camera *camera;
	Sampler *mainSampler;
	ProgressReporter &reporter;
	Sample *origSample;
	bool visualizeObjectIds;
	int taskNum, taskCount;
};

 

Purpose of the class: it works together with SamplerRenderer and is responsible for rendering one sub-region of the image.

 

Constructor:

SamplerRendererTask(const Scene *sc, Renderer *ren, Camera *c, Sampler *ms, Sample *sam, int tn, int tc)
{
	scene = sc; renderer = ren; camera = c; mainSampler = ms;
	origSample = sam; taskNum = tn; taskCount = tc;
}

Note: taskNum and taskCount are used to determine which region of the image this task is responsible for rendering.

Note that it is also given a task number, taskNum,
and the total number of tasks launched, taskCount. From these two values, the task will
later be able to determine which part of the image it is responsible for computing.

 

 

Code details of void SamplerRendererTask::Run():

void SamplerRendererTask::Run() {
	// Get sub-_Sampler_ for _SamplerRendererTask_

	// create a single tile subsampler
	Sampler *sampler = mainSampler->GetSubSampler(taskNum, taskCount);
	if (!sampler)
	{
		reporter.Update();
		return;
	}

	// Declare local variables used for rendering loop
	MemoryArena arena;
	RNG rng(taskNum);

	// Allocate space for samples and intersections
	int maxSamples = sampler->MaximumSampleCount();

	// duplicate an array of maxSamples copies of the origSample template
	Sample *samples = origSample->Duplicate(maxSamples);

	RayDifferential *rays = new RayDifferential[maxSamples];
	Spectrum *Ls = new Spectrum[maxSamples];
	Spectrum *Ts = new Spectrum[maxSamples];
	Intersection *isects = new Intersection[maxSamples];

	// Get samples from _Sampler_ and update image
	int sampleCount;
	while ((sampleCount = sampler->GetMoreSamples(samples, rng)) > 0) {
		// Generate camera rays and compute radiance along rays
		for (int i = 0; i < sampleCount; ++i) {
			// Find camera ray for _sample[i]_

			// generate a world-space ray from the sample; for simple camera models rayWeight is 1.0f
			float rayWeight = camera->GenerateRayDifferential(samples[i], &rays[i]);
			rays[i].ScaleDifferentials(1.f / sqrtf(sampler->samplesPerPixel));

			// Evaluate radiance along camera ray
			if (visualizeObjectIds) {
				if (rayWeight > 0.f && scene->Intersect(rays[i], &isects[i])) {
					// random shading based on shape id...
					uint32_t ids[2] = { isects[i].shapeId, isects[i].primitiveId };
					uint32_t h = hash((char *)ids, sizeof(ids));
					float rgb[3] = { float(h & 0xff), float((h >> 8) & 0xff),
						float((h >> 16) & 0xff) };
					Ls[i] = Spectrum::FromRGB(rgb);
					Ls[i] /= 255.f;
				}
				else
					Ls[i] = 0.f;
			}
			else {
				if (rayWeight > 0.f)
					Ls[i] = rayWeight * renderer->Li(scene, rays[i], &samples[i], rng,
					arena, &isects[i], &Ts[i]);
				else {
					Ls[i] = 0.f;
					Ts[i] = 1.f;
				}

				// Issue warning if unexpected radiance value returned
				if (Ls[i].HasNaNs()) {
					Error("Not-a-number radiance value returned "
						"for image sample.  Setting to black.");
					Ls[i] = Spectrum(0.f);
				}
				else if (Ls[i].y() < -1e-5) {
					Error("Negative luminance value, %f, returned "
						"for image sample.  Setting to black.", Ls[i].y());
					Ls[i] = Spectrum(0.f);
				}
				else if (isinf(Ls[i].y())) {
					Error("Infinite luminance value returned "
						"for image sample.  Setting to black.");
					Ls[i] = Spectrum(0.f);
				}
			}
		}

		// Report sample results to _Sampler_, add contributions to image
		if (sampler->ReportResults(samples, rays, Ls, isects, sampleCount))
		{
			for (int i = 0; i < sampleCount; ++i)
			{
				camera->film->AddSample(samples[i], Ls[i]);
			}
		}

		// Free _MemoryArena_ memory from computing image sample values
		arena.FreeAll();
	}

	// Clean up after _SamplerRendererTask_ is done with its image region
	camera->film->UpdateDisplay(sampler->xPixelStart,
		sampler->yPixelStart, sampler->xPixelEnd + 1, sampler->yPixelEnd + 1);

	delete sampler;
	delete[] samples;
	delete[] rays;
	delete[] Ls;
	delete[] Ts;
	delete[] isects;
	reporter.Update();
}

 

a. Getting the sub-sampler

Sampler *sampler = mainSampler->GetSubSampler(taskNum, taskCount);
 

Explanation: taskNum and taskCount are used to obtain the sub-sampler. The idea of GetSubSampler() is to cut the image into taskCount rectangular tiles; the sub-sampler is then responsible only for rendering the taskNum-th tile. GetSubSampler() also computes that tile's upper-left corner (x0, y0) and lower-right corner (x1, y1) and stores them in the sub-sampler.

Here, the SamplerRendererTask uses the Sampler::GetSubSampler() method to get a new Sampler that only generates samples for the subset of the image that the SamplerRenderer is responsible for. The GetSubSampler() method uses the task number and the total number of tasks passed to the SamplerRendererTask constructor to determine which part of the image the returned subsampler should generate samples for.
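
The details of the tile decomposition live inside GetSubSampler(). As a rough illustration only (a hedged sketch, not pbrt's actual code; the function and parameter names below are made up), splitting a width x height image into taskCount roughly square tiles and selecting tile number taskNum could look like this:

// Hypothetical helper, for illustration: map (taskNum, taskCount) to the
// pixel window [x0, x1) x [y0, y1) of one tile of a width x height image.
void ComputeTileWindow(int taskNum, int taskCount, int width, int height,
	int *x0, int *x1, int *y0, int *y1)
{
	// Start with taskCount columns and 1 row, then trade columns for rows
	// while the tiles are still much taller than they are wide, so the
	// final tiles come out roughly square. nx * ny == taskCount throughout.
	int nx = taskCount, ny = 1;
	while ((nx % 2) == 0 && 2 * width * ny < height * nx) {
		nx /= 2;
		ny *= 2;
	}
	int tx = taskNum % nx, ty = taskNum / nx;  // 2D tile coordinates
	*x0 = tx * width / nx;   *x1 = (tx + 1) * width / nx;
	*y0 = ty * height / ny;  *y1 = (ty + 1) * height / ny;
}

The sub-sampler returned by GetSubSampler() stores bounds like these (the xPixelStart/xPixelEnd/yPixelStart/yPixelEnd values used by UpdateDisplay() at the end of Run()) and restricts GetMoreSamples() to that window.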

 

int maxSamples = sampler->MaximumSampleCount();

Explanation: judging from the code, MaximumSampleCount() usually just returns nPixelSamples, i.e. the number of sample points per pixel.

The Sampler::MaximumSampleCount() method
returns an upper bound on the number of samples it will return at once. Given this
bound, that number of Samples are created to store the values returned by the Sampler.

 

Sample *samples = origSample->Duplicate(maxSamples);

Explanation: origSample is used as a sample template and duplicated to obtain enough Sample objects to work with.
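
Conceptually (a sketch written from memory, so it may not match the pbrt-v2 source exactly; it assumes n1D/n2D are the per-integrator sample-request vectors and AllocateSampleMemory() sets up the per-sample arrays), Duplicate() just allocates count Samples that carry the same sample requests as the template:

// Sketch: clone the sample-request layout of this template Sample.
Sample *Sample::Duplicate(int count) const {
	Sample *ret = new Sample[count];
	for (int i = 0; i < count; ++i) {
		ret[i].n1D = n1D;               // same 1D sample requests as the template
		ret[i].n2D = n2D;               // same 2D sample requests as the template
		ret[i].AllocateSampleMemory();  // allocate the oneD/twoD storage
	}
	return ret;
}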

 

while ((sampleCount = sampler->GetMoreSamples(samples, rng)) > 0)

Explanation: GetMoreSamples() is called in the while condition; when it returns 0, all samples for this sub-region have been generated. In effect, each iteration of the loop processes one pixel: GetMoreSamples() fills the samples array with that pixel's samples, and the pixels it visits all lie inside the tile's own (x0, y0)-(x1, y1) region of the image.

Each time through the loop, Sampler::GetMoreSamples() is called to initialize the samples
array with one or more image sample values; this method returns the number of samples
it initialized, or zero when it has finished generating all of the samples for the region of the
image that it is responsible for.
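
A hedged sketch of that control flow (illustrative only; TileSampler, GenerateSampleForPixel(), xPos and yPos are made-up names, and real samplers such as StratifiedSampler organize the details differently):

// Illustrative only: one pixel's worth of samples per call, scanline order
// over the tile [xPixelStart, xPixelEnd) x [yPixelStart, yPixelEnd).
int TileSampler::GetMoreSamples(Sample *samples, RNG &rng) {
	if (yPos == yPixelEnd)
		return 0;                                  // whole tile finished
	for (int i = 0; i < samplesPerPixel; ++i)      // fill samples for pixel (xPos, yPos)
		GenerateSampleForPixel(xPos, yPos, i, &samples[i], rng);
	if (++xPos == xPixelEnd) {                     // advance to the next pixel
		xPos = xPixelStart;
		++yPos;
	}
	return samplesPerPixel;
}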

 

b. Generating rays

float rayWeight = camera->GenerateRayDifferential(samples[i], &rays[i]);
 

Explanation: Camera provides two main methods for generating rays. The first, GenerateRay(), generates a Ray from a sample point. The second, GenerateRayDifferential(), generates a RayDifferential from a sample point. The biggest difference between RayDifferential and Ray is that, in addition to the ray for the current sample (imageX, imageY), a RayDifferential also stores the rays for the samples (imageX+1, imageY) and (imageX, imageY+1).

The Camera interface provides two main methods: Camera::GenerateRay(), which returns the ray for a given image sample position, and Camera::GenerateRayDifferential(), which returns a ray differential, which incorporates information about the rays that the Camera would generate for samples that are one pixel away on the image plane in both the x and y directions. Ray differentials are used to get better results from some of the texture functions defined in Chapter 10, making it possible to compute how quickly a texture varies with respect to the pixel spacing, a key component of texture antialiasing. While the Ray class holds just the origin and direction of a single ray, RayDifferential inherits from Ray so that it has not only those member variables but also two additional Rays, rx and ry, to hold these neighbors. One important detail is that the direction vector of the generated ray must be of unit length; most of the integrators depend on this property.
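
For reference, the default Camera::GenerateRayDifferential() can be built on top of GenerateRay() by shifting the sample one pixel in x and then one pixel in y (a sketch written from memory, so the details may differ slightly from the pbrt-v2 source; in the pbrt-v2 headers the neighbors are stored as rxOrigin/rxDirection and ryOrigin/ryDirection plus a hasDifferentials flag):

float Camera::GenerateRayDifferential(const CameraSample &sample,
	RayDifferential *rd) const
{
	float wt = GenerateRay(sample, rd);   // main ray through (imageX, imageY)

	// Ray for the sample shifted one pixel in x
	CameraSample sshift = sample;
	++(sshift.imageX);
	Ray rx;
	float wtx = GenerateRay(sshift, &rx);
	rd->rxOrigin = rx.o;  rd->rxDirection = rx.d;

	// Ray for the sample shifted one pixel in y
	--(sshift.imageX);
	++(sshift.imageY);
	Ray ry;
	float wty = GenerateRay(sshift, &ry);
	rd->ryOrigin = ry.o;  rd->ryDirection = ry.d;

	if (wtx == 0.f || wty == 0.f) return 0.f;
	rd->hasDifferentials = true;
	return wt;
}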

(On the role of rayWeight:)

The camera also returns a floating-point weight associated with the ray. For simple camera
models, each ray is weighted equally, but more complex Cameras that more accurately
model the process of image formation by lens systems may generate some rays that contribute
more than others.
Such a camera model might simulate the effect of less light
arriving at the edges of the film plane than at the center, an effect called vignetting. The
returned weight will be used here to scale the ray’s contribution to the image.

 

 

 

c. Scaling the differential rays

rays[i].ScaleDifferentials(1.f / sqrtf(sampler->samplesPerPixel));

Explanation: GenerateRayDifferential() fills in a RayDifferential that carries three rays, for (imageX, imageY), (imageX+1, imageY) and (imageX, imageY+1), so the differential rays are each one pixel apart. The reason for calling ScaleDifferentials() is to make the rays stored in the RayDifferential differ by one sample spacing rather than one whole pixel: when rendering a high-quality image, the many samples inside a pixel are averaged together, so pulling all three rays inside one pixel reflects the fact that many rays contribute to that pixel's color.

The GenerateRayDifferential() method is passed a Sample and a pointer to a ray differential;
it initializes all of the fields of the RayDifferential based on the contents of
the sample to give the differential rays as if a single sample is being taken for each pixel
(i.e., with an implicit assumption that image samples are spaced one pixel apart). However,
when rendering high-quality images, many samples are often averaged together to
compute each pixel value. Therefore, the ScaleDifferentials() method scales the differential
rays to account for the actual spacing between samples on the film plane.
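
The scale factor here is 1/sqrt(samplesPerPixel): with N samples in a pixel, neighboring samples are spaced roughly 1/sqrt(N) pixels apart, so the one-pixel differentials are shrunk by that amount. ScaleDifferentials() itself just moves the differential origins and directions that fraction of the way back toward the main ray (a sketch from memory of the pbrt-v2 code):

void RayDifferential::ScaleDifferentials(float s) {
	// Shrink the offsets of the auxiliary rays toward the main ray (o, d) by s.
	rxOrigin = o + (rxOrigin - o) * s;
	ryOrigin = o + (ryOrigin - o) * s;
	rxDirection = d + (rxDirection - d) * s;
	ryDirection = d + (ryDirection - d) * s;
}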

 

 

 

d. Adaptive sampling

if (sampler->ReportResults(samples, rays, Ls, isects, sampleCount))

Explanation: ReportResults() here is mainly for adaptive sampling. The sample array, the rays, the radiance values, and the intersections are passed back into the sampler, which uses this data to decide whether to discard the current batch and generate new samples in its place, and whether the current results should be written to the Film as part of the rendered image.

After the radiance carried by the rays is known, the image can be updated. Before this
happens, the Sampler::ReportResults() method is used to pass the radiance values and
information about the intersections found back to the Sampler; this gives the sampler a
chance to incorporate information from the results of these samples into the samples it
generates later. (For example, it could generate extra samples in pixels that have a lot of
detail.) This method returns true if this group of samples should be added to the image,
or false if it should be discarded. For some adaptive sampling algorithms, the sampler
may want to discard the initial set of samples and generate new ones in their stead.

Samplers may implement the ReportResults() method; it allows the renderer to report back to the sampler which rays were generated, what radiance values were computed, and the intersection points found for a collection of samples originally from GetMoreSamples(). The sampler may use this information for adaptive sampling algorithms, deciding to take more samples close to the ones that were returned here.

The return value indicates whether or not the sample values should be added to the image being generated. For some adaptive sampling algorithms, the sampler may want to cause an initial collection of samples (and their results) to be discarded, generating a new set to replace them completely. Because most of the Samplers in this chapter do not implement adaptive sampling, a default implementation of this method just returns true.
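
Since most samplers are non-adaptive, that default simply accepts every batch; only an adaptive sampler overrides it with real logic. A minimal sketch of the default, using the same signature as the call in Run():

bool Sampler::ReportResults(Sample *samples, const RayDifferential *rays,
	const Spectrum *Ls, const Intersection *isects, int count)
{
	return true;  // non-adaptive samplers keep every batch of samples
}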

 

 

Code details of SamplerRenderer::Li():

Spectrum SamplerRenderer::Li(const Scene *scene,
	const RayDifferential &ray, const Sample *sample, RNG &rng,
	MemoryArena &arena, Intersection *isect, Spectrum *T) const {

	Assert(ray.time == sample->time);
	Assert(!ray.HasNaNs());
	// Allocate local variables for _isect_ and _T_ if needed
	Spectrum localT;
	if (!T) T = &localT;
	Intersection localIsect;
	if (!isect) isect = &localIsect;

	Spectrum Li = 0.f;
	if (scene->Intersect(ray, isect))
		Li = surfaceIntegrator->Li(scene, this, ray, *isect, sample,
		rng, arena);
	else {
		// Handle ray that doesn't intersect any geometry
		for (uint32_t i = 0; i < scene->lights.size(); ++i)
			Li += scene->lights[i]->Le(ray);
	}

	Spectrum Lvi = volumeIntegrator->Li(scene, this, ray, sample, rng,
		T, arena);

	return *T * Li + Lvi;
}

a. Scene-ray intersection

Scene::Intersect()

Explanation: first test whether the given ray intersects any object in the Scene.

The Scene::Intersect() method finds the first surface that the ray intersects by passing
the request on to an accelerator Primitive. The accelerator performs ray-primitive
intersection tests with the geometric Primitives that the ray potentially intersects, using
each shape’s Shape::Intersect() routine. If an intersection is found, this method
returns true and returns a filled-in Intersection object.
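
In the source this is essentially a one-line forward to the acceleration structure (a sketch, ignoring the statistics/profiling code around it; aggregate is the Scene's accelerator Primitive):

bool Scene::Intersect(const Ray &ray, Intersection *isect) const {
	// Delegate the query to the aggregate acceleration structure (e.g. a BVH or kd-tree).
	return aggregate->Intersect(ray, isect);
}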

 

b. Rendering

SurfaceIntegrator::Li() 和 VolumeIntegrator::Li()

Explanation: the result is T*Li + Lv; what we mainly care about are the implementations of SurfaceIntegrator::Li() and VolumeIntegrator::Li().

The SamplerRenderer::Li()
method then calls SurfaceIntegrator::Li() to compute the outgoing radiance Lo from
the first surface that the ray intersects and then stores the result in Li. It then invokes
VolumeIntegrator::Transmittance() to compute the fraction of light T that is
extinguished between the surface and the camera due to participating media. Finally,
VolumeIntegrator::Li() determines the radiance Lv added along the ray due to interactions
with participating media. The net effect of these interactions is T Li + Lv; this
calculation is explained further in Section 16.1.
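
In the code listed above this composite is exactly the final statement, return *T * Li + Lvi;, where T is the transmittance written back through the pointer argument of volumeIntegrator->Li(), Li is the surface radiance, and Lvi is the radiance added by participating media.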

 

c. When the ray hits nothing

for (uint32_t i = 0; i < scene->lights.size(); ++i)
Li += scene->lights[i]->Le(ray);

Explanation: if the ray does not intersect any object in the Scene, we still loop over the light sources, because certain special lights contribute radiance to rays even though the rays cannot hit any geometry belonging to them.

Rays that don’t hit any geometry may still carry radiance—certain types of light sources
may not be associated with any geometry but can still contribute radiance to rays that do
not hit anything.
For example, the Earth’s sky illuminates points on the Earth’s surface
with blue light, even though there isn’t geometry associated with the sky per se. Therefore,
for rays that do not hit anything, each light’s Light::Le() method is called so that
these particular lights can contribute to the ray’s radiance. Most light sources will not
contribute in this way, but in certain cases this is very useful. See Section 12.5 for an
example of such a light.
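
The base-class implementation contributes nothing; only environment-style lights such as the InfiniteAreaLight of Section 12.5 override Le() to look up radiance from the ray's direction. A minimal sketch of that default:

Spectrum Light::Le(const RayDifferential &ray) const {
	// By default a light adds no radiance to rays that escape the scene.
	return Spectrum(0.f);
}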

 

So the next things to look into are:

SurfaceIntegrator::Li() 和 VolumeIntegrator::Li()
