Ray Tracing in One Weekend (Part 5)

Previously, the ray–sphere hit test was defined directly in the source file where color is computed, and there was no sphere class. Now we turn objects into classes and encapsulate the hit function behind an interface that we call when needed. Since the scene currently contains only spheres, we first create an abstract base class with nothing but a pure virtual hit function, then derive a sphere class and a hitable_list class from it. The latter can be viewed as combining all of the objects in the scene into a single object.
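The headers below include "ray.h" and "vector.h" from earlier parts of this series, which are not reproduced here. As a reference, this is a minimal sketch of the interface the chapter's code assumes — the member names match the usage below, but the implementation details are my assumptions, not the series' actual files:

```cpp
#include <cmath>
#include <cassert>

// minimal vec3 sketch: only the members this chapter's code uses
class vec3 {
public:
    vec3() {}
    vec3(float e0, float e1, float e2) { e[0] = e0; e[1] = e1; e[2] = e2; }
    float x() const { return e[0]; }
    float y() const { return e[1]; }
    float z() const { return e[2]; }
    float operator[](int i) const { return e[i]; }
    float length() const { return std::sqrt(e[0]*e[0] + e[1]*e[1] + e[2]*e[2]); }
    float e[3];
};

inline vec3 operator+(const vec3 &a, const vec3 &b) { return vec3(a.e[0]+b.e[0], a.e[1]+b.e[1], a.e[2]+b.e[2]); }
inline vec3 operator-(const vec3 &a, const vec3 &b) { return vec3(a.e[0]-b.e[0], a.e[1]-b.e[1], a.e[2]-b.e[2]); }
inline vec3 operator*(float t, const vec3 &v) { return vec3(t*v.e[0], t*v.e[1], t*v.e[2]); }
inline vec3 operator/(const vec3 &v, float t) { return vec3(v.e[0]/t, v.e[1]/t, v.e[2]/t); }
inline float dot(const vec3 &a, const vec3 &b) { return a.e[0]*b.e[0] + a.e[1]*b.e[1] + a.e[2]*b.e[2]; }
inline vec3 unit_vector(const vec3 &v) { return v / v.length(); }

// minimal ray sketch: a ray is the parametric line p(t) = A + t*B
class ray {
public:
    ray() {}
    ray(const vec3 &a, const vec3 &b) : A(a), B(b) {}
    vec3 origin() const { return A; }
    vec3 direction() const { return B; }
    vec3 point_at_parameter(float t) const { return A + t*B; }
    vec3 A, B;
};
```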

hitable.h

#ifndef HITABLEH
#define HITABLEH

#include"ray.h"

struct hit_record // plain data, so a struct rather than a class
{
	float t;
	vec3 p;
	vec3 normal;
};

class hitable // abstract base class
{
public:
	// pure virtual function
	virtual bool hit(const ray& r, float t_min, float t_max, hit_record& rec) const = 0;
};
#endif // HITABLEH

sphere.h


#ifndef SPHERE
#define SPHERE

#include"hitable.h"

class sphere : public hitable
{
public:
	sphere() {}
	sphere(vec3 cen, float r) : center(cen), radius(r) {}
	virtual bool hit(const ray& r, float t_min, float t_max, hit_record& rec) const;
	vec3 center;
	float radius;
};

bool sphere::hit(const ray& r, float t_min, float t_max, hit_record& rec) const
{
	vec3 oc = r.origin() - center;
	float a = dot(r.direction(), r.direction());
	float b = dot(oc, r.direction()); // note: this is half of the usual quadratic b
	float c = dot(oc, oc) - radius*radius;
	float discriminant = b*b - a*c; // with the halved b, the factor of 4 cancels out
	if (discriminant > 0)
	{
		// if the smaller root is in range, fill in hit_record and return true
		float temp = (-b - sqrt(discriminant)) / a;
		if (temp < t_max && temp > t_min)
		{
			rec.t = temp;
			rec.p = r.point_at_parameter(rec.t);
			rec.normal = (rec.p - center) / radius;
			return true;
		}
		// if the smaller root is out of range, try the larger one
		temp = (-b + sqrt(discriminant)) / a;
		if (temp < t_max && temp > t_min)
		{
			rec.t = temp;
			rec.p = r.point_at_parameter(rec.t);
			rec.normal = (rec.p - center) / radius;
			return true;
		}
	}
	// neither root is in range
	return false;
}
#endif
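The sphere intersection code drops the usual factors of 2 and 4 from the quadratic formula: because b is defined as dot(oc, direction) rather than twice that, the constants cancel. With $B = 2b$ as the standard coefficient:

```latex
% ray-sphere intersection: a t^2 + B t + c = 0, with B = 2b
t = \frac{-B \pm \sqrt{B^2 - 4ac}}{2a}
  = \frac{-2b \pm \sqrt{4b^2 - 4ac}}{2a}
  = \frac{-b \pm \sqrt{b^2 - ac}}{a}
```

which is exactly the expression the code evaluates, with discriminant $b^2 - ac$.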

hitable_list.h

#ifndef HITABLELIST
#define HITABLELIST

#include"hitable.h"

// hitable_list treats a collection of (possibly mutually occluding) objects as one object
class hitable_list : public hitable
{
public:
	hitable_list() {}
	hitable_list(hitable **l, int n) { list = l; list_size = n; }
	virtual bool hit(const ray& r, float t_min, float t_max, hit_record& rec) const;
	hitable **list; // essentially a dynamic array of pointers to hitable objects
	int list_size;
};

// the same closest-hit idea described in Fundamentals of Computer Graphics
bool hitable_list::hit(const ray& r, float t_min, float t_max, hit_record& rec) const
{
	hit_record temp_rec;
	bool hit_anything = false;
	float closest_so_far = t_max;
	for (int i = 0; i < list_size; i++) // iterate over every object in the list
	{
		// test whether the ray hits the current object
		// the objects in the list need not be sorted front to back; that doesn't matter,
		// because a hit only counts when t falls inside the current range,
		// and every hit shrinks that range, so the final t is always the smallest one
		if (list[i]->hit(r, t_min, closest_so_far, temp_rec))
		{
			hit_anything = true;
			// on a hit, shrink the upper bound of t to this intersection's t
			closest_so_far = temp_rec.t;
			rec = temp_rec; // record the closest hit so far
		}
	}
	return hit_anything;
}
#endif // !HITABLELIST
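The shrinking-range logic can be checked in isolation. The sketch below is a standalone toy, not the book's code: it replaces spheres with mock objects that each hit at one fixed t, and shows that the loop structure of hitable_list::hit returns the smallest in-range t regardless of list order:

```cpp
#include <cassert>

struct hit_record { float t; };

// toy stand-in for hitable: always "hits" at a fixed parameter value
struct fixed_hit {
    float t_hit;
    bool hit(float t_min, float t_max, hit_record &rec) const {
        if (t_hit > t_min && t_hit < t_max) { rec.t = t_hit; return true; }
        return false;
    }
};

// same loop structure as hitable_list::hit
bool closest_hit(const fixed_hit *list, int n, float t_min, float t_max, hit_record &rec) {
    hit_record temp_rec;
    bool hit_anything = false;
    float closest_so_far = t_max;
    for (int i = 0; i < n; i++) {
        if (list[i].hit(t_min, closest_so_far, temp_rec)) {
            hit_anything = true;
            closest_so_far = temp_rec.t; // shrink the valid range
            rec = temp_rec;
        }
    }
    return hit_anything;
}
```

The mock objects are deliberately out of order; the nearest one (smallest t) still wins because later, farther hits fall outside the shrunken range.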

RayTracer.cpp

#include"vector.h"
#include"ray.h"
#include"sphere.h"
#include"hitable_list.h"
#include<cfloat>
#include<math.h>
#include<iostream>
#include<fstream>

using namespace std;

// here, world is all of the scene's objects treated as one hitable (a hitable_list)
vec3 color(const ray& r, hitable *world)
{
	hit_record rec;
	if (world->hit(r, 0.0, FLT_MAX, rec))
	{
		// as before, map the surface normal at the hit point to an RGB color
		return 0.5*vec3(rec.normal.x() + 1, rec.normal.y() + 1, rec.normal.z() + 1);
	}
	vec3 unit_direction = unit_vector(r.direction()); // unit direction vector, so y lies in [-1, 1]
	float t = 0.5*(unit_direction.y() + 1.0); // remap y to t in [0, 1]
	// linear interpolation: each ray's t picks an RGB color between
	// white (1.0, 1.0, 1.0) and sky blue (0.5, 0.7, 1.0);
	// each RGB component is a value between 0.0 and 1.0
	return (1.0 - t)*vec3(1.0, 1.0, 1.0) + t*vec3(0.5, 0.7, 1.0);
}
int main()
{
	int nx = 200; // 200 columns
	int ny = 100; // 100 rows
	ofstream out("d:\\theFirstPpm.txt");
	out << "P3\n" << nx << " " << ny << "\n255" << endl;
	vec3 lower_left_corner(-2.0, -1.0, -1.0); // lower-left corner of the image plane in the camera frame
	vec3 horizontal(4.0, 0.0, 0.0); // horizontal extent of the image plane in the camera frame
	vec3 vertical(0.0, 2.0, 0.0); // vertical extent of the image plane in the camera frame
	vec3 origin(0.0, 0.0, 0.0);
	hitable *list[2]; // we decide what the world contains; here, two spheres
	list[0] = new sphere(vec3(0, 0, -1), 0.5);
	list[1] = new sphere(vec3(0, -100.5, -1), 100);
	hitable *world = new hitable_list(list, 2); // initialize the world
	for (int j = ny - 1; j >= 0; j--) // rows from top to bottom
	{
		for (int i = 0; i < nx; i++) // columns from left to right
		{
			float u = float(i) / float(nx); // relative horizontal position of the current pixel
			float v = float(j) / float(ny); // relative vertical position
			// build the viewing ray; the direction is the pixel's position in the
			// camera frame, measured from the lower-left corner
			ray r(origin, lower_left_corner + u*horizontal + v*vertical);
			vec3 col = color(r, world);
			int ir = int(255.99*col[0]);
			int ig = int(255.99*col[1]);
			int ib = int(255.99*col[2]);
			out << ir << " " << ig << " " << ib << endl;
		}
	}
	return 0;
}
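The background gradient in color() is a plain linear interpolation, and it can be sanity-checked in isolation. This is a standalone sketch, not part of the renderer:

```cpp
#include <cassert>

// blend(t): the same lerp color() uses for the background,
// (1 - t)*white + t*sky_blue, component by component
void blend(float t, float out[3]) {
    const float white[3] = {1.0f, 1.0f, 1.0f};
    const float blue[3]  = {0.5f, 0.7f, 1.0f};
    for (int i = 0; i < 3; i++)
        out[i] = (1.0f - t)*white[i] + t*blue[i];
}
```

At t = 0 (rays pointing down) the result is pure white, at t = 1 (rays pointing up) pure sky blue, and intermediate t values give the smooth vertical gradient visible behind the spheres.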

Final output image: (image not preserved in this copy)