[Computer Graphics] Exercise Session: Illumination, Shading and Texture

If this article helps you, likes and bookmarks are welcome~

Tongji University CS100433 Computer Graphics Assignment 3

1 What are the differences between the local illumination model and the global illumination model?

Global illumination model

Simulates not only direct illumination but also indirect illumination, taking into account light reflected
from other surfaces.

Can handle:
• Reflection (one object mirrored in another)
• Refraction (Snell's law)
• Shadows
• Color bleeding

Computationally expensive and slow

Local illumination model

Considers only the light sources and the properties of the surface being shaded, ignoring light reflected from other objects

Can approximate global illumination with techniques such as:
• Environmental Mapping
• Ambient occlusion
• Image based lighting

Fast and suitable for real-time rendering

Not as accurate as global illumination

2 What is the purpose of material attributes?

Material attributes are a key component that defines how a surface interacts with light. They provide the specific parameters that determine how an object appears when it is rendered.

For example, the shading parameters include ambient, diffuse, and specular coefficients (plus a shininess exponent), each of which controls a different way in which light affects the surface, as illustrated by the sketch below.
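As a minimal sketch of what such attributes can look like in code (illustrative only, with hypothetical names, using GLM as the project does; this is not the assignment's actual code), each field scales one term of the classic lighting model:

```cpp
#include <glm/glm.hpp>

// Hypothetical Phong-style material: each attribute scales one lighting term.
struct Material {
    glm::vec3 ambient;   // fraction of ambient light reflected by the surface
    glm::vec3 diffuse;   // base surface color under direct illumination
    glm::vec3 specular;  // color/strength of the specular highlight
    float     shininess; // Phong exponent controlling highlight tightness
};

// Combine the three classic terms; diffuseFactor (N·L) and specFactor
// (the specular term discussed in question 3) come from the geometry.
glm::vec3 shade(const Material& m, const glm::vec3& lightColor,
                float diffuseFactor, float specFactor) {
    glm::vec3 ambient  = m.ambient  * lightColor;
    glm::vec3 diffuse  = m.diffuse  * lightColor * diffuseFactor;
    glm::vec3 specular = m.specular * lightColor * specFactor;
    return ambient + diffuse + specular;
}
```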

3 Can you show that Blinn-Phong is an approximation of the Phong reflection model?

The Blinn-Phong shading model is indeed an approximation of the Phong reflection model.

  • Both models aim to simulate the specular highlight seen on shiny objects. The key difference lies in how they compute this highlight.
  • In the Phong model, the specular term depends on the angle between the reflection vector R (the light direction mirrored about the normal) and the view vector V; as this angle increases, the highlight diminishes. However, computing R requires extra steps at every shading point.
  • In the Blinn-Phong model, a similar effect is achieved by looking at the angle between the half-vector H (halfway between the light and view directions) and the normal N. As the angle between H and N increases, the specular highlight likewise diminishes.
  • Blinn-Phong is considered an approximation of the Phong model because it simplifies the calculation while achieving a similar visual effect. It tends to be computationally more efficient.

Drawing the R–V and H–N configurations helps visualize these relationships: in both diagrams, a larger angle corresponds to a weaker specular highlight.

[Figure: the vectors R, V, H, and N in the Phong and Blinn-Phong models]
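A small GLM-based sketch (illustrative, not the project's shader code) computes the two specular terms side by side; N, L, and V are assumed to be normalized directions from the shading point toward the normal, the light, and the viewer respectively:

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Phong: reflect the light direction about the normal, then compare R with V.
float phongSpecular(glm::vec3 N, glm::vec3 L, glm::vec3 V, float shininess) {
    glm::vec3 R = glm::reflect(-L, N);                // extra reflection step
    float cosAlpha = glm::max(glm::dot(R, V), 0.0f);  // angle between R and V
    return std::pow(cosAlpha, shininess);
}

// Blinn-Phong: use the half-vector H between L and V and compare it with N.
float blinnPhongSpecular(glm::vec3 N, glm::vec3 L, glm::vec3 V, float shininess) {
    glm::vec3 H = glm::normalize(L + V);              // cheaper than reflect()
    float cosTheta = glm::max(glm::dot(N, H), 0.0f);  // angle between N and H
    return std::pow(cosTheta, shininess);
}
```

Because the angle between N and H is roughly half the angle between R and V, the Blinn-Phong exponent is usually chosen larger (around four times the Phong exponent) to produce a highlight of similar size.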

4 Why does Phong shading produce better results than Gouraud shading?

Gouraud Shading (Vertex-Based Shading):

  • In Gouraud shading, the colors are computed at the vertices of a polygon, primarily using the vertex normals for lighting calculations.
  • These colors are then linearly interpolated across the surface of the polygon during rasterization.
  • While Gouraud shading is computationally efficient, it has limitations:
    • It can fail to accurately represent specular highlights, especially if these highlights don’t fall directly on a vertex.
    • The linear interpolation can lead to color banding and inaccuracies in the depiction of curved surfaces.

Phong Shading (Fragment-Based Shading):

  • Phong shading, on the other hand, interpolates the normal vectors across the surface of the polygon during rasterization.
  • Lighting calculations are then performed per pixel (or fragment), using these interpolated normals.
  • This approach has several advantages:
    • It results in smoother and more accurate shading, especially for specular highlights. The highlights are more realistically distributed across the surface, irrespective of the polygon’s vertices.
    • Phong shading better captures the nuances of curved surfaces, leading to a more realistic representation of the way light interacts with the material.
[Figure: Gouraud shading vs. Phong shading of a specular highlight]
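The difference can be seen in a small CPU-side sketch (hypothetical names, assuming barycentric weights `bary` for a point inside a triangle and a toy one-light shading function):

```cpp
#include <glm/glm.hpp>
#include <cmath>

struct Vertex { glm::vec3 position; glm::vec3 normal; };

// Toy lighting model: one directional light, Lambert diffuse + Blinn specular.
glm::vec3 shadeAt(const glm::vec3& n) {
    const glm::vec3 lightDir = glm::normalize(glm::vec3(0.5f, 1.0f, 0.3f));
    const glm::vec3 viewDir  = glm::vec3(0.0f, 0.0f, 1.0f);
    float diff = glm::max(glm::dot(n, lightDir), 0.0f);
    glm::vec3 h = glm::normalize(lightDir + viewDir);
    float spec = std::pow(glm::max(glm::dot(n, h), 0.0f), 64.0f);
    return glm::vec3(0.8f) * diff + glm::vec3(1.0f) * spec;
}

// Gouraud: shade once per vertex, then interpolate the *colors*.
// A highlight that peaks inside the triangle is simply averaged away.
glm::vec3 gouraud(const Vertex v[3], glm::vec3 bary) {
    return bary.x * shadeAt(v[0].normal)
         + bary.y * shadeAt(v[1].normal)
         + bary.z * shadeAt(v[2].normal);
}

// Phong: interpolate the *normal*, then run the lighting model per fragment,
// so the highlight is recovered wherever it actually falls on the surface.
glm::vec3 phong(const Vertex v[3], glm::vec3 bary) {
    glm::vec3 n = glm::normalize(bary.x * v[0].normal +
                                 bary.y * v[1].normal +
                                 bary.z * v[2].normal);
    return shadeAt(n);
}
```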

5 What information can be stored in a Texture?

  1. Color Information: It involves storing the color of each point on a surface. The texture map contains an image, and the colors from this image are applied to the surface of the 3D object.
  2. Surface Material Properties: Textures can define the properties of a surface material, such as roughness, shininess, or transparency. This allows for more realistic rendering of different types of materials like metal, glass, fabric, etc.
  3. Bump Mapping: This technique uses textures to simulate small-scale bumpiness on the surface of an object. It does this by manipulating the surface normals during the shading process, giving the illusion of depth and detail without actually changing the shape of the surface.
  4. Displacement Mapping: Similar to bump mapping, displacement mapping actually modifies the geometry of the surface based on the texture. This allows for much more detailed and realistic representations of complex surfaces.
  5. Normal Maps: These are a type of texture used to add lighting detail to a surface. Normal maps store normals – vectors perpendicular to the surface – that are used in the lighting calculations to create the illusion of a more complex surface (see the sketch after this list).
  6. Specular Maps: These textures define the specular reflectivity of a surface. They are used to control how shiny or reflective a surface appears.
  7. Ambient Occlusion Maps: These are used to store information about how exposed each part of a surface is to ambient lighting. This helps to add depth and realism by simulating the way light behaves in small, enclosed spaces.
  8. Reflection and Environment Maps: These are used to simulate reflective and refractive surfaces by storing information about the environment that surrounds the object.
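As a small illustration of items 1 and 5 (a hypothetical CPU-side sketch, not how any particular engine stores textures): a color map is used directly as the diffuse color, while a normal map stores a direction whose channels must be remapped from [0, 1] back to [-1, 1]:

```cpp
#include <glm/glm.hpp>
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical CPU-side RGB8 texture with a nearest-neighbour fetch.
struct Texture {
    int width = 0, height = 0;
    std::vector<std::uint8_t> rgb;   // width * height * 3 bytes

    glm::vec3 sample(glm::vec2 uv) const {
        int x = std::clamp(int(uv.x * width),  0, width  - 1);
        int y = std::clamp(int(uv.y * height), 0, height - 1);
        const std::uint8_t* p = &rgb[(y * width + x) * 3];
        return glm::vec3(p[0], p[1], p[2]) / 255.0f;   // each channel in [0, 1]
    }
};

// Color map: the fetched value is used directly as the diffuse color.
glm::vec3 fetchDiffuse(const Texture& colorMap, glm::vec2 uv) {
    return colorMap.sample(uv);
}

// Normal map: the fetched value encodes a direction, so remap to [-1, 1].
glm::vec3 fetchNormal(const Texture& normalMap, glm::vec2 uv) {
    return glm::normalize(normalMap.sample(uv) * 2.0f - 1.0f);
}
```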

6 Why do texture coordinates require perspective correction?

Texture coordinates (usually UV coordinates) are defined on the surface of a 3D model. When these coordinates are projected onto the 2D screen along with the object’s geometry, linear interpolation directly in screen space would ignore depth information. This means that the texture would be mapped incorrectly, especially in areas where the surface of the object is at an angle or far from the camera, causing the texture mapping to appear distorted or stretched.

[Figure: texture distortion when UVs are interpolated without perspective correction]

To address this issue, perspective correction is introduced. Perspective correction ensures the correct mapping of textures by combining the texture coordinates with depth information. In practice the coordinates are interpolated in homogeneous form: u/w, v/w, and 1/w are interpolated linearly in screen space, and the result is divided by the interpolated 1/w to recover (u, v). This accounts for the non-linear way texture coordinates vary with depth and yields an accurate texture mapping in screen space, as sketched below.
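A minimal sketch of the two interpolation schemes inside a hypothetical software rasterizer (names are illustrative; `bary` are screen-space barycentric weights and `w` is the clip-space w of each projected vertex):

```cpp
#include <glm/glm.hpp>

// Per-vertex data after projection (hypothetical layout for this sketch).
struct ProjectedVertex {
    float     w;    // clip-space w of the vertex
    glm::vec2 uv;   // texture coordinate assigned on the model
};

// Naive screen-space (affine) interpolation: ignores depth, so textures
// appear distorted or "swim" on surfaces viewed at an angle.
glm::vec2 affineUV(const ProjectedVertex v[3], glm::vec3 bary) {
    return bary.x * v[0].uv + bary.y * v[1].uv + bary.z * v[2].uv;
}

// Perspective-correct interpolation: u/w, v/w and 1/w are linear in screen
// space, so interpolate those and divide by the interpolated 1/w at the end.
glm::vec2 perspectiveCorrectUV(const ProjectedVertex v[3], glm::vec3 bary) {
    glm::vec2 uvOverW = bary.x * (v[0].uv / v[0].w)
                      + bary.y * (v[1].uv / v[1].w)
                      + bary.z * (v[2].uv / v[2].w);
    float invW = bary.x / v[0].w + bary.y / v[1].w + bary.z / v[2].w;
    return uvOverW / invW;
}
```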

7 What causes aliasing in textures?

  1. Texture Magnification at Close Range: When a surface is close to the camera, a single texel is projected onto many pixels. The texture's details are over-magnified, so a surface that should appear smooth instead looks coarse and blocky.
  2. Texture Minification at Far Range: Conversely, when a surface is far from the camera, many texels map to a single pixel, yet that pixel's color may be determined by just one of them. Slight changes in the camera's position then cause drastic changes in pixel values as different texels land on the same pixel; this undersampling produces the shimmering and moiré patterns typical of texture aliasing.

8 What are the rendering equation and the reflection equation?

Rendering equation

The rendering equation is a comprehensive formula that describes how light is transported around a scene. It’s an integral equation that determines the color and brightness of a pixel in a rendered image. The equation is as follows:
$$L_o(\mathbf{p},\omega_o)=L_e(\mathbf{p},\omega_o)+\int_{\mathcal{H}^2}f_r(\mathbf{p},\omega_i\to\omega_o)\,L_i(\mathbf{p},\omega_i)\cos\theta\,\mathrm{d}\omega_i$$

Reflection equation

The reflection equation is a simplified form of the rendering equation in which the emitted-light term $L_e(\mathbf{p},\omega_o)$ is omitted. It is typically used for surfaces that do not emit light themselves:
$$L_o(\mathbf{p},\omega_o)=\int_{\mathcal{H}^2}f_r(\mathbf{p},\omega_i\to\omega_o)\,L_i(\mathbf{p},\omega_i)\cos\theta\,\mathrm{d}\omega_i$$
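As a rough sketch of how the reflection equation can be evaluated in practice (a Monte Carlo estimator with uniform hemisphere sampling for a purely Lambertian BRDF; all names are illustrative, and the incoming radiance `Li` is passed in as a callback rather than computed here):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>
#include <cmath>
#include <functional>
#include <random>

// Draw a direction uniformly on the hemisphere around normal n (pdf = 1/2π).
glm::vec3 sampleHemisphere(const glm::vec3& n, std::mt19937& rng) {
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    float z   = u01(rng);                                  // cosθ
    float phi = 2.0f * glm::pi<float>() * u01(rng);
    float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
    glm::vec3 local(r * std::cos(phi), r * std::sin(phi), z);
    // Build an orthonormal basis around n and rotate the sample into it.
    glm::vec3 up = std::abs(n.z) < 0.999f ? glm::vec3(0, 0, 1) : glm::vec3(1, 0, 0);
    glm::vec3 t  = glm::normalize(glm::cross(up, n));
    glm::vec3 b  = glm::cross(n, t);
    return local.x * t + local.y * b + local.z * n;
}

// Estimate Lo(p, wo) = ∫ fr · Li · cosθ dωi for a Lambertian surface
// (fr = albedo/π, independent of wo), averaging uniform hemisphere samples.
glm::vec3 estimateLo(const glm::vec3& p, const glm::vec3& n, const glm::vec3& albedo,
                     const std::function<glm::vec3(const glm::vec3&, const glm::vec3&)>& Li,
                     int numSamples, std::mt19937& rng) {
    const glm::vec3 fr  = albedo / glm::pi<float>();       // Lambertian BRDF
    const float     pdf = 1.0f / (2.0f * glm::pi<float>());
    glm::vec3 sum(0.0f);
    for (int i = 0; i < numSamples; ++i) {
        glm::vec3 wi = sampleHemisphere(n, rng);
        float cosTheta = glm::max(glm::dot(n, wi), 0.0f);
        sum += fr * Li(p, wi) * cosTheta / pdf;            // one-sample estimate
    }
    return sum / float(numSamples);                        // Monte Carlo average
}
```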

Project link:

This OpenGL-based 3D rendering project showcases multiple light sources, material handling, and a skybox. It uses GLFW, GLAD, and GLM to create windows, process user input, and perform the math. At its core, it applies OpenGL rendering techniques to display different types of light sources (directional lights, point lights, and spotlights) and material effects (such as diffuse and specular maps), while the skybox adds depth and immersion to the 3D scene. Users can control the camera with the keyboard and mouse and observe objects with different materials lit by the various light sources, surrounded by the skybox environment.

Stars are welcome:
https://github.com/Zhu-Shatong/OpenGL-CG-TextureAndLights

https://www.bilibili.com/video/BV1iG411S7TG/
