D3D batching: [Direct3D] Implementing a Renderer with Batched Rendering and Hardware T&L, and the D3DPipeline

Q:

“Batch, Batch, Batch:”
What Does It Really Mean?

A:
• Every DrawIndexedPrimitive() is a batch
– Submits n number of triangles to GPU
– Same render state applies to all tris in batch
– SetState calls prior to Draw are part of batch

You get X batches per frame,
X mainly depends on CPU spec.

How Many Triangles Per Batch?
• Up to you!
– Anything between 1 to 10,000+ tris possible
• If small number, either
– Triangles are large or extremely expensive
– Only GPU vertex engines are idle
• Or
– Game is CPU bound, but don’t care because
you budgeted your CPU ahead of time, right?
– GPU idle (available for upping visual quality)
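To make the slide's "X batches per frame" point concrete, here is a minimal budget sketch based on a commonly cited rule of thumb from this talk (roughly 25,000 batches per second per 1 GHz of CPU spent entirely on submission); the constant, CPU fraction and frame rate below are illustrative assumptions, not figures from this article.

#include <cstdio>

// Rough per-frame batch budget. kBatchesPerSecPerGHz is the rule-of-thumb
// constant; cpuFraction is the share of CPU time budgeted for draw submission.
unsigned BatchBudgetPerFrame(float cpuGHz, float cpuFraction, float fps)
{
    const float kBatchesPerSecPerGHz = 25000.0f;
    return (unsigned)(kBatchesPerSecPerGHz * cpuGHz * cpuFraction / fps);
}

int main()
{
    // Example: 3 GHz CPU, 10% of CPU time on submission, 30 fps -> about 250 batches/frame.
    std::printf("%u batches/frame\n", BatchBudgetPerFrame(3.0f, 0.1f, 30.0f));
    return 0;
}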

The reference material follows:

For batching specifics, see the batch-related parts of the code below.

[Direct3D] Implementing a Renderer with Batched Rendering and Hardware T&L, and the D3DPipeline



Some trade-offs were weighed over whether D3DRender should expose vertex-buffer operations to the pipeline. For now the decision is to render with IDirect3DDevice9::DrawPrimitiveUP, because it is easier to write, its cost is a single vertex copy per call, and the pipeline does not have to manage buffer usage.
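For contrast with the DrawPrimitiveUP path chosen here, below is a minimal sketch of the deferred vertex-buffer route, assuming a dynamic write-only buffer and the VertexXYZ_N layout used later in the code; the function name and error handling are illustrative, not part of D3DRender.

#include <d3d9.h>
#include <vector>
#include <cstring>

struct VertexXYZ_N { float x, y, z, normal_x, normal_y, normal_z; }; // assumed layout
const DWORD kFVF = D3DFVF_XYZ | D3DFVF_NORMAL;

// Hypothetical alternative to DrawPrimitiveUP: copy the vertices into a dynamic
// vertex buffer, then draw from it.
HRESULT DrawWithVertexBuffer(IDirect3DDevice9* device,
                             const std::vector<VertexXYZ_N>& listVertex)
{
    if (listVertex.empty())
        return S_OK;

    const UINT byteSize = (UINT)(listVertex.size() * sizeof(VertexXYZ_N));

    IDirect3DVertexBuffer9* vb = NULL;
    HRESULT hr = device->CreateVertexBuffer(byteSize,
                                            D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                                            kFVF, D3DPOOL_DEFAULT, &vb, NULL);
    if (FAILED(hr))
        return hr;

    // The same one-copy cost attributed to DrawPrimitiveUP above, but the
    // buffer itself can be kept and reused across frames.
    void* data = NULL;
    hr = vb->Lock(0, byteSize, &data, D3DLOCK_DISCARD);
    if (SUCCEEDED(hr))
    {
        std::memcpy(data, &listVertex.front(), byteSize);
        vb->Unlock();

        device->SetStreamSource(0, vb, 0, sizeof(VertexXYZ_N));
        device->SetFVF(kFVF);
        hr = device->DrawPrimitive(D3DPT_TRIANGLELIST, 0,
                                   (UINT)(listVertex.size() / 3));
    }

    vb->Release(); // a real renderer would keep the buffer instead
    return hr;
}

A production version would keep the buffer alive across frames and refill it with D3DLOCK_DISCARD rather than creating and releasing it per draw, which is where this path starts to pay off over DrawPrimitiveUP.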

D3DPipeline is not complete. It involves the list of static scene elements handed over by the scene manager; these elements need to be sorted into sub-containers ahead of time so that render-state changes and vertex-buffer writes are kept to a minimum. The sub-containers are maintained by the scene manager, which calls Render::DrawPrimitive at the appropriate time to render them.
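As a rough illustration of that sorting step, the sketch below buckets static elements by a material key before submission; StaticElement, materialId and BuildBuckets are hypothetical names, not the scene manager's actual interface.

#include <map>
#include <vector>

struct VertexXYZ_N { float x, y, z, normal_x, normal_y, normal_z; }; // assumed layout

struct StaticElement
{
    int materialId;                    // assumed sort key: one material per element
    std::vector<VertexXYZ_N> vertices; // triangle-list vertices for this element
};

// Bucket elements by material so everything sharing a render state is drawn
// back to back, keeping SetMaterial/SetRenderState calls to a minimum.
typedef std::map<int, std::vector<const StaticElement*> > StateBuckets;

StateBuckets BuildBuckets(const std::vector<StaticElement>& elements)
{
    StateBuckets buckets;
    for (size_t i = 0; i < elements.size(); ++i)
        buckets[elements[i].materialId].push_back(&elements[i]);
    return buckets;
}

At draw time the scene manager would walk one bucket at a time, set the shared state once, and hand each element's vertex list to Render::DrawPrimitive.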

Most los-lib structures are memory-compatible with their D3DX counterparts, which keeps the interfaces independent without hurting performance. For example, los::blaze::Material is layout-compatible with D3DMATERIAL9. The light definitions do differ: los-lib uses a separate type for each kind of light, while D3DLIGHT9 folds everything into one structure. Since light objects normally do not change across render states, converting between the two light representations does not hurt efficiency; such a conversion usually happens only once per frame.
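One way to make that layout compatibility explicit is a compile-time size check next to the cast. The los::blaze::Material definition below is an assumed stand-in mirroring D3DMATERIAL9 for illustration; the real declaration lives in los-lib.

#include <d3d9.h>

namespace los { namespace blaze {
struct Material
{
    D3DCOLORVALUE diffuse;   // assumed field order matching D3DMATERIAL9
    D3DCOLORVALUE ambient;
    D3DCOLORVALUE specular;
    D3DCOLORVALUE emissive;
    float         power;
};
} }

// C++03-friendly compile-time assertion: fails to compile if the sizes diverge.
typedef char AssertMaterialLayout[
    sizeof(los::blaze::Material) == sizeof(D3DMATERIAL9) ? 1 : -1];

inline void ApplyMaterial(IDirect3DDevice9* device, const los::blaze::Material& m)
{
    // Safe only because the layouts match, which the typedef above enforces.
    device->SetMaterial(reinterpret_cast<const D3DMATERIAL9*>(&m));
}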

Another easy mistake concerns how a geometry's normal list is indexed: normals store an independent value for each entry of the index list rather than being addressed through the vertex list's indices, so trying to look normals up by vertex index will give unexpected results.
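A minimal sketch of the distinction, reusing the listVertex/listIndex/listNormal naming from the pipeline code below; the concrete types here are illustrative.

#include <vector>

struct Vector3 { float x, y, z; };

struct Geometry
{
    std::vector<Vector3>      listVertex;  // unique positions
    std::vector<unsigned int> listIndex;   // 3 entries per triangle
    std::vector<Vector3>      listNormal;  // parallel to listIndex, NOT listVertex
};

void LookupFirstTriangle(const Geometry& geom)
{
    // Positions are fetched through the index list...
    const Vector3& v0 = geom.listVertex[geom.listIndex[0]];

    // ...but normals are fetched by the index-list position itself.
    const Vector3& n0 = geom.listNormal[0];                      // correct
    // const Vector3& bad = geom.listNormal[geom.listIndex[0]];  // wrong: unexpected results

    (void)v0; (void)n0; // silence unused-variable warnings in this sketch
}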

D3DRender:

    virtual int DrawPrimitive(const std::vector<VertexXYZ_N>& listVertex
        , const Matrix& matWorld, const Matrix& matView, const Matrix& matProj
        , const Material& material)
    {
        if (listVertex.empty())
            return 0;   // nothing to draw; also avoids front() on an empty vector

        ptrDevice->SetTransform(D3DTS_WORLD, (CONST D3DMATRIX*)&matWorld);
        ptrDevice->SetTransform(D3DTS_VIEW, (CONST D3DMATRIX*)&matView);
        ptrDevice->SetTransform(D3DTS_PROJECTION, (CONST D3DMATRIX*)&matProj);

        ptrDevice->SetFVF(D3DFVF_XYZ | D3DFVF_NORMAL);
        ptrDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
        ptrDevice->SetMaterial((CONST D3DMATERIAL9*)&material);

        // Split the triangle list into batches no larger than the device's
        // MaxPrimitiveCount cap.
        uint nPrim      = (uint)listVertex.size() / 3;
        uint nBatch     = nPrim / _D3DCaps.MaxPrimitiveCount;
        uint nByteBatch = _D3DCaps.MaxPrimitiveCount * (uint)sizeof(VertexXYZ_N) * 3;

        // nByteBatch is a byte offset, so step through the vertex data with a
        // byte pointer rather than the typed vertex pointer.
        const BYTE* pVertexData = (const BYTE*)&listVertex.front();

        for (uint idx = 0; idx < nBatch; ++idx)
            ptrDevice->DrawPrimitiveUP(D3DPT_TRIANGLELIST
                , _D3DCaps.MaxPrimitiveCount
                , pVertexData + idx * nByteBatch
                , (uint)sizeof(VertexXYZ_N));

        // Draw whatever remains after the full batches.
        uint nRemainder = nPrim % _D3DCaps.MaxPrimitiveCount;
        if (nRemainder > 0)
            ptrDevice->DrawPrimitiveUP(D3DPT_TRIANGLELIST, nRemainder
                , pVertexData + nBatch * nByteBatch
                , (uint)sizeof(VertexXYZ_N));

        return 0;
    }

    virtual int SetLights(const Lights& lights)
    {
        // Global ambient term.
        ptrDevice->SetRenderState(D3DRS_AMBIENT
            , (lights.globalLight.GetColor()
            * lights.globalLight.GetIntensity()).ToColor());

        uint idxLight = 0;

        // Point lights.
        for (size_t idx = 0; idx < lights.listPointLight.size(); ++idx)
        {
            const PointLight& refLight = lights.listPointLight[idx];
            D3DLIGHT9 lght;
            ::memset(&lght, 0, sizeof(D3DLIGHT9));
            lght.Type = D3DLIGHT_POINT;
            lght.Range = refLight.GetDistance();
            lght.Attenuation1 = 1.0f;

            Vector3 vPos = refLight.GetPosition();
            lght.Position.x = vPos.x;
            lght.Position.y = vPos.y;
            lght.Position.z = vPos.z;

            lght.Diffuse = lght.Specular
                = *(D3DCOLORVALUE*)&(refLight.GetColor() * refLight.GetIntensity());

            ptrDevice->SetLight(idxLight, &lght);
            ptrDevice->LightEnable(idxLight++, true);
        }

        // Directional (parallel) lights.
        for (size_t idx = 0; idx < lights.listParallelLight.size(); ++idx)
        {
            const ParallelLight& refLight = lights.listParallelLight[idx];
            D3DLIGHT9 lght;
            ::memset(&lght, 0, sizeof(D3DLIGHT9));
            lght.Type = D3DLIGHT_DIRECTIONAL;

            Vector3 vDir = refLight.GetDirection();
            lght.Direction.x = vDir.x;
            lght.Direction.y = vDir.y;
            lght.Direction.z = vDir.z;

            lght.Diffuse = lght.Specular
                = *(D3DCOLORVALUE*)&(refLight.GetColor() * refLight.GetIntensity());

            ptrDevice->SetLight(idxLight, &lght);
            ptrDevice->LightEnable(idxLight++, true);
        }

        // Spot lights.
        for (size_t idx = 0; idx < lights.listSpotLight.size(); ++idx)
        {
            const SpotLight& refLight = lights.listSpotLight[idx];
            D3DLIGHT9 lght;
            ::memset(&lght, 0, sizeof(D3DLIGHT9));
            lght.Type = D3DLIGHT_SPOT;
            lght.Range = refLight.GetDistance();
            lght.Attenuation1 = 1.0f;
            lght.Falloff = 1.0f;
            lght.Theta = refLight.GetHotspot().ToRadian();
            lght.Phi = refLight.GetFalloff().ToRadian();

            Vector3 vDir = refLight.GetDirection();
            lght.Direction.x = vDir.x;
            lght.Direction.y = vDir.y;
            lght.Direction.z = vDir.z;

            Vector3 vPos = refLight.GetPosition();
            lght.Position.x = vPos.x;
            lght.Position.y = vPos.y;
            lght.Position.z = vPos.z;

            lght.Diffuse = lght.Specular
                = *(D3DCOLORVALUE*)&(refLight.GetColor() * refLight.GetIntensity());

            ptrDevice->SetLight(idxLight, &lght);
            ptrDevice->LightEnable(idxLight++, true);
        }

        return 0;
    }

D3DPipeline:

    virtual int ProcessingObject(const Object3D& object)
    {
        ++_DebugInfo.dynamic_object_counter;

        const Model& refModel = object.GetModel();
        const Vector3& pos = object.GetPosition();

        // World transform: local transform, orientation, axis, then translation.
        Matrix mat = object.GetTransform()
            * object.GetOrientation().ObjectToInertial() * object.GetAxis()
            * Matrix().BuildTranslation(pos.x, pos.y, pos.z);

        for (size_t gidx = 0; gidx < refModel.listGeometry.size(); ++gidx)
        {
            const Geometry& refGeom = refModel.listGeometry[gidx];
            const Material& refMat = refModel.listMaterial[refGeom.indexMaterial];

            // Triangle triangle;
            // triangle.bitmap = (DeviceBitmap*)&refModel.listDeviceBitmap[refGeom.indexDeviceBitmap];

            // Expand the indexed geometry into a flat triangle list for DrawPrimitiveUP.
            std::vector<VertexXYZ_N> listVertex;
            listVertex.reserve(refGeom.listIndex.size());

            for (size_t iidx = 0; iidx < refGeom.listIndex.size(); iidx += 3)
            {
                const Vector3& vertex0 = refGeom.listVertex[refGeom.listIndex[iidx]];
                const Vector3& vertex1 = refGeom.listVertex[refGeom.listIndex[iidx + 1]];
                const Vector3& vertex2 = refGeom.listVertex[refGeom.listIndex[iidx + 2]];

                // Normals are stored per index entry, so they are looked up by
                // iidx rather than by the vertex index (see the note above).
                Vector3 normal0 = refGeom.listNormal[iidx];
                Vector3 normal1 = refGeom.listNormal[iidx + 1];
                Vector3 normal2 = refGeom.listNormal[iidx + 2];

                listVertex.push_back(VertexXYZ_N());
                VertexXYZ_N& refV0 = listVertex.back();
                refV0.x = vertex0.x;
                refV0.y = vertex0.y;
                refV0.z = vertex0.z;
                refV0.normal_x = normal0.x;
                refV0.normal_y = normal0.y;
                refV0.normal_z = normal0.z;

                listVertex.push_back(VertexXYZ_N());
                VertexXYZ_N& refV1 = listVertex.back();
                refV1.x = vertex1.x;
                refV1.y = vertex1.y;
                refV1.z = vertex1.z;
                refV1.normal_x = normal1.x;
                refV1.normal_y = normal1.y;
                refV1.normal_z = normal1.z;

                listVertex.push_back(VertexXYZ_N());
                VertexXYZ_N& refV2 = listVertex.back();
                refV2.x = vertex2.x;
                refV2.y = vertex2.y;
                refV2.z = vertex2.z;
                refV2.normal_x = normal2.x;
                refV2.normal_y = normal2.y;
                refV2.normal_z = normal2.z;

                ++_DebugInfo.polygon_counter;
            }

            _PtrRender->DrawPrimitive(listVertex, mat, _ViewMatrix, _PerspectiveMatrix, refMat);
        }

        return 0;
    }
};
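To show how these pieces might be driven each frame, here is a hedged glue-code sketch; it leans on the article's own types (D3DRender, D3DPipeline, Lights, Object3D), so it is not standalone-compilable, and the function name, clear color and the placement of SetLights are assumptions rather than part of the framework described above.

// Hypothetical per-frame driver: clear, set the fixed-function lights once, then
// let the pipeline turn each object into one or more DrawPrimitiveUP batches.
void RenderFrame(IDirect3DDevice9* device, D3DRender& render, D3DPipeline& pipeline,
                 const Lights& lights, const std::vector<Object3D>& objects)
{
    device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                  D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    device->BeginScene();

    render.SetLights(lights);

    for (size_t i = 0; i < objects.size(); ++i)
        pipeline.ProcessingObject(objects[i]);

    device->EndScene();
    device->Present(NULL, NULL, NULL, NULL);
}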

Additional reference material: the overview section of a book on the Direct3D graphics pipeline.

This book describes the Direct3D graphics pipeline, from presentation of scene data to pixels appearing on the screen. The book is organized sequentially, following the data flow through the pipeline from the application to the image displayed on the monitor. Each major section of the pipeline is treated by a part of the book, with chapters and subsections detailing each discrete stage of the pipeline. This section summarizes the contents of the book.

Part I begins with a review of basic concepts used in 3D computer graphics and their representations in Direct3D. The IDirect3D9 interface is introduced and device selection is described. The IDirect3DDevice9 interface is introduced and an overview of device methods and internal state is given. Finally, a basic framework is given for a 2D application.

Chapter 1 begins with an overview of the entire book. A review is given of display technology and the important concept of gamma correction. The representation of color in Direct3D and the macros for manipulating color values are described. The relevant mathematics of vectors, geometry and matrices are reviewed and summarized. A summary of COM (the Component Object Model) and the IUnknown interface is given. Finally, the coding style conventions followed in this book are presented along with some useful C++ coding techniques.

Chapter 2 describes the Direct3D object. Every application instantiates this object to select a device from those available. Available devices advertise their location in the Win32 virtual desktop and their capabilities to applications through the Direct3D object. Selecting a device from those available and examining a device's capabilities are described. Multiple monitor considerations are also discussed.

Chapter 3 describes the Direct3D device object, which provides access to the rendering pipeline. The device is the interface an application will use most often, and it has a large amount of internal state that controls every stage of the rendering pipeline. This chapter provides a high-level overview of the device and its associated internal state. Detailed discussion of the device state appears throughout the rest of the book.

Chapter 4 describes the basic architecture of a typical Direct3D application. Every 3D application can use 2D operations for manipulating frame buffer contents directly. An application can run in full-screen or windowed modes, and the differences are presented here. The handling of Windows messages and a basic display processing loop are presented. At times it may be convenient to use GDI in a Direct3D application window, and a method for mixing these two Windows subsystems is presented. Almost every full-screen application will want to use the cursor management provided by the device. Device color palettes and methods for gamma correction are presented.

Part II describes the geometry processing portion of the graphics pipeline. The application delivers scene data to the pipeline in the form of geometric primitives. The pipeline processes the geometric primitives through a series of stages that results in pixels displayed on the monitor. This part describes the start of the pipeline, where the processing of geometry takes place.

Chapter 5 describes how to construct a scene representing the digital world that is imaged by the imaginary camera of the device. A scene consists of a collection of models drawn in sequence. Models are composed of a collection of graphic primitives. Graphic primitives are composed from streams of vertex and index data defining the shape and appearance of objects in the scene. Vertices and indices are stored in resources created through the device.

Chapter 6 covers vertex transformations, vertex blending and user-defined clipping planes. With transformations, primitives can be positioned relative to each other in space. Vertex blending, also called "skinning", allows for smooth mesh interpolation. User-defined clipping planes can be used to provide cut-away views of primitives.

Chapter 7 covers viewing with a virtual camera and projection onto the viewing plane, which is displayed as pixels on the monitor. After modeling, objects are positioned relative to a camera. Objects are then projected from 3D camera space into the viewing plane for conversion into 2D screen pixels.

Chapter 8 describes the lighting of geometric primitives. The lighting model is introduced and the supported shading algorithms and light types are described.

Chapter 9 covers programmable vertex shading. Programmable vertex shaders can process the vertex data streams with custom code, producing a single vertex that is used for rasterization. The vertex shading machine architecture and instruction set are presented.

Part III covers the rasterization portion of the pipeline, where geometry is converted to a series of pixels for display on the monitor. Geometric primitives are lit based on the lighting of their environment and their material properties. After light has been applied to a primitive, it is scan converted into pixels for processing into the frame buffer. Textures can be used to provide detailed surface appearance without extensive geometric modeling. Pixel shaders can be used to provide custom per-pixel appearance processing instead of the fixed-function pixel processing provided by the stock pipeline. Finally, the pixels generated from the scan conversion process are incorporated into the render target surface by the frame buffer.

Chapter 10 describes the scanline conversion of primitives into pixel fragments. Lighting and shading are used to process vertex positions and their associated data into a series of pixel fragments to be processed by the frame buffer.

Chapter 11 describes textures and volumes. Textures provide many efficient per-pixel effects and can be used in a variety of manners. Volumes extend texture images to three dimensions and can be used for volumetric per-pixel rendering effects.

Chapter 13 describes programmable pixel shaders. Programmable pixel shaders combine texture map information and interpolated vertex information to produce a source pixel fragment. The pixel shading machine architecture and instruction set are presented.

Chapter 14 describes how fragments are processed into the frame buffer. After pixel shading, fragments are processed by the fog, alpha test, depth test, stencil test, alpha blending, dither, and color channel mask stages of the pipeline before being incorporated into the render target. A render target is presented for display on the monitor and video scan out.

Part IV covers the D3DX utility library. D3DX provides an implementation of common operations used by Direct3D client programs. The code in D3DX consists entirely of client code and no system components. An application is free to reimplement the operations provided by D3DX, if necessary.

Chapter 15 introduces D3DX and summarizes features not described elsewhere.

Chapter 16 describes the abstract data types provided by D3DX. D3DX provides support for RGBA color, point, vector, plane, quaternion, and matrix data types.

Chapter 17 describes the helper COM objects provided by D3DX. D3DX provides a matrix stack object to assist in rendering frame hierarchies, a font object to assist in the rendering of text, a sprite object to assist in the rendering of 2D images, an object to assist in rendering to a surface or an environment map, and objects for the rendering of special effects.

Chapter 19 describes the mesh objects provided by D3DX. The mesh objects provided by D3DX encompass rendering of indexed triangle lists as well as progressive meshes, mesh simplification and skinned meshes.

Chapter 21 describes the X file format with the file extension .x. The X file format provides for extensible hierarchical storage of data objects with object instancing.

Part V covers application-level considerations. This part of the book describes issues that are important to applications but aren't strictly part of the graphics pipeline. Debugging strategies for applications are presented. Almost all Direct3D applications will be concerned with performance; API-related performance issues are discussed here. Finally, installation and deployment issues for Direct3D applications are discussed.

Chapter 22 describes debugging strategies for Direct3D applications. This includes using the debug run-time for DirectX 9.0c, techniques for debugging full-screen applications and remote debugging.

Chapter 23 covers application performance considerations. All real devices have limitations that affect performance. A general consideration of how the pipeline state affects performance is given.

Chapter 24 covers application installation and setup. Appendix A provides a guided tour of the DirectX SDK materials.
