OpenGL SuperBible (5th Edition), Chapter 9 - Advanced Buffers: Beyond the Basics

By now framebuffer objects are old hat. We can use the flexibility provided by FBOs, textures, and buffer objects to really push the OpenGL pipeline. So far most of our work has been with traditional 8-bit color textures and renderbuffers. Even depth buffers mapped all values to 24 or 32 bits of physical fixed-point range. New data formats open a whole new world, allowing your application to store the actual output of the fragment shader without loss of precision. The fun doesn't stop there. OpenGL also provides many ways of accessing and updating buffers on the GPU without bringing rendering to a grinding halt.

Getting at Your Data
Most of this chapter focuses on all the new data formats and ways to use them. But before we get to that, let's build on what we learned in Chapter 8, "Buffer Objects: Storage Is Now in Your Hands," and cover a few important ways of accessing your buffers that will help you optimize performance.

Mapping Buffers
In the previous chapter, you uploaded buffer objects by calling glBufferData once to fill the buffer. But what if you need to change or update the buffer after it has been loaded on the GPU? That is what glMapBuffer and glMapBufferRange are for. When you call glMapBufferRange, OpenGL returns a pointer you can use to directly read or update the data in the buffer. All you have to do is tell the implementation what you plan to do with the data. You can map the buffer for reading only, for cases where the GPU has written to the buffer and you want to bring the results back to the CPU. Or you can map the buffer for writing, in which case your changes are reflected in the buffer stored in GPU memory. The type of mapping you choose has performance implications: Avoid mapping a buffer for writing when you only need to read from it, and likewise do not map a buffer for reading if you are only going to write to it. Table 9.1 shows the possible bitfield values for mapping buffers.

TABLE 9.1 Access Flags for glMapBufferRange

GL_MAP_READ_BIT: The returned pointer may be used to read buffer data.
GL_MAP_WRITE_BIT: The returned pointer may be used to modify buffer data.
GL_MAP_INVALIDATE_RANGE_BIT: The previous contents of the mapped range may be discarded.
GL_MAP_INVALIDATE_BUFFER_BIT: The previous contents of the entire buffer may be discarded.
GL_MAP_FLUSH_EXPLICIT_BIT: Modified ranges must be flushed explicitly with glFlushMappedBufferRange.
GL_MAP_UNSYNCHRONIZED_BIT: OpenGL does not synchronize with pending operations on the buffer before mapping it.

When you are done updating the mapped buffer, call glUnmapBuffer to tell OpenGL that you are finished.

GLbitfield accessFlags = GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT | GL_MAP_FLUSH_EXPLICIT_BIT;
GLintptr offset = 32 * 100;
GLsizeiptr length = 32 * 48;
GLvoid *bufferData = glMapBufferRange(GL_TEXTURE_BUFFER, offset, length, accessFlags);
// Update buffer here
. . .
glUnmapBuffer(GL_TEXTURE_BUFFER);

If you set GL_MAP_FLUSH_EXPLICIT_BIT, you have to tell OpenGL which portions of the buffer you updated by calling glFlushMappedBufferRange before unmapping the buffer. You can call glFlushMappedBufferRange as many times as you need, once for each range you updated:

void glFlushMappedBufferRange(GLenum target, GLintptr offset, GLsizeiptr length);

Use the same target the buffer is bound to. The offset and length parameters signal which portion was changed; they are relative to the start of the mapped range, not the start of the buffer.
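Continuing the mapping example above, a minimal sketch (the two flushed sub-ranges are hypothetical):

// Flush two updated sub-ranges; offsets are relative to the mapped range
glFlushMappedBufferRange(GL_TEXTURE_BUFFER, 0, 512);
glFlushMappedBufferRange(GL_TEXTURE_BUFFER, 1024, 256);
glUnmapBuffer(GL_TEXTURE_BUFFER);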

You can also map an entire buffer by calling glMapBuffer instead of glMapBufferRange. Note that glMapBuffer takes a simple access enum (GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE) rather than a bitfield:

GLvoid *bufferData = glMapBuffer(GL_TEXTURE_BUFFER, GL_READ_WRITE);

We use glMapBuffer and glMapBufferRange extensively in the rest of the book to load and update data on the GPU.
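For the read-back case described earlier, a minimal sketch (the target, offset, and element count are hypothetical) might look like this:

// Map a buffer the GPU has written so the CPU can read the results back
GLfloat *results = (GLfloat *)glMapBufferRange(GL_TEXTURE_BUFFER, 0,
                                               1024 * sizeof(GLfloat),
                                               GL_MAP_READ_BIT);
if (results != NULL)
{
    // ... read from results here; do not write through this pointer ...
    glUnmapBuffer(GL_TEXTURE_BUFFER);
}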

Copying Buffers
Once your data has been sent to the GPU, it is entirely possible you may want to share that data between buffers or copy the results from one buffer into another. Thankfully, OpenGL provides an easy way to do that as well. glCopyBufferSubData lets you specify which buffers are involved as well as the size and offsets to use.

glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, readStart, writeStart, size);

The buffers you are copying to and from can be any buffers bound to any of the buffer binding points listed in Table 8.1 back in Chapter 8. But since a buffer binding point can only have one buffer bound at a time, you couldn't copy between two buffers both bound to GL_TEXTURE_BUFFER, for example. The creators of OpenGL thought of this too! Remember the GL_COPY_READ_BUFFER and GL_COPY_WRITE_BUFFER binding points you first saw in Chapter 8 but haven't used for anything yet? They were added specifically for copying data from one buffer to another. You can bind your read and write buffers to these binding points without affecting any other buffer bindings. Then pick the offsets into each buffer and specify the size, as in the sketch below.
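A minimal sketch (the buffer object names srcBuffer and dstBuffer and the byte count are hypothetical):

glBindBuffer(GL_COPY_READ_BUFFER, srcBuffer);
glBindBuffer(GL_COPY_WRITE_BUFFER, dstBuffer);
// Copy 1,024 bytes from the start of srcBuffer to the start of dstBuffer
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, 0, 0, 1024);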

Be sure that the ranges you are reading from and writing to remain within the bounds of the buffers; otherwise, your copy will fail. glCopyBufferSubData can be used for many clever algorithms. One common use is for an application to create a second thread with an OpenGL context used for loading data. In that case, glCopyBufferSubData is quite handy for updating geometry data in the primary context without major interruptions to rendering.

Controlling the Destiny of Your Pixel Shaders: Mapping Fragment Outputs
In Chapter 8 you learned how to hook up multiple buffer objects to a framebuffer and render many different outputs from the same fragment shader. To do this, your shader could write to the built-in shader outputs called gl_FragData[n] instead of gl_FragColor. Although you can still compile a GLSL 1.50 shader using either of these outputs, both are deprecated. That means future versions of OpenGL will remove them, and we are better off using the "new and improved" way of writing shader color outputs.

Using built-in shader outputs is so 2006! One problem with the old way is that you can write to gl_FragData or gl_FragColor, but never both. Also, your fragment shader must contain hard-coded indexes if it renders to multiple outputs. Additionally, how are you supposed to keep track of, and make logical sense of, what is being written to gl_FragData[7] across multiple shaders?

Instead of setting the value of a built-in color output index, you can define your own shader outputs. For color outputs, declare your output as out vec4 in your fragment shader. The outputs for the Chapter 8 draw buffers sample program have been rewritten to use custom locations:

out vec4 oStraightColor;
out vec4 oGrayscale;
out vec4 oLumAdjColor;

Then before linking the program, tell OpenGL where you want to map the outputs by using glBindFragDataLocation. Just specify which index each output maps to:

glBindFragDataLocation(processProg, 0, "oStraightColor");
glBindFragDataLocation(processProg, 1, "oGrayscale");
glBindFragDataLocation(processProg, 2, "oLumAdjColor");
glLinkProgram(processProg);

You can also compile your shaders, link your program together, and then specify the locations of your outputs. Just remember to relink the program before you use it so that setting the output locations takes effect. Now your shader output is configured, and each color is written to a unique index. Remember that you can't assign an output to more than one index. The entire listing for the fragment shader of the draw buffers sample program from Chapter 8 is shown in Listing 9.1. Three color outputs are declared, and a different shading technique is used for each output.

LISTING 9.1 Fragment Shader for fbo_drawbuffers, multibuffer_frag_location.fs

#version 150
// multibuffer_frag_location.fs
// outputs to 3 buffers: normal color, grayscale, and luminance adjusted color
in vec4 vFragColor;
in vec2 vTex;
uniform sampler2D textureUnit0;
uniform int bUseTexture;
uniform samplerBuffer lumCurveSampler;
out vec4 oStraightColor;
out vec4 oGrayscale;
out vec4 oLumAdjColor;
void main(void) 
{
	vec4 vColor;
	vec4 lumFactor;
	if (bUseTexture != 0)
	vColor = texture(textureUnit0, vTex);
	else
	vColor = vFragColor;
	// Untouched output goes to first buffer
	oStraightColor = vColor;
	// Grayscale to second buffer
	float gray = dot(vColor.rgb, vec3(0.3, 0.59, 0.11));
	oGrayscale = vec4(gray, gray, gray, 1.0f);
	// clamp input color to make sure it is between 0.0 and 1.0
	vColor = clamp(vColor, 0.0f, 1.0f);
	int offset = int(vColor.r * 1024);
	oLumAdjColor.r = texelFetch(lumCurveSampler, offset ).r;
	offset = int(vColor.g * 1024);
	oLumAdjColor.g = texelFetch(lumCurveSampler, offset ).r;
	offset = int(vColor.b * 1024);
	oLumAdjColor.b = texelFetch(lumCurveSampler, offset ).r;
	oLumAdjColor.a = 1.0f;
}
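After linking, you can verify where each output landed; a minimal sketch using glGetFragDataLocation:

GLint loc = glGetFragDataLocation(processProg, "oGrayscale");
// loc is 1 if the binding above took effect,
// or -1 if the name is not an active fragment shader output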

There are many advantages to using glBindFragDataLocation. You can use logical names for outputs in shaders that actually have a meaning. You can also use the same name in multiple shaders and map that name to the appropriate logical buffer index at runtime.
We take a deeper look into how your application can use blending in Chapter 10, "Fragment Operations: The End of the Pipeline." Some blending equations in OpenGL 3.3 require a shader to output two different colors per fragment. You can use glBindFragDataLocationIndexed to do this:

glBindFragDataLocationIndexed(program, colorNumber, index, outputName);

This function behaves similarly to glBindFragDataLocation. In OpenGL 3.3 there are two possible values for the index parameter. If you choose 0, the color is used as the first input color for blending, just as if you had used glBindFragDataLocation. If you use 1, the color is used as the second input color for blending.
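As a hedged sketch of how this looks in practice (the output names oColor and oBlendFactor are hypothetical, not from the sample programs):

// In the fragment shader (GLSL):
//   out vec4 oColor;        // color number 0, index 0
//   out vec4 oBlendFactor;  // color number 0, index 1
glBindFragDataLocationIndexed(program, 0, 0, "oColor");
glBindFragDataLocationIndexed(program, 0, 1, "oBlendFactor");
glLinkProgram(program);
// The second color can then feed the blend equation, for example:
glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR);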

New Formats for a New Hardware Generation
One way OpenGL has progressed in the past few years is by adding native support for a slew of new data formats and data types. The writers of the OpenGL standard continue to bring flexibility to 3D application development: first with completely customizable sections of the graphics pipeline, next with flexible buffer usage, and now with flexible data formats.

At first such an idea might seem trivial or unimportant. But anyone who has spent time trying to express all of their color data in 8 bits can sympathize. Most data that enters the OpenGL rendering pipeline has come from some other application or tool. Vertex and texture data for most games come from artistic authoring tools such as Maya or 3DS Max. CAD programs use complex engines to generate 3D surfaces based on user input and file formats. Because vertex, texture, and related data can be large and complex, it can be nearly impossible to convert all of this data from various sources to a small set of formats. But conversion is usually unnecessary with OpenGL now that the most common and many uncommon formats are supported natively.

Floats—True Precision at Last!
One of the most useful additions is floating-point formats. Although internally the OpenGL pipeline usually works with floating-point data, the sources and targets have often been fixed-point and of significantly less precision. As a result, many portions of the pipeline used to clamp all values between 0 and 1 so they could be stored in a fixed-point format in the end. While OpenGL 3.2 still allowed you to clamp the output of fragment shaders, OpenGL 3.3 has removed clamping altogether.

The data type passed into a vertex shader is up to you but is typically declared as vec4, a vector of four floats. Similarly, you decide what outputs your vertex shader writes when you declare variables as out or varying in a vertex shader. These outputs are then rasterized across your geometry and passed into your fragment shader. You have complete control over the type of data used for color throughout the whole pipeline, although it's most common to just use floats. You also control how, and in what format, your data travels from vertex arrays all the way to the final output.

This is great! Now instead of 256 values, you can color and shade using values from 1.18 × 10^-38 all the way to 3.4 × 10^38! (Negative colors just wouldn't make sense.) But wait, if you are drawing to a window that only has 8 bits per color, what happens? Unfortunately, the output is clamped to the range of 0 to 1 and then mapped to a fixed-point value. That's no fun! Until someone invents monitors or displays that can understand and display floating-point data, you are still limited by the final output device.

But that doesn't mean floating-point rendering isn't useful. Quite the contrary! You can still render to both textures and renderbuffers in full floating-point precision. Not only that, but you have complete control over how floating-point data gets mapped to a fixed output format. This can have a huge impact on the final result and is commonly referred to as High Dynamic Range, or HDR.

Using Floating-Point Formats
Upgrading your applications to use floating-point buffers is easier than you may think. In fact, you don’t even have to call any new functions. Instead, there are two new tokens you can use when creating buffers, GL_RGBA16F and GL_RGBA32F. These can be used when creating storage for RBOs (renderbuffer objects) or when allocating textures:

glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA16F, nWidth, nHeight);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA32F, nWidth, nHeight);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, texWidth, texHeight, 0, GL_RGBA, GL_FLOAT, texels);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, texWidth, texHeight, 0, GL_RGBA, GL_FLOAT, texels);

In addition to the more traditional RGBA formats, Table 9.2 lists other formats allowed for creating floating-point renderbuffers. Textures are more open-minded and can be created with far more formats, but only two of those are float formats. Remember what we said earlier about OpenGL being flexible to allow many different applications to work easily? Having so many floating-point formats available allows applications to often use the format their data is stored in directly without first converting, which can be very time-consuming.
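The single- and two-channel float formats in Table 9.2 follow the same pattern; as a minimal sketch (dimensions and data pointer as in the examples above), a one-channel 32-bit float texture:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, texWidth, texHeight, 0, GL_RED, GL_FLOAT, texels);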

TABLE 9.2 Float Renderbuffer and Texture Formats
[Table 9.2: Floating-point renderbuffer and texture formats - image not reproduced]

HDR
Many modern game applications use floating-point rendering to generate all of the great eye candy we now expect. The level of realism possible when generating lighting effects such as light bloom, lens flare, light reflections, light refractions, crepuscular rays, and the effects of participating media such as dust or clouds is often not possible without floating-point buffers. HDR rendering to floating-point buffers can make the bright areas of a scene really bright, keep shadow areas very dark, and still allow you to see detail in both. After all, the human eye has an incredible ability to perceive very high contrast levels, well beyond the capabilities of today's displays.

Instead of drawing a complex scene with a lot of geometry and lighting in our sample programs to show how effective HDR can be, we use images already generated in HDR for simplicity. The first sample program, hdr_imaging, loads HDR (floating-point) images using a file format called OpenEXR. Industrial Light and Magic developed OpenEXR as a tool to help store all of the image data necessary for high-fidelity image processing. Think of an OpenEXR image as a composite of multiple images captured by a camera at different exposure levels. The low exposures capture detail in the bright areas of the scene, while the high exposures capture detail in the dark areas. Figure 9.1 (also shown in Color Plate 14) shows three views of a scene with a tree in the foreground and a bright field in the background. The left side is rendered at a very low exposure and shows all of the detail of the field even though it is very bright. The center image begins to show the foreground, trunk, and the leaves of the closest tree. The right image really brings out the detail of the ground in front of the tree and even lets you see inside the hollow base of the tree! The three images show the incredible amount of detail and range that can be stored in a single image. OpenEXR comes with sample images we can use to demonstrate HDR rendering.
[Figure 9.1: Three views of the same OpenEXR image at low, medium, and high exposure - image not reproduced]

The only way to store so much detail in a single image is to use floating-point data. Any scene you render in OpenGL, especially one with very bright or dark areas, can look more realistic when the true color output is preserved instead of clamped between 0.0 and 1.0 and then divided into only 256 possible values.

Using OpenEXR
Because OpenEXR is a custom data format, we can't use ordinary file access methods for reading and interpreting the data. Thankfully, Industrial Light and Magic has provided the libraries necessary to do all the heavy lifting for us. By including a few OpenEXR header files and linking against the OpenEXR lib files, we can use the already-built tools to load images. OpenEXR treats all access to EXR files as "windows" or "views" of the data contained in the file. In our application, first we create an RgbaInputFile object by passing the constructor the name of the file we want to open. Next, we get the width and height of the OpenEXR image by creating a Box2i object and filling it with the strongly typed data returned from a call to dataWindow. Then the width and height are used to create a 2D array of pixels containing RGBA data:

Array2D<Rgba> pixels;
Box2i dw = file.dataWindow();
texWidth = dw.max.x - dw.min.x + 1;
texHeight = dw.max.y - dw.min.y + 1;
pixels.resizeErase (texHeight, texWidth);
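For completeness, a minimal sketch of the OpenEXR headers and the file construction this snippet assumes (the fileName variable is hypothetical):

#include <ImfRgbaFile.h>  // Imf::RgbaInputFile
#include <ImfArray.h>     // Imf::Array2D
#include <ImathBox.h>     // Imath::Box2i
using namespace Imf;
using namespace Imath;

RgbaInputFile file(fileName);  // open the EXR image by name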

After the file is opened and we have a place to store the data, we have to tell the RgbaInputFile object where we want to put the data by calling setFrameBuffer and then read the actual data by calling readPixels:

file.setFrameBuffer (&pixels[0][0] - dw.min.x - dw.min.y * texWidth, 1, texWidth);
file.readPixels (dw.min.y, dw.max.y);

Now that we have the data, it’s time to load it into a texture. But first the data needs to be in a layout that OpenGL understands. The data must be copied to an array of floats:

GLfloat* texels = (GLfloat*)malloc(texWidth * texHeight * 3 * sizeof(GLfloat));
GLfloat* pTex = texels;
// Copy OpenEXR into local buffer for loading into a texture
for (unsigned int v = 0; v < texHeight; v++)
{
	for (unsigned int u = 0; u < texWidth; u++)
	{
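		// Flip rows: OpenEXR stores scanlines top-to-bottom,
		// while OpenGL expects the first row at the bottom of the image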
		Imf::Rgba texel = pixels[texHeight - v - 1][u];
		pTex[0] = texel.r;
		pTex[1] = texel.g;
		pTex[2] = texel.b;
		pTex += 3;
	}
}

Then, finally, load the array of floats into the designated texture object:

// Bind texture, load image, set tex state
glBindTexture(GL_TEXTURE_2D, textureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, texWidth, texHeight, 0, GL_RGB, GL_FLOAT, texels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
free(texels);

That’s it! Now the HDR image data is loaded into an OpenGL texture image and is ready for use.

Tone Mapping
Now that you've seen some of the benefits of using floating-point rendering, how do you use that data to generate a dynamic image that still has to be displayed using values from 0 to 255? Tone mapping is the action of mapping color data from one set of colors to another or from one color space to another. Because we can't directly display floating-point data, it has to be tone mapped into a color space that can be displayed.

The first sample program, hdr_imaging, uses three approaches to map the high-definition output to the low-definition screen. The first method, enabled by pressing the 1 key, is a simple and naïve direct texturing of the floating-point texture to the screen. The histogram in Figure 9.2 shows that most of the image data is between 0 and 1, but many of the important highlights are well beyond 1.0. In fact, the highest luminance level for this image is 9.16!

[Figure 9.2: Histogram of luminance values in the HDR image - image not reproduced]

The result is that the image is clamped, and all of the bright areas look white. Additionally, because the majority of the data is in the lower one-fourth of the range, or between 0 and 63 when mapped directly to 8 bits, it all blends together to look black. Figure 9.3 shows the result; the bright areas are practically white, and the dark areas are nearly black.

[Figure 9.3: Direct texturing of the HDR image; bright areas clamp to white and dark areas crush to black - image not reproduced]

The second approach in the sample program is to vary the "exposure" of the image, similar to how a camera can vary exposure to the environment. Enter this mode by pressing 2. Each exposure level provides a slightly different window into the texture data. Low exposures show the detail in the very bright sections of the scene; high exposures allow you to see detail in the dark areas but wash out the bright parts. This is similar to the images in Figure 9.1, with the low exposure on the left and the high exposure on the right. For its tone mapping pass, the hdr_imaging sample program reads from a floating-point texture and writes to a framebuffer object with an 8-bit texture attached to the first render target. This allows the conversion from HDR to LDR (Low Dynamic Range) to be done on a pixel-by-pixel basis, which reduces artifacts that occur when a texel is interpolated between bright and dark areas. Once the LDR image has been generated, it is drawn directly to the screen as a texture. Listing 9.2 shows the setup of the FBO and textures as well as the rendering pass that does the conversion.

LISTING 9.2 Rendering HDR Content to an FBO and then to the Window

// Create and bind an FBO
glGenFramebuffers(1,&fboName);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboName);
// Create the FBO texture
glGenTextures(1, fboTextures);
glBindTexture(GL_TEXTURE_2D, fboTextures[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, hdrTexturesWidth[curHDRTex],
hdrTexturesHeight[curHDRTex], 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, fboTextures[0], 0);
. . .
// Setup HDR texture(s)
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, hdrTextures);
glBindTexture(GL_TEXTURE_2D, hdrTextures[curHDRTex]);
// Load HDR image from EXR file
LoadOpenEXRImage("Tree.exr", hdrTextures[curHDRTex],
                 hdrTexturesWidth[curHDRTex], hdrTexturesHeight[curHDRTex]);
. . .
// first, draw to FBO at full FBO resolution
// Bind FBO with 8b attachment
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboName);
glViewport(0, 0, hdrTexturesWidth[curHDRTex], hdrTexturesHeight[curHDRTex]);
glClear(GL_COLOR_BUFFER_BIT);
// Bind texture with HDR image
glBindTexture(GL_TEXTURE_2D, hdrTextures[curHDRTex]);
// Render pass, down-sample to 8b using selected program
projectionMatrix.LoadMatrix(fboOrthoMatrix);
SetupHDRProg();
fboQuad.Draw();
// Then draw the resulting image to the screen, maintain image proportions
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glViewport(0, 0, screenWidth, screenHeight);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
// Attach 8b texture with HDR image
glBindTexture(GL_TEXTURE_2D, fboTextures[0]);
// draw screen sized, proportional quad with 8b texture
projectionMatrix.LoadMatrix(orthoMatrix);
SetupStraightTexProg();
screenQuad.Draw();

The code in Listing 9.2 looks similar to other sample programs we have seen before. The magic sauce is in the fragment shader that does the actual conversion. Listing 9.3 contains the source of the fragment shader that performs the conversion based on exposure. You can use the up and down keys to adjust the exposure once the program is in the variable exposure mode. The range of exposures for this program goes from 0.01 to 20.0. Notice how the level of detail in different locations in the image changes with the exposure level.
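Wiring the keys to the shader is just a matter of updating the uniform each time it changes; a minimal sketch (the program handle and variable names are hypothetical):

// Clamp the user-controlled exposure to the program's range, then upload it
if (exposure < 0.01f) exposure = 0.01f;
if (exposure > 20.0f) exposure = 20.0f;
glUseProgram(hdrExposureProg);
glUniform1f(glGetUniformLocation(hdrExposureProg, "exposure"), exposure);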

LISTING 9.3 hdr_exposure.fs Fragment Shader for HDR to LDR Conversion

#version 150
// hdr_exposure.fs
// Scale floating point texture to 0.0 - 1.0 based
// on the specified exposure
//
in vec2 vTexCoord;
uniform sampler2D textureUnit0;
uniform float exposure;
out vec4 oColor;
void main(void)
{
	// fetch from HDR texture
	vec4 vColor = texture(textureUnit0, vTexCoord);
	// Apply the exposure to this texel
	oColor = 1.0 - exp2(-vColor * exposure);
	oColor.a = 1.0f;
}
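This exposure function maps any non-negative HDR value into the displayable range: 1.0 - exp2(-c * exposure) evaluates to 0.0 for black input and approaches, but never reaches, 1.0 as the HDR value c grows; higher exposure values saturate the curve faster and brighten the image.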

The last tone mapping shader used in the first sample program performs dynamic adjustments to the exposure level based on the relative brightness of different portions of the scene. First, the shader needs to know the relative luminance of the area near the current texel being tone mapped. The shader does this by sampling a 5 × 5 matrix with the current texel in the center. All the surrounding samples are then weighted and added together. The final summed color is converted to a luminance value. The sample program uses a lookup table to convert the luminance to an exposure, and the exposure is then used to convert the HDR texel to an LDR value. Listing 9.4 shows the adaptive HDR shader.

LISTING 9.4 hdr_adaptive Fragment Shader for Adaptive Exposure Levels in HDR to LDR Conversion

#version 150
// hdr_adaptive.fs
//
//
in vec2 vTex;
uniform sampler2D textureUnit0;
uniform sampler1D textureUnit1;
uniform vec2 tc_offset[25];
out vec4 oColor;
void main(void)
{
	vec4 hdrSample[25];
	for (int i = 0; i < 25; i++)
	{
		// Perform 25 lookups around the current texel
		hdrSample[i] = texture(textureUnit0, vTex.st + tc_offset[i]);
	}
	// Calculate weighted color of region
	vec4 vColor = hdrSample[12];
	vec4 kernelcolor = (
		(1.0 * (hdrSample[0] + hdrSample[4] + hdrSample[20] + hdrSample[24])) +
		(4.0 * (hdrSample[1] + hdrSample[3] + hdrSample[5] + hdrSample[9] +
		        hdrSample[15] + hdrSample[19] + hdrSample[21] + hdrSample[23])) +
		(7.0 * (hdrSample[2] + hdrSample[10] + hdrSample[14] + hdrSample[22])) +
		(16.0 * (hdrSample[6] + hdrSample[8] + hdrSample[16] + hdrSample[18])) +
		(26.0 * (hdrSample[7] + hdrSample[11] + hdrSample[13] + hdrSample[17])) +
		(41.0 * hdrSample[12])
		) / 273.0;
	// Calculate the luminance for the whole filter kernel
	float kernelLuminance = dot(kernelcolor.rgb, vec3(0.3, 0.59, 0.11));
	// Look up the corresponding exposure
	float exposure = texture(textureUnit1, kernelLuminance / 2.0).r;
	exposure = clamp(exposure, 0.02f, 20.0f);
	// Apply the exposure to this texel
	oColor = 1.0 - exp2(-vColor * exposure);
	oColor.a = 1.0f;
}
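The weights in kernelcolor form a 5 × 5 Gaussian filter; they sum to 273 (4 × 1 + 8 × 4 + 4 × 7 + 4 × 16 + 4 × 26 + 41), which is why the sum is divided by 273 to normalize the kernel.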

When using one exposure for an image, you can adjust for the best results by taking the range of the whole image and using an average. Considerable detail is still lost with this approach in the bright and dim areas. The lookup table used with the adaptive fragment shader brings out the detail in both the bright and dim areas of the image; take a look at Figure 9.4. The lookup table uses a logarithmic-like scale to map luminance values to exposure levels. You can change this table to increase or decrease the range of exposures used and the resulting amount of detail in different dynamic ranges.

[Figure 9.4: Adaptive tone mapping brings out detail in both the bright and dim areas - image not reproduced]
