In my HLSL pixel shader, SV_POSITION seems to have different values from any other semantic I use. I don't understand why. Can you please explain it?
For example, I am using a triangle with the following coordinates:
(0.0f, 0.5f)
(0.5f, -0.5f)
(-0.5f, -0.5f)
The z and w values are 0 and 1, respectively.
This is the shader code:
struct VS_IN
{
    float4 pos : POSITION;
};

struct PS_IN
{
    float4 pos : SV_POSITION;
    float4 k : LOLIMASEMANTIC;
};
PS_IN VS( VS_IN input )
{
    PS_IN output = (PS_IN)0;
    output.pos = input.pos;
    output.k = input.pos;
    return output;
}
float4 PS( PS_IN input ) : SV_Target
{
    // screenshot 1
    return input.pos;
    // screenshot 2 (swap with the line above)
    //return input.k;
}
technique10 Render
{
    pass P0
    {
        SetGeometryShader( 0 );
        SetVertexShader( CompileShader( vs_4_0, VS() ) );
        SetPixelShader( CompileShader( ps_4_0, PS() ) );
    }
}
When I use the first return statement (result is the first screenshot), the one that uses the SV_POSITION semantic, the result is completely unexpected: the triangle is solid yellow, whereas using any other semantic produces the expected result. Why is this?
asked Nov 25 '12 at 12:09
1 Answer
SV_Position gives you the position in screen space: not in a [0, 1] range, but in pixel coordinates. The range corresponds to the D3D11_VIEWPORT you set, possibly something along the lines of:
D3D11_VIEWPORT viewport = {0};
viewport.Width = 1280;
viewport.Height = 720;
So in order to get the colours back into a [0, 1] range, you could divide by the viewport dimensions:
return float4(input.pos.x / 1280, input.pos.y / 720, 0, 1);
answered Nov 25 '12 at 16:30