lyo blog 3D article collection

The lyo blog has closed, so here is a backup of its good articles.

---------------------------------------

Skinned Mesh in M3G

Filed under: Programming

A skinned mesh deforms its vertices according to a skeleton: each vertex is bound to one or more bones via the addTransform function. The Specification gives the formula:

Denote the set of nodes (bones) associated with a vertex by { N1, N2, …, NN }.
Denote by Mi the transformation from the local coordinate system of node Ni to a reference coordinate system.
The choice of the reference coordinate system is not critical; depending on the implementation,
good choices may include the world coordinate system, the coordinate system of the SkinnedMesh node,
or the coordinate system of the current camera. Finally, let us denote the weight associated with node Ni as Wi.
The blended position of a vertex in the reference coordinate system is then:

v' = sum[ wi Mi Bi v ]

where:

  • 0 <= i < N, where N is the number of bones associated with vertex v;
  • v is the original vertex position in the VertexBuffer;
  • Bi is the "at rest" transformation, i.e. the initial transformation matrix from the SkinnedMesh to the vertex's bone, which can be obtained with Node.getTransformTo(bone, transform);
  • Mi is the transformation matrix from the vertex's bone to the reference coordinate system; here we take the coordinate system of the SkinnedMesh, so the result is a standard Mesh that is convenient to render;
  • wi is the normalized weight of bone Ni, computed as wi = Wi / (W1 + … + WN).

Bi can be obtained at addTransform time, while Mi has to be fetched at render time via bone.getTransformTo(skinnedMesh, Mi).
Note that the corresponding normal transformation is a bit different: instead of applying Mi * Bi directly, the corresponding inverse transpose matrix is used:

n' = sum[ wi ((Mi * Bi)^-1)^T n ]
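
As an illustration, here is a minimal CPU-side sketch of the position blend above, using the Transform class of javax.microedition.m3g. The bone array, the at-rest matrices Bi (captured when addTransform was called) and the per-vertex weights are assumed to be tracked by the caller; blendVertex and its parameter names are made up for this example, and normals would additionally need the inverse-transpose step.

import javax.microedition.m3g.Node;
import javax.microedition.m3g.SkinnedMesh;
import javax.microedition.m3g.Transform;

float[] blendVertex(float[] v, Node[] bones, Transform[] atRest,
                    float[] weights, SkinnedMesh mesh)
{
    float[] out = new float[4];
    float weightSum = 0;
    for (int i = 0; i < weights.length; i++) weightSum += weights[i];

    for (int i = 0; i < bones.length; i++) {
        //Mi: from the bone's coordinate system to the SkinnedMesh (reference) system
        Transform mi = new Transform();
        bones[i].getTransformTo(mesh, mi);

        //Mi * Bi
        Transform miBi = new Transform(mi);
        miBi.postMultiply(atRest[i]);

        //accumulate wi * Mi * Bi * v, with wi normalized
        float[] p = { v[0], v[1], v[2], 1 };
        miBi.transform(p);
        float wi = weights[i] / weightSum;
        out[0] += wi * p[0];
        out[1] += wi * p[1];
        out[2] += wi * p[2];
    }
    out[3] = 1;
    return out;
}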

Morphing Mesh in M3G

Filed under: Programming

The M3G Specification already explains this in detail; the mesh is simply deformed according to the weights:

Denoting the base mesh with B, the morph targets with Ti, and the weights corresponding to the morph targets with wi,
the resultant mesh R is computed as follows:

R = B + sum[ wi (Ti - B) ]

With a simple rearrangement,

R = B + sum[ wi (Ti - B) ]
  = B + sum( wi * Ti ) - sum( wi * B )
  = [1 - sum( wi )] * B + sum( wi * Ti )

In this form it is easy to understand, and it is also straightforward to code from the expression above.
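
As a minimal sketch of the rearranged formula, blending raw vertex arrays directly (morph, base, targets and result are hypothetical names; the base mesh and all morph targets are assumed to have the same length):

void morph(float[] base, float[][] targets, float[] weights, float[] result)
{
    float weightSum = 0;
    for (int i = 0; i < weights.length; i++) weightSum += weights[i];

    for (int j = 0; j < base.length; j++) {
        //[1 - sum(wi)] * B
        float r = (1 - weightSum) * base[j];
        //+ sum(wi * Ti)
        for (int i = 0; i < targets.length; i++) {
            r += weights[i] * targets[i][j];
        }
        result[j] = r;
    }
}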

==============

In KEmulator, the morphed mesh is recomputed from the formula above before every render, which takes noticeable time when the vertex count is large.
If a phone renders Morphing Mesh slowly, consider wrapping your own morphing implementation:
recompute the mesh only when it actually changes, so that rendering itself needs no extra computation.
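
A rough sketch of that caching idea, assuming a user-defined wrapper class (MyMorph, setWeights and rebuildVertices are all made-up names):

import javax.microedition.m3g.Graphics3D;

class MyMorph {
    private float[] weights;
    private boolean dirty = true;

    void setWeights(float[] w) {
        weights = w;
        dirty = true;          //mark the cached mesh as stale
    }

    void render(Graphics3D g3d) {
        if (dirty) {
            rebuildVertices(); //recompute R = B + sum[wi (Ti - B)] only once
            dirty = false;
        }
        //... render the cached mesh, no per-frame morphing cost
    }

    private void rebuildVertices() {
        //apply the formula above to the cached vertex arrays
    }
}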

April 12, 2007

Pos3D to Pos2D in M3G

Filed under: Programming

To 大河马~~

Camera m_camera;          //current camera
Transform m_camTransform; //current camera transform
Transform m_objTransform; //transform of the object to render

void Pos3D2Pos2D(float[] pos3D, float[] pos2D)
{
    float[] pos = new float[]{ pos3D[0], pos3D[1], pos3D[2], 1 };

    //object space -> world space
    m_objTransform.transform(pos);

    //world space -> camera space (inverse of the camera transform)
    Transform invTrans = new Transform(m_camTransform);
    invTrans.invert();
    invTrans.transform(pos);

    //depth in front of the camera, used for the perspective divide
    float z = -pos[2];
    float x = 0;
    float y = 0;

    //camera space -> clip space
    Transform transProjection = new Transform();
    m_camera.getProjection(transProjection);
    transProjection.transform(pos);

    //perspective divide (w' == -z) and scale from NDC to the viewport
    x = pos[0] * getWidth() / (2 * z);
    y = pos[1] * getHeight() / (2 * z);

    //convert to screen position (origin at top-left, y pointing down)
    pos2D[0] = (int)(getWidth() / 2 + x);
    pos2D[1] = (int)(getHeight() / 2 - y);
}

January 31, 2007

Camera In JSR184

Filed under: Programming

1. The Camera class
— Lyo Wu

Having finished the M3G part of my emulator, I have long meant to write up some notes before I forget them.
Let's start with the simplest class: Camera.

The Camera class encapsulates three kinds of projection: Generic, Parallel and Perspective.
1) Generic
public void setGeneric(Transform transform)
Directly specifies a transformation matrix (the Transform class is essentially just a matrix wrapper), so any projection matrix can be set as needed.

2) Parallel
public void setParallel(float fovy, float aspectRatio, float near, float far)
Parallel projection (orthographic projection): the z axis has no effect on size, so objects keep their dimensions after projection. The M3G Specification gives the corresponding matrix (in NDC coordinates):
| 2/w 0 0 0 |
| 0 2/h 0 0 |
| 0 0 -2/d -(near+far)/d |
| 0 0 0 1 |
where fovy is the height of the view volume in camera coordinates, and
h = height (= fovy)
w = aspectRatio * h
d = far - near

Derivation:

Let (x, y, z, w) be the eye-space coordinates, (x', y', z', w') the projected coordinates, and (Px, Py, Pz, Pw) the NDC.
Let the near plane distance be n, the far plane distance f, and the view volume width w and height h.
x' = x
y' = y
Mapping to NDC ([-1, 1]),
Px = 2 * x' / w = 2 * x / w
Py = 2 * y' / h = 2 * y / h
The projected z is not used for drawing, only for visibility tests, so it is enough that Pz is linear in z, i.e. Pz = a*z + b.
Substituting (-n, -1) and (-f, 1) gives
a = -2 / (f - n)
b = -(n + f) / (f - n)
Pz = -2 / (f - n) * z - (n + f) / (f - n)
which is equivalent to
| Px |   | 2/w   0     0          0             |   | x |
| Py | = | 0     2/h   0          0             | * | y |
| Pz |   | 0     0     -2/(f-n)   -(n+f)/(f-n)  |   | z |
| Pw |   | 0     0     0          1             |   | 1 |
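
For reference, a small sketch that builds this parallel projection matrix by hand with Transform.set(); it should match what Camera.setParallel(fovy, aspectRatio, near, far) produces (the helper name parallelProjection is made up):

import javax.microedition.m3g.Transform;

Transform parallelProjection(float fovy, float aspectRatio, float near, float far)
{
    float h = fovy;            //height of the view volume (= fovy)
    float w = aspectRatio * h;
    float d = far - near;

    float[] m = {              //row-major 4x4 matrix
        2 / w, 0,     0,      0,
        0,     2 / h, 0,      0,
        0,     0,     -2 / d, -(near + far) / d,
        0,     0,     0,      1
    };
    Transform t = new Transform();
    t.set(m);
    return t;
}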

3) Perspective
public void setPerspective(float fovy, float aspectRatio, float near, float far)
Perspective projection: objects close to the viewpoint appear large and distant objects appear small, which matches the way we see the world.
The M3G Specification gives the corresponding matrix (in NDC coordinates):
| 1/w 0 0 0 |
| 0 1/h 0 0 |
| 0 0 -(near+far)/d -2*near*far/d |
| 0 0 -1 0 |
where fovy is the field of view in the vertical direction, in degrees, and
h = tan ( fovy/2 )
w = aspectRatio * h
d = far - near

Derivation:

Let (x, y, z, w) be the eye-space coordinates, (x', y', z', w') the projected coordinates, and (Px, Py, Pz, Pw) the NDC.
Let the near plane distance be n, the far plane distance f, and the near plane width W and height H.
By similar triangles,
x' = -n * x / z
y' = -n * y / z
Mapping to NDC ([-1, 1]) gives
Px = x' / (W/2) = -2*n*x / (z*W)
Py = y' / (H/2) = -2*n*y / (z*H)
Since x' and y' are linear in 1/z, it is enough that Pz is linear in 1/z as well, i.e. Pz = a/z + b.
Substituting (-n, -1) and (-f, 1) gives
a = 2*n*f / (f - n)
b = (f + n) / (f - n)
Pz = 2*n*f / ((f - n) * z) + (f + n) / (f - n)
Since the result is finally divided by the w component, the common factor can be moved into w:
-z*Px = 2*n*x / W
-z*Py = 2*n*y / H
-z*Pz = -2*n*f / (f - n) - (f + n) / (f - n) * z
w = -z
which is equivalent to
| Px |   | 2*n/W   0       0              0             |   | x |
| Py | = | 0       2*n/H   0              0             | * | y |
| Pz |   | 0       0       -(f+n)/(f-n)   -2*n*f/(f-n)  |   | z |
| Pw |   | 0       0       -1             0             |   | 1 |

Introducing the field-of-view angle fovy:
h = tan(fovy/2) = (H/2) / n = H / (2*n)
w = aspectRatio * h = W / (2*n)
d = f - n
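
Similarly, a small sketch that builds the perspective projection matrix by hand, reusing javax.microedition.m3g.Transform as above; it should match what Camera.setPerspective(fovy, aspectRatio, near, far) produces (the helper name perspectiveProjection is made up; fovy is in degrees):

Transform perspectiveProjection(float fovy, float aspectRatio, float near, float far)
{
    float h = (float) Math.tan(Math.toRadians(fovy) / 2);
    float w = aspectRatio * h;
    float d = far - near;

    float[] m = {              //row-major 4x4 matrix
        1 / w, 0,     0,                 0,
        0,     1 / h, 0,                 0,
        0,     0,     -(near + far) / d, -2 * near * far / d,
        0,     0,     -1,                0
    };
    Transform t = new Transform();
    t.set(m);
    return t;
}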

=============

December 9, 2006

M3G API beta

Filed under: Programming

After months of coding… the M3G APIs have almost been implemented in my KEmulator.

Next, I will make a general review.

(figure: m3g classes)

