[Repost] Unity 3D Best Practices

Original article: http://x-team.com/2014/03/unity-3d-optimisation-and-best-practices-part-1/

Part 1: Let the CPU time be spent where it really needs to be.


We’ve been developing games with Unity for a while. As most of our games are targeted at mobile devices, one of our main concerns during development is keeping our projects well structured and optimised.

There are a lot of simple tips and tricks we’ve been using which make all the difference to any project’s overall performance.

In order to demonstrate each topic’s usefulness, I’ll be presenting profiling data retrieved from a demo running on an iPad Mini.

Cache Component References

Always cache references to components you’ll need to use in your scripts.

Accessing a game object’s world position from its script is easy; all you need to do is call transform.position. The problem is that this comes with a price.

When you access transform, renderer, or any other built-in component getter inside a class, Unity internally calls GetComponent<ComponentName>(), which is slow.

I’ve created a simple demo where 800 boxes are moving around and bouncing off each other. Their movement is done by directly updating their world position in the Update() method.

I’ve added a switch which controls whether each box uses transform.position directly or a previously cached transform variable.
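For reference, a minimal sketch of the caching pattern, assuming a hypothetical BouncingBox script; the velocity field and movement logic are illustrative placeholders rather than the demo’s actual code:

```csharp
using UnityEngine;

public class BouncingBox : MonoBehaviour
{
    // Illustrative velocity, not the demo's actual movement logic.
    public Vector3 velocity = new Vector3(1f, 0f, 1f);

    // Cached reference, resolved once instead of on every access.
    private Transform cachedTransform;

    void Awake()
    {
        cachedTransform = transform; // pay the internal lookup a single time
    }

    void Update()
    {
        // The uncached variant would use transform.position here and pay
        // the GetComponent-style lookup on every frame.
        cachedTransform.position += velocity * Time.deltaTime;
    }
}
```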


Figure 1 – Caching Components

Without caching components, we get an average of ~30ms spent on scripting, whereas with caching we decrease that value to around ~23ms.

You need to add a couple of extra lines to your scripts to cache each of the components you’ll need, but as you can see from the profiling data, it’s totally worth it.

 

Cache Object References

Always cache object references when you need to access objects from the scene. GameObject.Find(…) is very tempting to use, but it is also very, very slow.
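A minimal sketch of that idea, assuming a hypothetical manager object named "Manager" with a DummyManager component; all names here are illustrative:

```csharp
using UnityEngine;

// Hypothetical stand-in for the demo's dummy manager.
public class DummyManager : MonoBehaviour
{
    public void ReportTick() { /* no-op for the example */ }
}

public class BoxReporter : MonoBehaviour
{
    private DummyManager manager; // cached once

    void Start()
    {
        // Bad (per frame): GameObject.Find("Manager") walks the whole scene hierarchy.
        // Good: do the expensive search a single time and keep the result.
        manager = GameObject.Find("Manager").GetComponent<DummyManager>();
    }

    void Update()
    {
        manager.ReportTick(); // reuse the cached reference every frame
    }
}
```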


Figure 2 – Using GameObject.Find(…) vs Cache Reference

The impact on overall performance is clear just from looking at this image. In the demo, each object communicates with a dummy manager either by using GameObject.Find(…) or through a previously cached reference.

The overall script time drops from an average of ~41ms to ~23ms just by using a cached reference.

Cache Game Objects

This is something that a lot of people ask online.

No, it’s not necessary to cache game objects; you can use gameObject directly in your script without having to worry about any impact on performance.

In this next image I switched back and forth between using a cached game object and the gameObject property itself, and as you can see, there’s no change at all in the overall script execution time.


Figure 3 – Caching Game Object

 

Memory Allocation

Always consider creating and using Object Pools instead of instantiating new objects on the fly.

This brings a lot of advantages, as it leads to less memory fragmentation and makes the Garbage Collector work less.

It’s also important to be aware that the Garbage Collector will become slower as the memory usage increases, because it has more memory to scan in order to find and free unused data.

From Unity’s official documentation, the GC on an iPhone 3 takes about 5ms for a 200KB heap, whereas with a heap of 1MB it takes about 7ms.

Also, when allocating memory from a highly fragmented heap block, you most likely won’t have the amount of contiguous memory needed, which leads Unity to allocate a new heap block. This causes a performance spike, as the GC is forced to kick in and try to free some memory.

To better exemplify Object Pool vs On the Fly Allocation, I tweaked my previous demo a bit.

Now, each time I touch the iPad’s screen, a new box is created by either allocating a new instance or getting one from an Object Pool. Each time a box collides with another box or with the screen boundaries, it is either destroyed or returned to the pool.
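A minimal object-pool sketch along those lines; the BoxPool class, its fields, and the pre-allocation size are illustrative assumptions, not the demo’s actual implementation:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class BoxPool : MonoBehaviour
{
    public GameObject boxPrefab;   // prefab to pool (assumed to be assigned in the Inspector)
    public int initialSize = 50;   // pre-allocate up front, outside of gameplay

    private readonly Stack<GameObject> pool = new Stack<GameObject>();

    void Awake()
    {
        for (int i = 0; i < initialSize; i++)
        {
            GameObject box = (GameObject)Instantiate(boxPrefab);
            box.SetActive(false);
            pool.Push(box);
        }
    }

    // Used instead of Instantiate() when the screen is touched.
    public GameObject Spawn(Vector3 position)
    {
        GameObject box = pool.Count > 0 ? pool.Pop() : (GameObject)Instantiate(boxPrefab);
        box.transform.position = position;
        box.SetActive(true);
        return box;
    }

    // Used instead of Destroy() when a box hits another box or the boundaries.
    public void Despawn(GameObject box)
    {
        box.SetActive(false);
        pool.Push(box);
    }
}
```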


Figure 4 – Without pool usage


Figure 5 – With pool usage

 

Both images clearly show the memory behaviour I was describing. With a pool, although I need to allocate a lot of memory up front to create the pool itself, I never have to do it again and the GC almost never kicks in. Without a pool, Unity constantly needs to allocate new heap blocks, and for each one the GC is activated, creating a sawtooth pattern in the allocated memory.

This is a very simple demo in which I could easily use a pool for everything, but in a real situation not everything can be pooled. We can, however, control to some extent when the GC does its job and force some cleanup by calling System.GC.Collect(), usually at a point where you know you have some CPU time to spare.
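For instance, a hedged sketch of triggering a collection at a moment you control, such as behind a loading screen (the class and method names are hypothetical):

```csharp
using UnityEngine;

public class LoadingScreen : MonoBehaviour
{
    // Hypothetical hook, called while the loading screen still covers the game.
    public void OnLoadingFinished()
    {
        // Force a collection now, while a small hitch is invisible to the player,
        // instead of letting the GC trigger in the middle of gameplay.
        System.GC.Collect();
    }
}
```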

 

Sharing Materials

Unity has a neat operation called batching. What it does is combine objects that share the same material (among other properties) and draw them together in a single draw call.

As each draw call represents an increased overhead on the CPU side, decreasing their number is a guaranteed boost to your game’s performance.

It’s particularly easy to take advantage of this feature when working on UI and on 2D games in general, where you store all your textures in a common atlas using a common material.

To demonstrate the impact on the number of draw calls, I’ve added a new feature to the existing demo where I can change in real time whether each of the bouncing boxes uses a shared material or has its own.
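A sketch of how such a toggle might look; assigning through renderer.sharedMaterial keeps every box on the same material asset (and therefore batchable), while creating a new Material per renderer breaks batching. The class and field names are illustrative:

```csharp
using UnityEngine;

public class MaterialSwitcher : MonoBehaviour
{
    // One material asset shared by every box (assumed to be assigned in the Inspector).
    public Material sharedBoxMaterial;

    // Point every renderer at the same material asset so the boxes can be batched.
    public void UseSharedMaterial(Renderer[] boxRenderers)
    {
        foreach (Renderer r in boxRenderers)
        {
            r.sharedMaterial = sharedBoxMaterial;
        }
    }

    // Give every box its own material instance, which breaks batching
    // (roughly what the demo's "own material" mode does).
    public void UseOwnMaterials(Renderer[] boxRenderers)
    {
        foreach (Renderer r in boxRenderers)
        {
            r.material = new Material(sharedBoxMaterial); // unique copy per renderer
        }
    }
}
```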


Figure 6 – Own Material (UnityEditor Stats on PC)


Figure 7 – Own Material


Figure 8 – Shared Material  (UnityEditor Stats on PC)


Figure 9 – Shared Material

As you can see, not using a shared material generates 803 draw calls (800, one for each box, plus the others for the UI and the screen boundaries) and runs at ~6fps.

By switching to a shared material, the number of draw calls drops to 3, indicating that all boxes are drawn in a single draw call, and the demo frame rate increases to ~12fps.

More to come, stay tuned

That wraps up Part 1. Each of these topics is extremely easy to implement from the start, and as you could see from the profiler data, they make all the difference. Don’t leave the optimisation process to the end.

Part 2: Physics

This second part in our series will be entirely focused on Unity’s Physics engines.

Similarly to the previous article, I’ll present simple topics which are easy to apply and will optimise your usage of the physics engine.

Let’s begin!

Layers and Collision Matrix:

All game objects, if not configured otherwise, are created on the Default layer, where (by default) everything collides with everything. This is quite inefficient.

Establish what should collide with what. For that, you should define a different Layer for each type of object.

For each new layer, a new row and column is added on the Collision Matrix. This matrix is responsible for defining interactions between layers.

By default, when adding a new layer, the Collision Matrix is set so that the new layer collides with every existing one, so it’s the developer’s responsibility to access it and set up its interactions.

By correctly setting layers and setting up your Collision Matrix, you will avoid unnecessary collisions and unnecessary testing in collision listeners.

For demonstration purposes I’ve created a simple demo where I instantiate 2000 objects (1000 red and 1000 green) inside a box container. Green objects should only interact with themselves and the container walls, and the same goes for red objects.

In one of the tests, all the instances belong to the Default layer and the interactions are handled by string-comparing the game objects’ tags in the collision listener.

In the other test, each object type is placed on its own Layer, and I configure each layer’s interactions through the collision matrix. No string testing is needed in this case, since only the right collisions occur.
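The two approaches might look roughly like this; the tag check mirrors the single-layer test, while Physics.IgnoreLayerCollision is the scripted counterpart of the Collision Matrix setup (layer indices 8 and 9 are illustrative):

```csharp
using UnityEngine;

// Single-layer approach: every contact reaches the listener and is filtered by tag.
public class BallCollisionHandler : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Tag test on every contact, including the red-vs-green ones
        // that we never wanted the physics engine to resolve in the first place.
        if (collision.gameObject.CompareTag(gameObject.tag))
        {
            // same-colour collision: count it / handle it
        }
    }
}

// Separate-layers approach: unwanted pairs simply never collide.
// This is the scripted equivalent of unticking the pair in the Collision Matrix.
public class LayerSetup : MonoBehaviour
{
    void Awake()
    {
        int redLayer = 8;   // illustrative layer indices
        int greenLayer = 9;
        Physics.IgnoreLayerCollision(redLayer, greenLayer, true);
    }
}
```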

Figure 1 : Collision Matrix config

 

The image below is taken from the demo itself. It has a simple manager which counts the number of collisions and automatically pauses after 5 seconds.

The amount of unnecessary extra collisions occurring when a common layer is used is quite impressive.

Figure 2 : Collisions amount over 5 seconds

For more specific data I also captured profiler data on the physics engine.

Figure 3 :  Common vs Separate Layers physics profiler data

There’s quite a difference in the amount of CPU spent on physics, as we can see from the profiler data, between using a single layer (avg ~27.7ms) and separate layers (avg ~17.6ms).

Raycasts:

Raycasting is a very useful and powerful tool available in the physics engine. It allows us to fire a ray in a certain direction with a certain length, and it will let us know if it hit something.

This, however, is an expensive operation; its performance is highly influenced by the ray’s length and the types of colliders in the scene.

Here are a couple of hints that can help with its usage.

  • This one is obvious, but use the smallest number of rays that gets the job done.

  • Don’t extend the ray’s length more than you need to. The longer the ray, the more objects need to be tested against it.

  • Don’t use Raycasts inside a FixedUpdate() function; sometimes even inside an Update() it may be overkill.

  • Beware the type of colliders you are using. Raycasting against a mesh collider is really expensive.

    • A good solution is to create children with primitive colliders and try to approximate the mesh’s shape. All the child colliders under a parent Rigidbody behave as a compound collider.

    • If in dire need of using mesh colliders, then at least make them convex.

  • Be specific about what the ray should hit, and always try to specify a layer mask in the raycast function (see the sketch after this list).

    • This is well explained in the official documentation, but what you pass to the raycast function is not the layer id but a bitmask.

    • So if you want a ray to hit an object on the layer whose id is 10, you should specify 1<<10 (bit-shifting ‘1’ to the left 10 times), not 10.

    • If you want the ray to hit everything except what is on layer 10, simply use the bitwise complement operator (~), which reverses each bit in the bitmask.
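Here is the sketch referenced above, assuming the target objects live on layer 10 (an illustrative index) and that the ray only needs a short reach:

```csharp
using UnityEngine;

public class RayShooter : MonoBehaviour
{
    // Illustrative assumption: the green boxes live on layer 10.
    private const int GreenBoxLayer = 10;

    // Called from game logic when a shot is actually needed (not every frame).
    public void Shoot()
    {
        // Bitmask with only bit 10 set: the ray tests only objects on that layer.
        // ~(1 << GreenBoxLayer) would instead hit everything *except* that layer.
        int greenMask = 1 << GreenBoxLayer;

        RaycastHit hit;
        // Keep the ray only as long as the gameplay requires; 10 units is arbitrary.
        if (Physics.Raycast(transform.position, transform.forward, out hit, 10f, greenMask))
        {
            Debug.Log("Hit a green box: " + hit.collider.name);
        }
    }
}
```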

I’ve developed a simple demo where an object shoots rays which collide only with green boxes.

 


Figure 4 : Simple Raycast demo scene

From there I manipulate the number and the length of the rays in order to get some profiler data to back up what I wrote earlier.

We can see from the graphs below the impact that the number of rays and their length can have on performance.


Figure 5 : Number of rays impact on performance


Figure 6 : Rays length impact on performance

Also for demonstration purposes, I decided to make it possible to switch from a normal primitive collider to a mesh collider.

 


Figure 7 : Mesh Colliders scene (110 verts per collider)


Figure 8 : Primitive vs Mesh colliders physics profiler data

As you can see from the profile graph, raycasting against mesh colliders makes the physics engine do a bit more work per frame.

Physics 2D vs 3D:

Choose which Physics engine is best for your project.

If you are developing a 2D game or a 2.5D game (a 3D game on a 2D plane), using the 3D Physics engine is overkill. That extra dimension adds unnecessary CPU cost to your project.

You can check the performance differences between both engines on a previous article I wrote specifically on that subject:

http://x-team.com/2013/11/unity3d-v4-3-2d-vs-3d-physics/

Rigidbody:

The Rigidbody component is essential when adding physical interactions between objects. Even when working with colliders as triggers, we need to add a Rigidbody to the game objects for the OnTrigger events to work properly.

Game objects which don’t have a Rigidbody component are considered static colliders. This is important to be aware of, because it’s extremely inefficient to move static colliders: it forces the physics engine to recalculate the physical world all over again.

Fortunately, the profiler will let you know if you are moving a static collider by adding a warning to the warning tab on the CPU Profiler.

To better demonstrate the impact of moving a static collider, I removed the Rigidbody from all the moving objects in the first demo I presented and captured new profiler data.

 


Figure 9 : Moving static colliders warning

As you can see from the figure, a total of 2000 warnings are generated, one for each moving game object. Also, the average amount of CPU spent on Physics increased from ~17.6ms to ~35.85ms, which is quite a lot.

When moving a game object, it’s imperative to add a Rigidbody to it. If you want to control its movement directly, simply mark it as kinematic in its Rigidbody properties.
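A minimal sketch of that setup; driving a kinematic Rigidbody through MovePosition from FixedUpdate() is one common way to do it (the direction and speed values are arbitrary):

```csharp
using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class MovingObstacle : MonoBehaviour
{
    public Vector3 direction = Vector3.right; // illustrative values
    public float speed = 2f;

    private Rigidbody body;

    void Awake()
    {
        body = GetComponent<Rigidbody>();
        body.isKinematic = true; // we drive the movement ourselves from script
    }

    void FixedUpdate()
    {
        // Moving through the Rigidbody keeps the physics engine informed,
        // instead of moving a static collider and forcing a world rebuild.
        body.MovePosition(body.position + direction * speed * Time.fixedDeltaTime);
    }
}
```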

Fixed Timestep:

Tweak the Fixed Timestep value in the Time Manager; this directly impacts the FixedUpdate() and Physics update rate. By changing this value you can try to reach a good compromise between accuracy and CPU time spent on Physics.
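The value lives in the Time Manager settings, but it can also be adjusted from script; a small hedged example (0.02 is Unity’s default, and the 0.04 used here is just an illustration of trading accuracy for CPU time):

```csharp
using UnityEngine;

public class PhysicsRateConfig : MonoBehaviour
{
    void Awake()
    {
        // Larger values mean fewer physics steps per second: cheaper, but less accurate.
        // 0.02f is Unity's default (50 Hz); 0.04f halves the physics workload.
        Time.fixedDeltaTime = 0.04f;
    }
}
```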

Wrap-up: 

All of the discussed topics are really easy to configure/implement, and they will surely make a difference in your projects’ performance, as almost every game you develop will use the physics engine, even if only for collision detection.

