Tag Archives: Unity3D

Unity3D Mesh Collider vs. Box Collider

Logic tells us that a box collider in Unity3D will be more performant than a mesh collider, simply because it is less complex. But a couple of days ago I had an impulse to test it out myself. Bear in mind this doesn't come close to covering every use-case.

I went to the Asset Store and picked out a fairly simple free armchair model. I then made two different prefabs: one with a convex mesh collider, and the other with a box collider.

[Image: the two chair colliders]

Then I made a script to spawn a 40×40 grid of one prefab type and let them fall onto two planes. That means a total of 1600 armchairs doing discrete physics updates, which will kill even the best of the PC master race.
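The spawner itself is only a few lines. This is roughly what it looked like (a sketch: the class name, field names, and spacing values here are illustrative, not the exact script I used):

```csharp
using UnityEngine;

// Spawns a gridSize x gridSize grid of a prefab above the ground and lets
// physics do the rest. Attach to an empty GameObject and drag in the chair
// prefab (with either the box collider or the convex mesh collider on it).
public class GridSpawner : MonoBehaviour
{
    public GameObject chairPrefab;   // prefab variant under test
    public int gridSize = 40;        // 40x40 = 1600 chairs
    public float spacing = 1.5f;     // distance between chairs
    public float dropHeight = 10f;   // how far they fall

    void Start()
    {
        for (int x = 0; x < gridSize; x++)
        {
            for (int z = 0; z < gridSize; z++)
            {
                var pos = new Vector3(x * spacing, dropHeight, z * spacing);
                Instantiate(chairPrefab, pos, Quaternion.identity);
            }
        }
    }
}
```

Switching the test is then just a matter of swapping which prefab is assigned. On the mesh collider version, remember to tick the Convex checkbox, since Unity requires mesh colliders on non-kinematic rigidbodies to be convex.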


On the left side we have the mesh collider, and on the right the box collider. Click the gif to go to a full-size version, which has a bar for manually scrubbing through.

As expected, the summary is that the mesh collider is dramatically slower (sometimes as much as 20x) than the box collider. And in this case the mesh collider is actually pretty simple. Something worth pointing out is that although the graphs sort of line up, their scales are totally different, so take a proper look at the numbers on them.


Now for bonus points, here is a pretty scene of a stupid number of spheres attacking the streets of New York.

Unity3D Lightmapping on Azure in the clouds


The Oxford Dictionary defines “lightmapping” as “a process which takes an absolutely ludicrous amount of time”. And if you thought lightmapping in Unity 4 took long, then you’re in for a shock when you try 5.

If you’ve not used Unity before, lightmapping is basically a way to pre-render lighting and shadow data at design-time instead of wasting processing power at run-time. In games, you often get lights (and therefore shadows) that don’t ever move, which leads to the poor CPU having to constantly work out the lighting data every frame, even though it doesn’t need to. The answer is to bake the lighting into a lightmap. That lightmap is just a big image that essentially gets overlaid on your objects and makes it seem like there is lighting in the scene, even though there isn’t.

The problem is that this baking process can take anywhere from a few minutes (for a very small scene) to hours, depending on your scene/level and PC. So I decided to test whether I could do the processing on Azure instead. To do this I used a single reference level and baked it 8 times on each VM/PC I set up. All the settings were the same, and I created a fresh instance for each one (rather than reusing an older installation). Along with the baking, I ran some benchmarks with PassMark PerformanceTest.
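As a side note, if you want to kick bakes off without clicking around the editor (handy on a remote VM), a tiny editor script can do it. This is just a sketch using the standard `UnityEditor.Lightmapping` API; the class name and menu path are my own:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only helper: starts a lightmap bake of the open scene from a menu
// item. Drop this in an Editor folder.
public static class BakeRunner
{
    [MenuItem("Tools/Bake Lightmaps")]
    public static void Bake()
    {
        // Lightmapping.Bake() blocks until the bake completes;
        // Lightmapping.BakeAsync() is the non-blocking alternative.
        if (Lightmapping.Bake())
            Debug.Log("Bake finished");
        else
            Debug.LogWarning("Bake failed or was cancelled");
    }
}
```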



I did all the tests on the following machines:

Azure A10: 8 Core Xeon E5-2670, 56 GB RAM.
Azure A11: 16 Core Xeon E5-2670, 112 GB RAM.
Azure G3: 8 Core Xeon E5-2698B v3, 112 GB RAM.
Azure G5: 32 Core Xeon E5-2698B v3, 448 GB RAM.
Surface Pro 3: i7 4650U, 8 GB RAM (256 GB storage model).
Other PC: i7 4770K, 16 GB RAM, 256 GB OCZ Vertex 4 SSD (this is my office machine).

All tests were done with Windows 8.1, Unity 5.0.1f1 and PassMark V8 build 1047.

This is what a 16 Core A11 looks like while lightmapping:

And a 32 Core G5:


Before showing some charts, let me summarize my observations (summaries go at the beginning, right?). These Azure machines can be really, really powerful, but also really, really expensive. If you’re paying for this sort of hardware, you want to use all of it, and for the particular case of lightmapping it simply isn’t worth the cost: you’ve got tons of RAM and storage that Unity never touches, plus top-grade hardware at every point in the system.
If all you want to do is bake some lights, and time isn’t a major concern, just build a fast PC out of a bunch of dodgy parts – you simply don’t need the fancy stuff that Azure provides.
What I’d really like is for Azure to add a custom config where I can specify that I want lots of CPU power, but hardly any RAM or HDD space.



Don’t read too much into these values. Although I ran every test multiple times, ensured I was getting consistent results, and did everything I could to test properly, something might have gone wrong without me realizing it.

For all of the standard benchmark scores (CPU, Memory, Disk), higher is better.

Something that interested me with the CPU tests was that the G3 was “slower” than the A10. I did re-run the test a few times to make sure.



Once again, the G3 is slightly below even the A machines here. I’m not sure why; I would have assumed that all the Azure results would roughly align (which they sort of do, apart from the G3).



For some reason, even though all these Azure machines have SSDs, they seem to be pretty heavily throttled. The G3 consistently scored WAY higher than anything else. Maybe it was just a glitch in the matrix and Azure forgot to throttle my G3 instance, or maybe they were throttling all my others too much. Either way, the slow disk speed was noticeable: it took a long time to install and unzip things, and the CPUs definitely weren’t the bottleneck.



The time difference between my PC and the G5 is roughly 15 minutes, which makes the G5 about twice as fast. 15 (or 30) minutes might not seem like a lot, but consider that this was a small reference level, and that a game could easily have 30 levels. On my machine that works out to 15 hours, and that doesn’t account for needing to bake each level many times over. And while it is baking, the PC is often not usable at all.
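To make the math above concrete, here is the back-of-envelope calculation (the times are the rough figures from this test, not precise measurements):

```python
# Rough bake-time math: ~30 min per bake on my PC, with the G5 about twice as fast.
my_pc_minutes_per_bake = 30
g5_minutes_per_bake = 15
levels = 30  # a plausible level count for a full game

my_pc_total_hours = my_pc_minutes_per_bake * levels / 60
g5_total_hours = g5_minutes_per_bake * levels / 60

print(my_pc_total_hours)  # 15.0 hours for a single pass over every level
print(g5_total_hours)     # 7.5 hours on the G5
```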



This may or may not actually be a useful chart. Either way, it interested me because it portrays the scenario of buying/renting a dedicated machine that does nothing but bake lighting all day.
Fun fact: if you were buying all this hardware, you could buy about 18 of the CPUs in my machine for the price of the ones in the G5, even though the G5 only bakes twice as fast. Obviously, this is just one very specific workload which almost certainly doesn’t fully use the G5’s power.



Finally, cost. I couldn’t include my PC or SP3 because I have no “per hour” dollar value for them.
The A10 is clearly the best value per bake, assuming time doesn’t matter.
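The “value per bake” figure is just the hourly VM rate multiplied by the bake time. A quick sketch of the idea (the rates and times below are placeholders, not the actual Azure prices or my measured times):

```python
def cost_per_bake(hourly_rate_usd, bake_minutes):
    """Dollar cost of one bake at a given per-hour VM rate."""
    return hourly_rate_usd * bake_minutes / 60

# Hypothetical comparison: a cheaper-but-slower VM vs. a pricier-but-faster one.
print(round(cost_per_bake(2.00, 40), 2))  # 1.33
print(round(cost_per_bake(8.00, 15), 2))  # 2.0
```

The cheaper machine can win on cost per bake even while losing badly on wall-clock time, which is exactly the A10-vs-G5 trade-off.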



VALA: Alpha goes live!

If you’ve been reading this blog for a while you’ll know that quite a while ago we got accepted into AppCampus (seriously, go watch the video that got us into the program) with a game about llama slaughter. We were then invited to Finland for a month-long intensive training camp called AppCademy.

Since then we’ve gone through lots of iterations, scrapped the project multiple times to start from scratch, and had team members change. The core team is now Renier Van Der Westhuizen (who has worked on awesome games like The Harvest for WP7) on art, and myself (Matt Cavanagh) on code. And we’re bringing in a few people here and there to help.

Say hello to Vicious Attack Llama Apocalypse: Alpha (VALA: Alpha in the store)!


Continue reading

Jumpstart to Windows Phone and Unity 3D

EDIT: This is now also syndicated on the Nokia Developer Wiki here.

I’m currently sitting on a 12 hour flight from Amsterdam to Johannesburg, South Africa. So writing this is a far more compelling option than watching bad quality movies on a grainy 7” screen.

The beta of Unity 3D came out yesterday (a couple days ago by the time I will be able to post this) for Windows Phone 8, and it’s awesome. My biggest problem with it at this stage is that because of how easy it is, it feels very close to cheating – but hey, time is money and why should you be wasting time fighting with technical problems when you could be using it to make your idea a reality?

For those living under a medium-sized boulder, Unity 3D is middleware to create cross-platform games quickly and easily – and it excels at that. It is made up of the main IDE (which has free and paid versions) and then an exporter for each platform (desktop is free, but all the mobile platforms require a normal or pro purchase).

Continue reading