Tuesday, January 29, 2013

Why I didn't use Unity

I was recently asked why I didn't go with Unity when developing games for smartphones.  The answer is a bit complicated and I'll have to rewind the clock a little, so bear with me here.

Back in mid-2011 or so, when I decided to get into Android and iOS games, I, like many other people, had to figure out what the "best" way forward was.  A lot of Googling happened.  By a lot, I mean *a lot*.

My background was firmly rooted in C and C++, and I was pretty distraught over the idea that to make games on Android, I'd "have to use Java".  That was my initial, researched impression anyway... sort of like my initial impression that "any serious and portable game needs to use Unity".

I'm going to skip the side story here on why I decided to develop for Android first rather than iOS; maybe I'll come back to this in another post.  So, I started dabbling on a "Hello World" program for Android, just to get a basic foothold with what the heck I was doing.  It didn't take long to realize that Android was an immensely complicated undertaking to make even a basic app for.  Sure, the XML-driven, business-style side of an app was relatively straightforward and easy, but implementing game stuff, using OpenGL, and just drawing something to the screen was not a trivial task.

But, you may say, drawing anything to the screen shouldn't be too hard.  And that was true, until I found out that there are way too many screen sizes, resolutions, densities, and all that other jazz to design graphics easily and retain any semblance of consistency across all of them.

I had read in a couple of places that Unity "just works", and my god, that would have been nice.  I spent an enormous amount of time piecing together tutorials on how all this worked in the Java world.

Anyways, I always like knowing how things work; that way, when something goes wrong, I know why and can ideally find a better alternative.  Plus, I was planning on 2d games at this point anyways, so something as complex as Unity was starting to lose its appeal, as it was seemingly geared for 3d games -- and possibly 3d games only... though of course you could simulate a 2d game experience.

So, while I had Unity "on hold", I kept plowing through some pretty nightmarish lessons about the internal issues with Android.  My first harsh lesson hit when I realized that calling an OpenGL function in Java to render a fairly large texture ate 60% of my phone's CPU cycles just sitting there idle.  Needless to say, when I tried to play the game itself, the frame rate was horrendous.  The background was killing the gameplay -- turning it off made my game much more playable; yet all these other games out in the world don't seem to have this problem -- was it because they were using Unity?

No?  I quickly found out that the *only* real way to make good, smooth games on Android was to use native code.  Thankfully, my C background ate this up.  After a begrudging time "porting" my Java code to native code, the game ran a ton faster, background and all!

So, would Unity have helped with all of this?  Definitely maybe.

So, I finally did crack open Unity to see what all the hubbub was about.  It was completely different.  It takes a much more, uh, designer-friendly approach to making games.  You can drag and drop files into its project file system and things just load for you -- which is a true godsend, as writing that loading code in my own engine (yes, I ended up making my own portable engine) took a long time.

I saw that to make things more game-like in Unity, I more or less have to (or at least should) use C#.  Now, I'm not a master with C#, but I have a couple of production-level projects done with it, so I could dust that skillset off and start working with it.  Right away, I was almost devastated by some of its inefficiencies, which reminded me immediately of Java on Android.

Right away, the fact that C# *forced me to instantiate new Vectors* when I wanted to manipulate things' locations made me sick.  This happens a whole heck of a lot in games, and in that code, object instantiation, even on something as simple as Vectors, will translate (haha, pun) to a lot of wasted CPU time.

Though don't get me wrong, I was very impressed with the "all in one" idea and the visual aspect of Unity.  I loved the idea of "attaching" scripts to objects, and I didn't have to worry about writing complex update functions, or render threads, or even game state objects while accounting for time deltas to make my game frame rate independent -- nay, all of this was taken care of by Unity, which *really* could speed up development time of any game made with it.  The trade-off is that I'd have to give up my freedom of design and make only whatever I could within this cookie cutter engine.

Unfortunately for game programming, part of the job *is* to do exactly those things that Unity does for us.

For example, in my own engine, I recently finished code to analyze an arbitrarily large array of pixel data.  The code "finds" the bounding boxes around all the graphics in the pixel data.  The idea is that it automatically finds all the objects so I can use them as textures in my game via a texture atlas.  Doing this in C# and Unity, though possible, would likely take much longer to process -- even using unsafe code.

Don't believe me?  Ok, that's fine, but remember that C# compiles to an Intermediate Language, similar to Java, and that adds extra processing overhead at runtime.  And if you use unsafe code in C#, you're throwing one of the main reasons for using C# out the window.  Additionally, the .NET/Mono framework, again like Java, has to make assumptions about your code, usually trading performance for safety.  While this is usually A Good Thing (tm), in game development it can be a disaster wrapped up in "your game runs like garbage and nobody knows how to fix it".  For example, not all Android devices have a floating point unit.  Yes, that's right, no hardware floats.  Meaning your game is going to run terribly.  In C/C++, I can detect this lack of an FPU, then use function pointers to perform fixed-point operations using integer math instead of floats.

In a 3d game, like many that would be made with or without Unity, this has massive implications for performance.  If you try to use ANY floats without an FPU (especially multiply or divide operations), the device will use an FPU emulator of sorts, and that is *not* something you want happening when trying to render 30+ frames a second for any sizable amount of vertex data.

While this sounds like a headache, and believe me it is, it would be difficult and maybe even impossible to avoid with C# or UnityScript (JavaScript gone Unity).

Fast forward a bit and we've released two fairly "simple" games on Android, one of which was ported to iPhone.  The development process had a lot of bumps and rocks in the road.  Unity probably would have saved me a ton of headaches, but I'm still glad I didn't use it.

I now have a portable game engine, written in C, that works on both Android and iOS, and I can just drop my game logic code into this engine, regardless of platform, and it just works on both.  For fun, I even have a Windows port of my games now, all using the same code.

Is my way easy?  Definitely not.  Is it visual?  No.  In fact, on several occasions I ended up diving through zlib and libpng's source code to track down errors (all of which were my fault of course).

If I had to do it again, would I use Unity?  No.  While the initial development costs would likely be lower and the technical bar for a programmer eased, games demand optimized code -- especially games in constrained environments.  Yes, phones and tablets are getting powerful and everyone "should" be getting new phones every year, but the reality is, that isn't the case.  In fact, even as of this writing, I believe most people on Android are still running Android 2.x.x on phone models over two years old -- of which a sizable portion still don't have FPUs!

Which brings me to another side tangent.  Let's talk about math.  Even in 2d games, a decent amount of math is going to come into play.  Even on devices with FPUs, games can involve substantial math, and a lot of it will be floating point.  I ended up implementing very "hackish" versions of various math functions to speed them up (square root comes immediately to mind) due to the frequency with which they are called.  I got an extra couple of frames per second using my hacked versions of these functions.  Mind you, in something like Unity, not only will you likely NOT be using things like this, but you'll be running into forced object instantiation issues on top of the raw math.

Next, let's talk about memory.  C and C++ have long been known for letting you do naughty things with memory, and in C# you can do some dirty things too, using unsafe code.

I can't stress enough that if you're using C#, you should be running unsafe almost always when making games.  Which sort of goes against one of the arguments for using C# in the first place.

I should probably also talk about garbage collection and managed memory.  In short, C and C++ don't have these issues -- for better or worse.  In games though, I can't stress enough that these things are not your friend.  I will have to have a whole other article on just this topic I bet.

Anyways, I finally also ended up googling for "pros and cons of unity".  There was some pretty thought provoking stuff and a lot of "eh, works for some, not for others"... and I kind of have to agree with that position.  I will also admit here that because I'm more comfortable with C and C++, that was also a big sway against Unity for my personal situation.

I'll wrap this lengthy post up with: so, is Unity for you?

Probably yes if:
* you already know Unity
* you have an extensive background in C#
* you want a streamlined design/development cycle and don't mind the tradeoffs
* you "just want to make something"
* you want a job with a company that uses Unity (for whatever reason)
* you have little programming experience
* you can afford Unity

Probably not if:
* you already make games for computers/consoles with C/C++
* you are a programmer with a lot of programming experience in general, including opengl
* you like working extremely long hours, scouring through library source code, love the idea of squeezing out an extra second of performance per update cycle
* you are comfortable with command line interfaces and your own version control
* you like managing your own memory
* you want to make any kind of game with any sort of capacity you can imagine
* you can handle your own portability issues
* you can't afford Unity

This was a quick brain dump of my experience with Unity and my own development issues for games on Android and iPhone/iPad.  Hopefully this helped some people, or at least got them to rage a little bit about something.  I'm probably not 100% accurate on everything stated above and I'm more than happy to hear out people's input where I'm wrong.

I will likely get more in-depth with a lot of things mentioned here in another post later reinforcing my position, so stay tuned if you're in to that sort of thing, but I warn you, it won't be pretty.

Tuesday, January 15, 2013

Back to Basics and Understanding Application Performance

The article I'm linking here was written back in 2001.  It is "old" in technology terms, yet it is still 100% accurate and relevant!

http://joelonsoftware.com/articles/fog0000000319.html

Java, C#, .NET, database systems like MySQL, and things like this all benefit from someone with a nice, solid understanding of how "lower level" technology works.  The linked article delves into a bit of C code of the kind all these other languages are implemented on at some level.

I will be forthcoming on this: I do not agree, even today, that universities should remove C programming from any Computer Science curriculum.  I believe it teaches a vital "low level" understanding of the fundamental concepts of how software works.  Sure, one level further down and we get to assembly and all that good stuff, and while I think that should be taught as well to some degree -- maybe as an elective -- C is probably the "best" starting point.

I also agree with Joel, the article's author, on the essence of this quote:

I am actually physically disgusted that so many computer science programs think that Java is a good introductory language, because it's "easy" and you don't get confused with all that boring string/malloc stuff but you can learn cool OOP stuff which will make your big programs ever so modular. This is a pedagogical disaster waiting to happen. Generations of graduates are descending on us and creating Shlemiel The Painter algorithms right and left and they don't even realize it, since they fundamentally have no idea that strings are, at a very deep level, difficult, even if you can't quite see that in your perl script. If you want to teach somebody something well, you have to start at the very lowest level.

Monday, January 7, 2013

C as King of the Programming Languages

Disclaimer: I've always been a fan of the C programming language.  I adore the power and flexibility it has always afforded me, the programmer.

I was a bit surprised at first to see that C ranked #1 in 2012 according to TIOBE.

http://www.i-programmer.info/news/98-languages/5298-the-top-languages-of-2012.html

Then it kind of dawned on me that C was one of the major languages involved in smartphone and tablet development, and that seems to be the hot topic of the day.  For iOS, we have Apple's flavor of C in the form of Objective-C (which takes a while to get used to for the uninitiated), which rose sharply as well, lending credence to this theory (since Obj-C really has no other use aside from Apple-specific solutions).

Java hogged the top spot for a while, but it was refreshing to see that C has gone in and out of the #1 spot since at least 1988, according to:

http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html

So, why is this so?  As a software guy, I've always been told that everyone should be using Java, C# or other "managed" languages.  Object-Oriented is *THE* way to go and that's that.  Dynamic typing, garbage collection, and other things are stuff we should want...

Yet, C somehow retains its value?  How and why?

In one of my more subjective rants, I'll say that it is because C lets the programmer be a programmer.  People don't have to learn assembly (though it never hurts, of course) and still have platform-agnostic source code, all while cranking out amazingly optimized (or disastrous) binaries.

It is not for the faint of heart.  Many trivial things need to be solved over and over again.  C constantly taught me lessons, almost always showing that any "general solution" to a problem will be slower than one meticulously crafted for whatever we're doing.  C gives me this power, for better or for worse.

Of course, C can be quite cryptic, excessive, and takes a long time to go from design to solution (usually), and in general, it requires a more skilled and competent developer to actually make a program to solve some problem.  All these things usually mean more money and time invested from a business perspective, so I can understand why the push and drive for "easier" languages has been around for a while.

Don't get me wrong, I like Java and C# as well for what they are, but I would never make a blanket statement that one language is the end-all be-all answer to everything.

For a binary distribution where performance is needed, C definitely ranks in my top recommended answers.  At the same time, if you have a program that needs to work on multiple platforms, C is well adopted across many platforms and should allow porting with few issues (though when porting issues do come up, they can get complicated).

Languages like Java tried to solve this exact portability problem with the idea that you "write once, run anywhere" (among other things).

Each language has its ups and downs given some situation.  For things like games, I can't recommend enough using C and/or C++.  These languages will *allow* a competent programmer to deploy a well crafted, albeit probably complex, solution.  Will there be problems?  Well, of course!

The *potential* for great software exists with C/C++, but the real issue here is whether a company thinks it is really worth shelling out a substantially larger amount of money for a good programmer just for a chance to make their software run xx% better/faster than some other "easier" language that probably could be deployed in a much faster timeframe.

As a programmer myself, this potential I mentioned is why C should remain king for a long time.  It packs what I want to do and what I can do into one volatile package.  The learning curve can be steep, and it will force a developer sooner or later to learn more about how CPUs and computers actually work, but that will make them better at their craft at the end of the day.

I may blog more about specifics of this later.