Wednesday, December 18, 2013

Model-View-Presenter architecture for game development, it's not just for enterprise.

I'm an absolute stickler for good architecture. Nothing irritates me more than spaghetti code with deep coupling between classes, god classes and a lack of any kind of modularity. When I begin to develop an application of any type, I immediately start to think about how it's going to be structured. Thankfully, there are tons of battle-proven patterns to choose from in the MV* family. My favorite as of late has been Model-View-Presenter (MVP).

I don't want to go too deep into what MVP is and how it differs from other MV* patterns (such as the classic MVC), but here's a quick diagram stolen from Wikipedia.


The key takeaways are that views are dumb and easily interchangeable, presenters contain the business logic, a presenter ideally updates one view (but can update more), and application state lives in the model objects (which are just as dumb as the views). That's all you should really need to know to follow the rest of this post, but please read up on MVP if you're not familiar with it, and understand how it differs from MVC.

When I first began game development, I had a hard time structuring my code. I initially couldn't grok how to apply all of the golden rules of regular GUI development to games. Recently, it started to click. You can apply an MV* pattern to games, very easily in fact, and create a clean code base that's organized, maintainable and easily changed (we all know how volatile a game's design and feature set can be!). So let's talk about how an MVP pattern can be applied to a Unity code base.

I'm not going to provide any specific code in this post (there's a good reason why, as you'll see later). This is strictly theory.

Let's say we have a player prefab. Normally, you might write a bunch of specific scripts that each do one thing and one thing only (hopefully) and attach each of those scripts to the prefab. While this does work, I find it chaotic, especially when scripts need to start talking to each other or one script needs its behavior changed slightly for one specific type of prefab. To do things the MVP way instead, we're going to attach two scripts to the prefab, called PlayerView and PlayerPresenter.

PlayerView will represent the View portion of MVP (well, duh!). PlayerView will contain zero game logic. It will strictly be responsible for handling the visual representation of the player, accepting input to pass along to the presenter, and exposing important properties that you may want to adjust in the Inspector view of the Unity editor, like health, walking speed, etc. PlayerView will listen for input from the player and pass it along to the view's backing presenter via events, handing the presenter model objects that carry the necessary data.

PlayerPresenter will represent, can you guess it, the Presenter portion. Now, earlier I said presenters contain the business logic, and in a lot of cases this is true; however, I'm going to throw another pattern at you (I'M GOING DESIGN PATTERN CRAZY). Instead of putting all of the game logic for the player in PlayerPresenter, we're going to make use of the command pattern, or a variation of it. PlayerPresenter will be responsible for creating the necessary model objects (based on data from PlayerView) and sending those model objects to Task objects, which handle the actual game logic.

Model objects are very dumb. They simply encapsulate data to pass around. A bunch of properties, nothing more.

Task objects live to do one thing, and do that one thing very well. They accept model objects from the presenters, do a bunch of work (calculate the player's score or create a projectile object to spawn, for example) and, if necessary, send the results of that work back to the presenter to update the view with. This creates extremely modular, reusable game logic that can be called from any presenter. It also allows us to keep class hierarchies flat, which is a great thing. We could let the presenters handle the game logic and perform the actual work themselves, and in some cases you may, but then that game logic isn't easily shared elsewhere and you risk either creating deep class hierarchies to share the logic, or repeating code.

So let's step back and see how a real example would play out. Let's go through an example of a player pressing the shoot button to fire a rocket from his rocket launcher.
  1. PlayerView receives a shoot input signal, notifies PlayerPresenter
  2. PlayerPresenter receives notification of the input and creates a SpawnProjectileModel model object containing the current player position, direction and weapon type (rocket launcher for this example) to send to the SpawnProjectileTask.
  3. SpawnProjectileTask receives the model object sent from PlayerPresenter, and spawns a new rocket launcher prefab with the data provided via the SpawnProjectileModel model object. 
  4. PlayerPresenter receives notification from SpawnProjectileTask that the rocket spawned successfully and notifies PlayerView.
  5. PlayerView updates its AmmoCount property to deduct one, which updates the ammo count graphics. 
  6. Done!
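If it helps to visualize the flow above, here's a rough, purely illustrative sketch in plain C#. None of this is Unity or StrangeIoC code; the class names simply mirror the example, and the event wiring is one of several ways the view/presenter hand-off could work.

```csharp
using System;

// Model: a dumb data container, nothing more.
public class SpawnProjectileModel
{
    public float X, Y;        // current player position
    public int Direction;     // -1 = left, 1 = right
    public string WeaponType; // e.g. "RocketLauncher"
}

// Task: one piece of game logic, callable from any presenter.
public class SpawnProjectileTask
{
    // Returns true if the projectile spawned successfully.
    public bool Execute(SpawnProjectileModel model)
    {
        // ...spawn the projectile prefab here using the model's data...
        return true;
    }
}

// Presenter: builds model objects from view input and delegates to tasks.
public class PlayerPresenter
{
    private readonly SpawnProjectileTask spawnTask = new SpawnProjectileTask();

    // The view subscribes to this to know when to deduct ammo.
    public event Action ProjectileSpawned;

    // Called by PlayerView when it receives shoot input.
    public void OnShootInput(float x, float y, int direction)
    {
        var model = new SpawnProjectileModel
        {
            X = x,
            Y = y,
            Direction = direction,
            WeaponType = "RocketLauncher"
        };

        if (spawnTask.Execute(model) && ProjectileSpawned != null)
            ProjectileSpawned();
    }
}
```

The view never touches the task, and the task never touches the view; the presenter sits between them, which is what makes each side swappable.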
This may seem like a lot of steps and indirection just to fire a rocket, but it lets you easily change the game's look and behavior without a ripple effect. Changing the player from a bad-ass marine to a human-hating robot requires you to change only the View class. Enemies can call the same SpawnProjectileTask the player does, and if the game logic ever needs to change, simply update SpawnProjectileTask and both Enemy and Player pick it up without you having to touch either.

Now, the reason I didn't provide any actual code and stuck purely to theory is that there's a fantastic framework for Unity that does everything I've described so far (plus more): StrangeIoC, which has excellent code samples and diagrams in its documentation, and I felt it does the subject better justice. StrangeIoC is an inversion-of-control MVP framework. It's fairly new, but I'm using it for Overtime and I don't think I could code a Unity game without it. It's continually evolving, and if anything I've talked about in this post jibes with you, I highly suggest you give StrangeIoC serious consideration. Hopefully I've convinced you to start considering an MV* architecture for your next game.


Tuesday, December 17, 2013

2D Platformer Collision Detection in Unity

NOTE: Please see my addendum regarding this solution, which is ultimately flawed as presented in this post.

Unity is a 3D engine that comes with built-in physics engines (PhysX for 3D, Box2D for 2D). However, if you're aiming to develop a 2D platformer, you'll quickly find that it's extremely difficult, I'll go as far as to say impossible, to achieve that "platformer feel" using these physics engines. For your main entities, you're going to have to roll your own variation.

Furthermore, if you attempt to use the supplied character controller package for your player in a 2D platformer, you'll also quickly discover that the collision detection and overall controls just don't feel right, no matter how hard you tweak them. This is primarily because the character controller package uses a capsule collider, which makes pixel-perfect collision detection on edged surfaces problematic. So, once again, you need to roll your own controller and collision detection system.

Since Unity is a 3D engine, your game is developed in a 3D space regardless of the type of game you're making (3D or 2D). You could probably adapt some canonical tile-based solutions for collision detection (perform a check-ahead on the tile the player is heading into and determine the appropriate collision response, if any, for example), but it's best not to wrestle against the engine. The best solution I've found is to use ray casting.

Ray casting can refer to different things (see Wikipedia), but the ray casting I'm referring to is the method of casting rays from an origin toward a direction and determining what, if anything, intersects each ray. We can use this method to handle collision detection, casting rays from our player along both the x and y axes to learn about the environment surrounding the player and resolve any collisions.

The basic steps of the algorithm are as follows:
  1. Determine current player direction and movement
  2. For each axis, cast multiple outward rays
  3. For each ray cast hit, alter movement values on axis
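The ray spacing behind step 2 can be sketched as a tiny, framework-free helper. RayOriginsAcross is my own name and the even spacing is an assumption about the general technique, not the tutorial's exact code:

```csharp
public static class RaycastSketch
{
    // Space `count` ray origins evenly across one face of the box
    // collider, e.g. across its width when casting along the y-axis.
    // `center` and `size` describe that face along one axis.
    public static float[] RayOriginsAcross(float center, float size, int count)
    {
        var origins = new float[count];
        float start = center - size / 2f; // one edge of the collider face
        float step = size / (count - 1);  // even spacing between rays
        for (int i = 0; i < count; i++)
            origins[i] = start + step * i;
        return origins;
    }
}
```

With three rays across a width-2 collider centered at x = 0, this yields origins at -1, 0 and 1: one ray at each edge and one in the middle.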
I want to note that a lot of the following code is originally based on the fantastic platformer game tutorial found on the Unity forums. However, I've heavily modified it to fix some bugs, mainly with corner collisions, which we'll go into in depth.

Here's the entire class that performs the ray casts for collision detection.

It's important to note that this class is called inside of a separate entity controller (BasicEntityController) that handles calculating acceleration and creating the initial movement Vector3 object. BasicEntityCollision takes the movement and position Vector3 objects and adjusts them based on any possible collision detected from the ray casts.

The Init method does some one-time initialization of required fields, such as setting a reference to the controlling entity's BoxCollider, setting the collision LayerMask, etc.

The Move method accepts two Vector3 objects and a float. moveAmount is, as the name implies, the amount to move before collision detection, as calculated by BasicEntityController. position is the current entity position in the game world. dirX is the current direction the entity is facing.

Move determines the final x (deltaX) and y (deltaY) values to apply to moveAmount after all collision detection. Move starts the ray casting along the y-axis of the entity, followed by the x-axis, but only if the entity is moving left or right; we won't cast x-axis rays when the entity is idle. We then set finalTransform based on deltaX and deltaY and return it so that the entity can finally use it to Translate!

Let's dive into the two key ray casting methods, yAxisCollisions and xAxisCollisions. First, note that both methods perform at least three different ray casts along the entity's BoxCollider on each axis. This gives us complete coverage of the entity.

Each line represents a ray cast.

yAxisCollisions starts by determining which direction the entity is currently heading along the y-axis (up or down) and calculates separate x and y values used to create the Ray objects to be cast along the box collider (from left to right, on the top or bottom). yAxisCollisions uses two different for loops depending on which way the entity is currently facing: if it's facing right, the ray casts start on the right side of the entity; otherwise, they start on the left. This was done to prevent a bug where the entity fell through the collision layer when moving right and downward, caused by a gap (because we break the for loop after the first ray hit we encounter) when the entity collided with the corner of a tile.

When Physics.Raycast returns true, a ray cast has hit something. We obtain the distance of the hit from the ray origin and calculate a new deltaY to apply to the final move transform. We pad this value slightly to prevent the entity from accidentally falling through the collision layer.

We pad the deltaY value slightly to keep the entity above the collision layer, avoiding accidental fall-throughs.
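The padding calculation itself boils down to clamping the allowed movement just short of the ray's hit distance. Here's a small framework-free sketch; the helper name and the skin value are my own, not the post's exact constants:

```csharp
using System;

public static class CollisionSketch
{
    // Clamp a desired move along one axis so the entity stops `skin`
    // units short of the surface the ray hit, instead of landing
    // exactly on (or past) it.
    public static float ClampMoveByHit(float desiredMove, float hitDistance, float skin)
    {
        float allowed = Math.Max(hitDistance - skin, 0f);
        float magnitude = Math.Min(Math.Abs(desiredMove), allowed);
        return Math.Sign(desiredMove) * magnitude;
    }
}
```

For example, falling 0.5 units toward a surface 0.3 units away with a 0.005 skin clamps the move to about -0.295, leaving the entity a hair above the collision layer; an unobstructed move passes through unchanged.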

With our deltaY calculated, we move on to the x-axis (again, only if the entity is actually moving along it). xAxisCollisions is very similar to yAxisCollisions, but simpler. We don't worry about which direction the entity is facing; instead, we worry about whether the entity is moving on the ground or currently in mid-air. If it's in mid-air, there's a high risk of landing on the corner of a tile, which it could fall through. To help prevent that, we cast a larger fan of rays along the x-axis (4 instead of 3), with the outer 2 rays cast slightly outside the box collider's width. When a hit is detected on the x-axis, we simply set our deltaX to 0 and return it.

When moving through the air, we cast a wider range of rays slightly outside of the entity's boxCollider width.
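That wider airborne fan can be sketched like so. The ray counts (3 grounded, 4 airborne) match the post; the 1.2 widening factor and the vertical spread of the origins are my own assumptions:

```csharp
public static class XAxisSketch
{
    // Choose the x-axis ray fan: 3 rays within the collider while
    // grounded, 4 rays over a slightly wider span while in mid-air so
    // the outer rays fall just outside the collider and catch tile
    // corners that would otherwise slip between rays.
    public static float[] XRayOrigins(float center, float size, bool grounded)
    {
        int count = grounded ? 3 : 4;
        float span = grounded ? size : size * 1.2f; // assumed widening
        var origins = new float[count];
        float start = center - span / 2f;
        float step = span / (count - 1);
        for (int i = 0; i < count; i++)
            origins[i] = start + step * i;
        return origins;
    }
}
```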


And that's the magic behind using ray casting to perform collision detection. The finalTransform is sent to the entity to be used with the Translate method. Here's a small video clip showing the Debug.DrawRay calls. When a ray hits, it's colored yellow.


For further reading on the topic of a 2D platformer controller for Unity, there's an excellent blog post on Gamasutra by Yoann Pignole that goes into great detail.

You can also see a more complete implementation of the collision detection code at the following Gist.

Friday, December 13, 2013

Overtime Developer Log 0

Overtime (working title) can best be described as an Unreal Tournament demake. For my Ludum Dare 26 entry (http://mindshaftgames.appspot.com/games/LD26/ld26.jsp), I asked myself “what if Unreal Tournament was made for the Atari 2600?”. The game, while very simple (hey, I only had 48 hours to make it!), was a lot of fun. I decided to take the idea a bit further and ask myself “Okay, but what would Unreal Tournament look like if it was made for the Super Nintendo?”, and that’s essentially the game I’m looking to create.

Most of my game development experience is in either XNA or ImpactJS, but I decided to use Unity this time around, mainly for the multi-platform support, though I quickly learned there are tons of other benefits to using Unity too. That said, I had to learn how to create a 2D game inside this 3D engine. Luckily, there are fantastic 2D plugins to help with creating and managing sprites, of which I’m using 2D Toolkit. The biggest hurdle (initially) was how to handle collision detection and platformer-style physics. While Unity can handle these problems out of the box, its solutions are ill-suited to achieving that true 2D platformer feel, in my opinion, so after checking out a few articles and tutorials on the subject, I rolled my own using the battle-proven raycasting method.

Early tech prototype


With that in place, I began working on a prototype of a basic deathmatch mode inside a small, single-screen arena. I unfortunately don’t have any videos of the initial playtest, but the results were very positive, resembling the intense, twitch-based experience of an Unreal Tournament deathmatch or even a Super Mario Bros. battle game. Thus, I felt it was worth fleshing out a full design and moving forward with it.

On a side note about Unity, I found it initially difficult to create an organized code base (a common complaint about Unity scripting, I’ve discovered). I scripted my prefabs in a somewhat traditional entity-based system, very similar to how ImpactJS is structured. I made some decent progress before deciding to rewrite the entire game using StrangeIoC (http://thirdmotion.github.io/strangeioc/), an MVP-like inversion of control framework. Ignoring the benefits of inversion of control alone, it helps enforce excellent code structure, separation of concerns and event handling, all while preventing deep class hierarchies. If you’re a Unity developer, I would check it out.

This is the game in its current form. 
Gameplay footage


Some (boring) screen shots



To summarize what's in place: I only have one playtest map set up, but there are a few weapons available, support for split-screen cameras, double jumping, energy (ammo) packs, level triggers (one of which causes an earthquake) and first blood & double kill checks. There are more weapons I need to implement, as well as the expected game modes of CTF and team deathmatch, but I also have other unique game modes in mind. For assets, I’m currently using open-source sprites I find, and I’m using bfxr to generate all sound effects. Once I’m further along in development, into an alpha stage, I’m hoping I can capture the interest of actual sprite artists and composers.

Moving forward with development, the two big questions I have to consider are: do I include network multiplayer, and do I include bots? These are two areas I have no experience in, and I’m risking blowing the game out of scope. Regarding networking, from what I’ve gathered so far on the topic, I should decide whether to include it now, rather than try to shoehorn it in later (many things need to change, from instantiating objects to communicating player input). Since I’ve already made fair progress on the game, and already went through one total rewrite, I know I’d have to rewrite a majority of it to get networking involved, since I’ve foolishly waited too long to consider it. Thus, I’m leaning towards not including networking. Yes, this does come off as “lazy” on my part, I admit. In my defense, multiplayer indie games generally don’t have a large enough pool of players after launch to sustain a multiplayer community anyway (see Gun Monkey).

Bots are something I do want to include, though it’s a huge engineering effort. I essentially need to create Quake 3 Arena levels of AI sophistication (you can read Jean-Paul van Waveren’s thesis on Quake 3’s bots: http://fd.fabiensanglard.net/quake3/The-Quake-III-Arena-Bot.pdf) in order to make it work. It’ll be a lot of work, tons of trial by fire, but I’d love to be able to provide a singleplayer experience as well, since I’ve personally spent the majority of my Unreal Tournament time playing against bots! Outside of finite state machines and very basic behavior trees, AI is a new frontier for me, but I’m eager to face it. The biggest problem I see ahead is getting bots to navigate their environments.

So that’s my current progress. I’ll probably be tackling bots next, before continuing to implement all of the planned weapons and game modes, because I hate huge problems nagging at me and would love to solve them right away. Plus, it’s becoming increasingly difficult to playtest a multiplayer-only game! I’m only working on this (barely) part-time, so progress will be slow, but hopefully at a steady pace.

Friday, October 18, 2013

* R E D T H R E A D H I J A C K *

One of the most influential books I've ever read as a developer is Masters of Doom. It's an inspiring story for any game developer or entrepreneur.

Jeff Atwood just made an equally inspiring post about id Software's story, You Don't Need a Million Dollars. He's right. When I was a filmmaker so many years ago, the same message was drilled into me. There are no barriers anymore for aspiring creative minds to create something great. Sure, you may have that full-time job, but there's always time afterwards to chase the dream, and the resources available now are unprecedented. I want to say overly so, as if to lose the whole "art through adversity" angle, but I've been told that's too cliche to say...

Take the following excerpt from Masters of Doom:

Carmack turned red. “If you ever ask me to patent anything,” he snapped, “I’ll quit.” Al assumed Carmack was trying to protect his own financial interests, but in reality he had struck what was growing into an increasingly raw nerve for the young, idealistic programmer. It was one of the few things that could truly make him angry. It was ingrained in his bones since his first reading of the Hacker Ethic. All of science and technology and culture and learning and academics is built upon using the work that others have done before, Carmack thought. But to take a patenting approach and say it’s like, well, this idea is my idea, you cannot extend this idea in any way, because I own this idea—it just seems so fundamentally wrong. Patents were jeopardizing the very thing that was central to his life: writing code to solve problems. If the world became a place in which he couldn’t solve a problem without infringing on someone’s patents, he would be very unhappy living there.

I've always felt exactly the same way about patents, but we have to consider the world we live in today. Patent trolls are real, and if you don't protect yourself and your business by filing patents, you may find yourself waking up from your dream very, very quickly. Understand why Carmack said this in the 90s, and understand why anyone should file a patent today.

Oh wow, this thing still works??

It's been a while, seven months in fact. A lot has happened during those seven months. I did some game jams (http://mindshaftgames.appspot.com/games/LD26/ld26.jsp), switched technologies, and started developing a game that will truly be my first real, full release.

After Ludum Dare 26, I began to really think about my next game, which I wanted to be my first real release. A full, complete, polished game that people would (hopefully) pay money for. This required me to really start growing my game design and project planning skills. I had an initial idea of doing an elaborate Metroidvania game in ImpactJS (can you immediately see the impending failure?). I created a bunch of design docs and even prototyped something in ImpactJS, but ultimately killed it. It was too ambitious and, to be honest, there was another genre of game that I would simply much rather play. We'll get to that in a minute, but let's talk about ImpactJS some more.

ImpactJS is a great, fantastic game engine. As new as HTML5/JavaScript game development might be, ImpactJS showed me that it's a very viable solution for developing a tile-based 2D game. Not only can it be deployed to the web with no plugin (sweet!), but also to mobile, and even as a desktop .exe through Node Webkit, which I got up and running and was very impressed with; the performance is incredible.

All that said, I ultimately decided to switch (back?) to Unity. There are a few reasons why, the main one being the JavaScript language itself. I greatly enjoy first-class functions (I truly feel functional programming is on the rise due to its implications for concurrency, but that's another blog post in itself), but given the choice between a dynamic and a static language, I'm taking the static language. And for the scale of the project I had envisioned, doing it in a dynamic language seemed crazy. Doable, absolutely, but crazy.

Also, Unity handles a few things better, primarily networking, split-screen cameras and gamepad controllers. ImpactJS can technically handle gamepads through the Gamepad API (which I did get implemented and working in the prototype) and networking through WebSockets, but right now all of that is a lot of trouble and kind of hacky. And the lack of split-screen camera support is a deal breaker; while I never tried to actually implement it, from what I gather from the ImpactJS documentation and source code, it doesn't seem possible.

And to be completely honest, the other reason for switching to Unity was simply that I had tried doing a 2D platformer in Unity in the past, but ultimately failed to figure out collision detection properly and couldn't grasp how to do things "The Unity Way", as I've coined it (just look a few posts back). That nagged the shit out of me. I had to go back and figure it out. It was a defeat I couldn't leave alone. I ultimately did begin to grasp the concept of using raycasts to detect collisions, and started to understand the component-entity system that Unity enforces. It must sound crazy that this was a driver for dropping ImpactJS, but there was a problem I couldn't initially solve, and it bugged me to no end.

Also, with my original Metroidvania idea, I was aiming not only at desktop but at mobile, and ImpactJS didn't provide ideal performance on Android through CocoonJS. These are my own individual findings, and it could be due to my shitty code, but I've experienced better performance in Unity, so that was another deciding factor. All that said, I've ultimately decided to drop porting to mobile with my new game design.

So what exactly have I been developing the past several months?

 *drumroll*

I don't have a title yet. *fart sound*. However, imagine if Unreal Tournament was developed for the SNES. That's essentially the game I'm out to make: a 4-player arena platformer. This genre has seen a recent influx with games like TowerFall, Gun Monkey, The Showdown Effect and Atomic Ninjas. I've only played Gun Monkey and The Showdown Effect so far, and watched many videos of TowerFall (I will buy and play the PC release when it comes out!), but they seem to offer a much different experience than what I'm aiming to provide. I want to recreate that intense, 90s/early-00s competitive FPS experience, but in 2D platformer form. I did an initial playtest with some close friends recently, and I seem to be on the right track! There was tons of shouting, screaming and intense action.

I want to get into further technical details of what I'm doing in Unity, as well as some creative details, but I'm going to save all that for future (near future, I promise) posts, because there's a lot to talk about. Even though this is exactly what this blog may seem like so far, I hate opinions; give me facts with insights. I'll be sure to give you plenty of facts with some crazy insights. I hope you find it useful.

Monday, March 18, 2013

We have Impact

I've decided to kick Unity to the curb. After using it for a few weeks, attempting to create a very simple 2D sprite-based platformer using 2D Toolkit, I've come to the conclusion that it's not the best tool set for that type of game. There's simply too much wrestling with the Unity engine to complete even the simplest of tasks, for example tile-based collision detection. Perhaps it's my inexperience, but when the toolset I'm using is hindering my productivity and progress rather than boosting it, I tend to reevaluate the situation.

So the hunt for a new game engine/framework/toolset/etc began! There were a few criteria that I needed to have satisfied.
  1. Deployable to the web (option to mobile is nice to have as well)
  2. Good documentation and community support
  3. Remove the need to write the majority of the boilerplate 2D game engine code, yet allow me to extend it as well
I had my eye on ImpactJS for a while, and after determining what my needs were, it seemed to be the best choice (even with the license fee). And so far, it absolutely has been. You can see what I've managed to cobble together in only about two weekends' worth of time. I'm going to reserve a full "review" of ImpactJS for another post, but I'm extremely happy with it. Worth noting: I also get to use my favorite IDE, IntelliJ IDEA! Nice :)

Oh! And I've decided to join One Game a Month. My March entry will be a simple Lode Runner-like (linked above). Obviously, it's very much a work-in-progress as of this posting, but I fully expect to have it finished, and somewhat polished, for the March deadline. 

Tuesday, February 26, 2013

Get to the script

Related to my last blog post regarding accessing setter methods of a prefab script: originally, I was accessing the script through a class field of type Transform, which referenced an enemy bullet prefab. To get access to the script, I needed to call GetComponent on the reference returned by Instantiate. See the Gist below.



From what I've been told by internet anonymouses, GetComponent is an expensive call to make, especially given the high frequency with which fireBullet gets called. A better alternative? Change the class field's type from Transform to the script class (EnemyBullet, in my case).



Now you avoid the GetComponent call and get a direct reference to the script. This works in my case because the Enemy class only really needs a reference to the EnemyBullet script of that particular prefab. It doesn't use the prefab's transform or other components. If you have a case where you're accessing multiple components of a particular prefab, then you can't avoid the calls to GetComponent. However, figure out which component you use the most and make that the class field's type, to reduce the number of GetComponent calls you make.
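To see why the cached, typed reference pays off, here's a toy, non-Unity illustration that counts lookups. All of these Toy classes are hypothetical stand-ins, not Unity's API; in real Unity, the repeated lookup is the GetComponent call itself.

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for a Unity GameObject, purely to illustrate the idea.
public class ToyGameObject
{
    private readonly Dictionary<Type, object> components =
        new Dictionary<Type, object>();

    public int LookupCount; // how many times GetComponent has been called

    public void AddComponent<T>(T component)
    {
        components[typeof(T)] = component;
    }

    public T GetComponent<T>()
    {
        LookupCount++; // this repeated lookup is the cost we want to avoid
        return (T)components[typeof(T)];
    }
}

public class EnemyBulletToy { public float Speed; }

public class EnemyToy
{
    private readonly ToyGameObject bulletPrefab;
    private readonly EnemyBulletToy cachedBullet; // typed field: looked up once

    public EnemyToy(ToyGameObject prefab)
    {
        bulletPrefab = prefab;
        cachedBullet = prefab.GetComponent<EnemyBulletToy>(); // one lookup, ever
    }

    public void FireBullet(float speed)
    {
        cachedBullet.Speed = speed; // no lookup per shot
    }

    public int Lookups { get { return bulletPrefab.LookupCount; } }
}
```

Firing a hundred bullets still costs only the single lookup made in the constructor, which is the same effect you get in Unity by typing the field as the script class.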

The Unity Way, or....THE UNITY WAY!

When you use a complete "out-of-the-box" game engine like Unity, you're going to be forced into using it in ways that you may not like. Seems obvious, right? You're going to have to do things that may seem counter-intuitive and just deal with the design decisions made by the Unity team. You'll need to do things "The Unity Way". If you're coming from a different game development environment, regardless of what it is (C++/DirectX/SDL, XNA, etc.), there's going to be a steep learning curve when starting with Unity.

A good example of this is how you instantiate prefab objects in a scene. Instead of using the new operator to instantiate a new object (or making use of a factory method), you must use Unity's Instantiate method. Now, there is some sound reasoning for this. When you instantiate a prefab, you're actually instantiating all of the components that make up the prefab as well. The Unity documentation explains it well. You can think of the Instantiate method as a somewhat non-traditional factory method that creates and returns your complete prefab instances, so that you don't have to "new" up the code and piece the prefab together by hand.

There is a downside to this. What if you want to change a script parameter of the prefab at instantiation? Hmmmm... the Instantiate method doesn't support parameter passing, unlike a traditional constructor. Instead, after you instantiate the prefab, you have to obtain a reference to the attached script and modify any fields through setter methods.



The above Gist is from my Enemy script. My enemy prefabs travel at a random velocity. Because of this, I need to be able to adjust the speed at which the bullets they fire travel as well; I was running into scenarios where, when an enemy fired a bullet, the two might end up traveling at the same, or nearly the same, velocity. Instead of simply passing speed + BULLET_SPEED_MULTIPLIER as a constructor parameter upon instantiation, I need to do things "The Unity Way".

So I do things "The Unity Way" and I change the fields of my script through setters instead. Not the end of the world. OR IS IT?!?!?! When I went to test my BULLET_SPEED_MULTIPLIER, I noticed that the bullet speeds were unaffected. Why? Because I was setting the enemy bullet speed in the Start method of my EnemyBullet script.



If you read the Start documentation carefully, you'll find that "Start is called just before any of the Update methods is called the first time". It seems that the execution path is essentially (ignoring the several other methods called in between, I'm sure):

fireBullet -> Instantiate -> setSpeed -> Start -> Update 

The lesson of the story is to be careful about which fields you set in your Start method, because that's the value that will be used, regardless of whether you called a setter method immediately after instantiation. It could be argued that this bug would have happened even if I were using a constructor, because Start would have been called by the Unity engine later anyway. I would then counter-argue that I probably wouldn't have used Start to initialize key variables if I had a constructor ;)
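The ordering bug is easy to reproduce with a toy, non-Unity version of the bullet script. BulletScriptToy and its default value of 5 are my own illustrative names and numbers, but the sequence matches the execution path above: the setter runs first, then Start clobbers it.

```csharp
// Toy, non-Unity version of the EnemyBullet script, showing the
// ordering bug: any value passed to setSpeed before Start runs
// gets overwritten.
public class BulletScriptToy
{
    public float speed;

    public void setSpeed(float s) { speed = s; }

    // In Unity, Start runs just before the first Update, which is
    // after the instantiating code has already called setSpeed.
    public void Start() { speed = 5f; } // assumed default value
}
```

Calling setSpeed(12) and then Start() (in the order Unity would) leaves speed at the Start default, not 12, which is exactly the symptom described above.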



Saturday, February 23, 2013

Let me get a flash!

I wanted to provide the player feedback upon getting hit. A good way to do this is to "flash" the player, or alternate very quickly between various colors.



There are a few ways you can do this in a 2D game when you're using sprites. However, I'm not using sprites just yet (I'm using the basic shape models Unity provides), so I have to manipulate the applied materials of the player's cube model directly. Simply put, on the player object's material, you can interpolate between two (or more, if you're feeling bold) colors rapidly for a short duration.

The following Gist may not be the most elegant, and in fact it's very inefficient, but it's my First Stab Solution™ at the problem. After you're done laughing, be kind and call me out in the comments and show me a better way!



So what do we have? When a collision occurs with the player, we begin an InvokeRepeating of the method colorFlash. The colorFlash method will lerp, or interpolate, between the two Colors in the flashColors array. By using the Mathf.PingPong method to calculate a lerp time, we add variance to the length of time used for the lerp. The lerp parameter passed into Color.Lerp is clamped between 0 and 1: if it's 0, the first color is returned; if it's 1, the second. That's why we divide the result of Mathf.PingPong by flashLerpDuration, to receive a float value between 0 and 1, which allows Color.Lerp to return even more variance in color. 

Unfortunately (or fortunately, depending on how you look at it), when you use InvokeRepeating, it'll call the specified method until you make a call to CancelInvoke (I would like for there to be an implementation of InvokeRepeating that accepts a float value as a parameter to specify how many seconds the invoking should last). So, we have to setup a timer, flashInvokeDuration, and when that timer reaches 0, we make the call to CancelInvoke and the player will stop flashing. 
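In case the embedded Gist doesn't render, here's a rough sketch of the approach; flashColors, flashLerpDuration, flashInvokeDuration and colorFlash are names from the post, while the rest of the details are my own guesses:

```csharp
using UnityEngine;

// Hypothetical reconstruction of the flash-on-hit script.
public class PlayerFlash : MonoBehaviour
{
    public Color[] flashColors = { Color.white, Color.red };
    public float flashLerpDuration = 0.25f;   // length of one ping-pong cycle
    public float flashInvokeDuration = 1.0f;  // total flash time, in seconds
    private const float REPEAT_RATE = 0.05f;
    private float flashTimer;

    void OnCollisionEnter(Collision collision)
    {
        flashTimer = flashInvokeDuration;
        InvokeRepeating("colorFlash", 0.0f, REPEAT_RATE);
    }

    void colorFlash()
    {
        // Mathf.PingPong bounces between 0 and flashLerpDuration; dividing
        // by flashLerpDuration normalizes it to the 0..1 range Color.Lerp expects.
        float t = Mathf.PingPong(Time.time, flashLerpDuration) / flashLerpDuration;
        // 'renderer' shorthand is fine in Unity 4; newer versions need GetComponent<Renderer>().
        renderer.material.color = Color.Lerp(flashColors[0], flashColors[1], t);

        // Hand-rolled timer, since InvokeRepeating has no built-in duration.
        flashTimer -= REPEAT_RATE;
        if (flashTimer <= 0.0f)
        {
            CancelInvoke("colorFlash");
        }
    }
}
```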

Hello GitHub!

So you want to add some source control to your Unity project beyond syncing it to a Dropbox folder? Well it really couldn't be easier, and should be a no brainer for those familiar with Git.

First and foremost, you'll need to change some Editor settings in Unity.

Edit -> Project Settings -> Editor

In the Inspector window, change Version Control Mode to Visible Meta Files. Change Asset Serialization Mode to Force Text.




Then, to create a new Git repository for a Unity project, do the following from a command line. 

Note: This is all assuming you have Git installed and added to your path.

cd C:\Path\To\Your\Unity\Project\Root\Folder
git init
git add .
git commit -m "Hello Git!"

Bada bing, bada boom! That's it. I've been going the lazy (A.K.A. more productive) route and using the GitHub GUI to manage my project's local repo and pushing to GitHub. You'll want to set up a README.md and a .gitignore file as well.

My .gitignore file:

[Ll]ibrary/
[Tt]emp/
[Oo]bj/

# Autogenerated VS/MD solution and project files
*.csproj
*.unityproj
*.sln
*.mp3

One word of advice: if you have any files or assets you know you don't want included in your Git repo, make your life easier and set up the .gitignore file BEFORE creating your Git repo. Otherwise, it's a real pain to completely remove sensitive data from a Git repo and its history. The following SO post will show you how to ignore entire directories, if you so please. 
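That said, if the folder you forgot to ignore isn't sensitive (truly sensitive data needs a full history rewrite, which is what the SO post is for), untracking it after the fact is painless. A quick sketch, assuming you're in the project root and Library/ is the offender:

```shell
# Untrack Library/ without deleting it from disk (--cached keeps the files).
git rm -r --cached Library/

# Ignore it going forward, then commit the removal.
echo "[Ll]ibrary/" >> .gitignore
git add .gitignore
git commit -m "Stop tracking Library/"
```

Future changes to anything under Library/ will no longer show up in `git status`.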

Wednesday, February 20, 2013

PlayerOutOfBounds

Most likely, you're going to have a player object in your game that you'll want to keep within the boundaries of the screen. When we say "boundaries of the screen", we really mean within the viewport of the camera.

Typically, during your update cycle, you perform a check on the player object's current position, and if it's outside of those boundaries, you reset the player's position back within the bounds. The following code snippet is one way of achieving this in Unity, using some arbitrary boundary numbers that represent the world coordinates of the bounds of the camera view, which you perhaps figured out while playing and logging your player's position.


This isn't a very robust solution, however. It ignores variances in screen resolution. The world coordinate of 8.0f may happen to be the rightmost bound of the X-axis at, for argument's sake, 960x600, but not at a higher or lower resolution. We should also note that the world coordinates will vary based on where you placed your prefabs within the world. 

To keep things as resolution independent as possible, and thus as platform independent as possible, do the following instead. Note that I call obtainScreenBounds in my player object's Start method; you don't need to call it every Update cycle. 


obtainScreenBounds makes use of a key method, ScreenToWorldPoint, "which transforms position from screen space into world space". When you pass in a Vector3 of (0, 0, cameraToPlayerDistance) to this method, a Vector3 that represents the bottom left of the screen, you'll get back a Vector3 representing the same location in world coordinates. When you pass in a Vector3 of (Screen.width, Screen.height, cameraToPlayerDistance), you obtain a Vector3, in world coordinates, that's located at the upper right location of the current screen view. Thus, regardless of what the resolution is, you'll always have the correct screen boundaries to Clamp the player's position with. 
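If the Gist embed is missing, here's a sketch of what obtainScreenBounds and the clamping could look like; obtainScreenBounds and cameraToPlayerDistance come from the post, and the remaining names are my own:

```csharp
using UnityEngine;

// Hypothetical sketch of resolution-independent screen bounds.
public class PlayerBounds : MonoBehaviour
{
    private Vector3 lowerLeftBound;
    private Vector3 upperRightBound;

    void Start()
    {
        obtainScreenBounds();
    }

    void obtainScreenBounds()
    {
        float cameraToPlayerDistance =
            Mathf.Abs(Camera.main.transform.position.z - transform.position.z);

        // (0, 0) in screen space is the bottom-left corner of the screen;
        // (Screen.width, Screen.height) is the top-right corner.
        lowerLeftBound = Camera.main.ScreenToWorldPoint(
            new Vector3(0, 0, cameraToPlayerDistance));
        upperRightBound = Camera.main.ScreenToWorldPoint(
            new Vector3(Screen.width, Screen.height, cameraToPlayerDistance));
    }

    void Update()
    {
        // Clamp the player's position to the camera's view, whatever the resolution.
        Vector3 pos = transform.position;
        pos.x = Mathf.Clamp(pos.x, lowerLeftBound.x, upperRightBound.x);
        pos.y = Mathf.Clamp(pos.y, lowerLeftBound.y, upperRightBound.y);
        transform.position = pos;
    }
}
```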

Tuesday, February 19, 2013

You're alright, kid

I've (finally) been working through the excellent Walker Boys Unity tutorials. First, I want to give a huge thanks for Walker Boys Studios for creating these tutorials. I'm never a fan of video tutorials (I prefer the faster pace of reading along a tutorial), but these are very well done and nicely paced. So on that note, please support their Kickstarter project, Build a Game!

For the most part, I approached these tutorials by getting an idea of what the particular tutorial video was trying to accomplish, and just giving it a go by myself. Then, I'd watch the video in full and see how the instructor, um, instructed to achieve the same thing. 

I'm hosting my Unity games on Google App Engine over at Start Press Games. Once I tinker with the games, adding and improving upon them, to a level beyond recognition of the original intention of the tutorial, I'll be posting them on GitHub for others to view and (hopefully) learn from, but more importantly, for me to receive constructive criticisms as well so I can improve and be called out on my mistakes. 

So what are my initial impressions of Unity you say? I know you didn't say anything, you really don't care, but you're getting them anyways!

What I Like
  1. I'm writing little to no engine-specific code. Coming from primarily XNA game development, this is a nice welcome and a huge boost to productivity. As fun as writing a game engine can be, with my limited time, I'd simply rather not bother. I've yet to run into any limitations with the Unity engine itself in terms of what I want to do.
  2. True multiplatform. With a few clicks, I'm deploying to desktop, web and Android (I have no Mac machine, else I'd deploy to iOS as well!). Outside of having to implement input for touch control for Android, it's been a breeze to target multiple platforms and various resolutions.
  3. Testing new changes is stupid fast and easy. No need to wait for a full build! Just hit play, and test your changes all within the comforts of the Unity editor. This creates a very nice workflow.
  4. Asset Store. While I haven't purchased anything yet (I plan to purchase 2D Toolkit this weekend), by browsing the Asset Store, I can see that I'll be able to obtain a proven solution to a problem I (will possibly) face. Is that as bad as copying and pasting code from Stack Overflow? I wouldn't say so, but anything that boosts productivity is a good thing. 
  5. The Unity Editor itself. Unity itself is very nice and intuitive and allows for great workflow.
What I Don't Like
  1. MonoDevelop. Oh dear God, is MonoDevelop horrible. In fact, the whole scripting IDE situation of Unity is unacceptable in my opinion. Before you judge, do know that a large portion of my job is essentially writing C code in Notepad++ with only Netbeans 3 as a remote debugger (yes, Netbeans 3), so I constantly experience much worse than what Unity offers. I would use Visual Studio 2010, but I really hate that I have to pay $125 to be able to debug in VS2010, and have a decent IDE setup overall (and UnityVS is still missing a lot of necessary features for true productive coding). 
  2. Scripting Reference Documentation. It could be better overall. They do provide some decent code examples, which is nice, but the overall structure and layout needs improvement (how method signatures are conveyed, for one). A more minor nitpick, but there's no option to default the script examples to C#, over the current default of JavaScript. 
  3. Reflection to determine method implementations. Instead of allowing developers to actually override methods, reflection is used at creation to identify all of the (what should have been traditionally virtual) methods of the Unity engine that the developer implements (such as Update). This was done to optimize performance (only call the methods that are actually implemented), but makes identifying override methods more difficult. The bigger issue is that you can't use IntelliSense to implement the method, and thus need to make sure you have that trusty Scripting Reference web site open and by your side at all times to get the method signatures correct. 
  4. Public variables exposed in designer view. Now, I'm sure many people would expect this as an item on my "What I Like" list, and I understand that this feature exists so that non-coder designers can change variables with ease, and without looking at code, but since I am a coder, I instinctively change values within code. However, if a value was ever changed within the editor, the editor's value takes precedence and the change in code is silently ignored, which leads to bugs. A lazy solution here would be to never declare public variables.
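For what it's worth, there's a middle ground between the lazy solution and losing Inspector editing entirely: Unity's [SerializeField] attribute exposes a private field in the Inspector. A quick sketch (the class and field names are just for illustration):

```csharp
using UnityEngine;

public class PlayerPresenter : MonoBehaviour
{
    // Not shown in the Inspector; code is the single source of truth,
    // so changing this value here always takes effect.
    private float fireRate = 0.5f;

    // Shown in the Inspector for designers to tweak, but not part of
    // the class's public API. Be aware: the serialized Inspector value
    // still overrides the default written here.
    [SerializeField]
    private float moveSpeed = 10.0f;
}
```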
So those are my initial impressions. Since I'm only interested in 2D game development, next on my agenda is picking up 2D Toolkit and re-creating my old XNA game (Deep Space Dog, video in upper right of blog) in Unity. Once I have that done (ignoring how slow I can actually work, I don't expect it to take many hours at all), I will upload it to Start Press Games and GitHub for all to enjoy and make fun of!

Sunday, February 3, 2013

Translate that for me, will you?

To move an object, specifically the player, you have to translate that object along the X, Y and Z axes. The three basic steps in calculating the translation value are:

  1. Obtain the input value range
  2. Multiply by the desired speed
  3. Multiply by delta time (to allow the object to move based on time and not computer speed)

In Unity, there are several ways to obtain the input value range of the player. You could poll for keyboard or other input events and translate the object along the particular axis that corresponds with the key pressed. For example, in a 2D game where the origin is in the bottom left of the screen (positive Y moves up the screen, positive X moves to the right of the screen),

float playerSpeed = 10.0f;

if (Input.GetKey(KeyCode.W))
{
    transform.Translate(0, playerSpeed * Time.deltaTime, 0);
}
if (Input.GetKey(KeyCode.S))
{
    transform.Translate(0, -playerSpeed * Time.deltaTime, 0);
}

// etc...

This is a very verbose way of getting the player input and translating the player object. A better, preferred method is to use GetAxis.

float playerSpeed = 10.0f;

float xTranslation = Input.GetAxis("Horizontal") * playerSpeed * Time.deltaTime;
float yTranslation = Input.GetAxis("Vertical") * playerSpeed * Time.deltaTime;

transform.Translate(xTranslation, yTranslation, 0);


This is a much more concise method. Through Input.GetAxis, we pass in the name of a virtual axis, as set up in the Input Manager, and get back a value in the input range (-1 to 1). Horizontal and Vertical are default input settings, which make use of WASD, the arrow keys, and a gamepad's left analog stick.

If the player is holding down the 'D' key, Input.GetAxis("Horizontal") will return a value of 1. If the player is holding down the 'A' key, Input.GetAxis("Horizontal") will return a value of -1. Multiplying that by playerSpeed causes the player game object to move along the X axis in the positive direction at 10 units per second. Further multiplying by Time.deltaTime allows us to move independent of computer speed, so we get the same rate of movement regardless of how fast the player's machine may be.

Monkey In The Middle

I wrote this a few years ago when I was working on a MMO Scrabble game using Google Web Toolkit and Google App Engine. This chronicles the decision the team made for choosing a NoSQL database solution such as GAE. I thought I would add it here for safe keeping. Enjoy!

Choosing a database solution for any application is an extremely difficult, but critical, decision. You are deciding the backbone of your application: what's going to be used to capture and store your application's precious data. When viewing all of the options out there, you need to evaluate their strengths and weaknesses, and whether they play into or against the needs of your application. You need to look at your application and attempt to envision the end product. How many users will be using it? Will these users be accessing your data concurrently? Will you have a big fluctuation in the number of concurrent users, say 50,000 users between the hours of 5-8pm, but only 1,000 users between 6am-11am? How will you handle a large jump in overall users due to the application's increased popularity (hopefully!)? In terms of hard disk space, how much data do you expect to store? At what rate will this grow? How complex do you expect your schema to be? Who do you have that can be your database administrator, and what kind of technologies can they manage and administer given their current skill set? What are the costs of operating and maintaining the server that will house the database? These are just a sample of the questions that need to be answered.

Relational databases easily dominate the market. They are the popular choice, and for good reason. They have been proven to be viable solutions for a vast range of applications, from online retail stores to complex AAA MMORPGs. They have great transaction support, providing full ACID qualities. They can handle a high number of users concurrently accessing the same data. They support highly complex SQL queries and data manipulation, allowing developers, designers and business executives to query against their data to build much-needed data metrics and analysis, able to answer questions such as "on our big annual 4th of July sale, for years 2003 to the present, which item that was available during that date range was the third best selling among our New York City customer base", using these reports to further drive their business and sales tactics, or identify inefficiencies and help improve overall sales.

However, relational databases are not perfect. They have their weaknesses, specifically in the realms of performance, scalability and object-relational impedance mismatch. Yet most applications simply choose to work around these imperfections due to a simple case of "the benefits of relational databases greatly outweigh their negative impact on my application". In the case of massively-multiplayer-online-games (MMOGs), performance, scalability, completely avoiding object-relational impedance mismatch, high quality transaction support and complex SQL reporting are all needed, practically equally so. It's been said that regardless of which database solution you choose, you are allowed to pick only two items from that list, and apparently most developers take highly complex SQL reporting as number one, leaving the second choice up in the air. Historically, however, transaction support tends to be a very close second in terms of desired features.

Cryptic Studios, in the quest to come up with the perfect database solution, went as far as designing and developing its own database management system, CrypticDB. The CCP team behind EVE Online actually uses a single-shard SQL server architecture, and to combat performance issues has thrown large sums of money into a mind-bogglingly beefy server hardware configuration, A.K.A. scaling vertically (more on that later). While throwing money at the problem may be a viable solution for some, for many (start-ups, for example) that's not an option, or at least not the best one.


From an engineer's perspective, relational databases introduce the problem of object-relational impedance mismatch, making it difficult to translate the data used in an object-oriented application so that it stores nicely within the relational database's strict structural confinements and incompatible datatypes. In order to store complex data structures, we are forced to develop wrapper classes to do the translation, which, if not done perfectly, will have a severe impact on performance, possibly so severe that the end user is affected, like being forced to wait two seconds to loot a green item from a dead zombie. The issue of object-relational impedance mismatch is an extensive one, beyond the scope of this particular post.


Luckily, we aren't stuck with only relational database management systems to choose from. Leaving the realm of relational databases introduces the broad range of NoSQL databases. NoSQL databases are object-oriented in nature, and in general omit the SQL query language (thus the name, NoSQL). There are many different varieties of NoSQL databases, categorized by their data storage methods. NoSQL databases were developed primarily to solve the issue of scalability: to scale horizontally rather than vertically (CCP, if you remember, decided to scale their database vertically by beefing up their hardware configuration). Scaling horizontally means adding more nodes to a system, creating a distributed network. In a distributed environment, data integrity becomes an immediate concern, so the ultimatum of choosing "consistency vs. availability" comes into play.


NoSQL was also introduced to deal with the issue of object-relational impedance mismatch, being able to handle and store a much wider range of datatypes and structures than an RDBMS could. As an extension, NoSQL also handles large volumes of data and retrieves it quickly, virtually eliminating the need for expensive JOIN operations (which most NoSQL solutions decline to even offer). In essence, NoSQL offers either a high level of data integrity or availability, along with better compatibility and support for complex and object-oriented data.

When talking NoSQL, we can't neglect to talk about the CAP theorem, first proposed by Eric Brewer in 2000. The CAP theorem states that, in a distributed computing environment, you can optimize for only two of three priorities: Consistency, Availability and Partition Tolerance.



When considering walking down the NoSQL path, the needs of your application will dictate the two priorities you decide to go with in your NoSQL, distributed database solution. Choosing the correct combination is absolutely critical, and choosing the wrong combination could very easily have a dire impact on your business. For example, Amazon claims that just an extra one-tenth of a second in their response times would cost them 1% in sales. That should really drive home the importance of correctly assessing your application's database needs. No pressure ;)

There's simply no "best" database solution for any application. It's a constant battle of tipping the scales. It's almost guaranteed that regardless of which solution you go with, it'll have a negative impact somewhere. The trick is to limit that negative impact, or at least direct it toward an aspect that can "afford" to be hit by it, where perhaps the developers can make up for the shortcoming within the application's design and implementation.

For WordWars, we knew from the beginning we wanted a database solution that was easily scalable: able to handle 30 users one day and scale up to 10,000 the next (we could only be so lucky to see a jump in users like that!), with virtually no mediation required. As our user base grows and fluctuates, we also want to maintain the same high level of performance, while keeping in mind the fact that we have little to no money to scale vertically. We also need to rely heavily on transactions, due to the nature of gameplay and the high probability of many concurrent users playing at the same time, attempting to access the same data. We need consistency so that every user viewing a single board sees the same thing. We also need a certain level of data integrity. For example, we simply need to be able to gracefully handle the use case of two users attempting to play a word on the same board location at the same time.


Google's Big Table database solution attempts to play the middle ground. It is NoSQL based, offering the high level of speed and scalability that relational databases tend to fail to deliver, but it also provides transactions, a feature standard in relational databases yet often missing from NoSQL databases. Big Table provides full transaction support using optimistic concurrency. As you can see from the CAP theorem diagram above, Big Table provides Consistency and Partition Tolerance, which satisfies the needs of WordWars nicely. Yes, we are giving up complex SQL reporting. Big Table does offer a similar query language called GQL, but at this time it's still very simple and infant in its capabilities, which is nevertheless more than enough to satisfy what we predict to be our reporting needs.


We are also sacrificing Availability, which may appear to be a huge sacrifice given the type of application we are developing. Failed transactions will happen when using Google's datastore, for various reasons (timeouts, concurrent modification issues, etc.). It's inevitable; it's the nature of the beast. However, we are developing our application to handle these failures gracefully and greatly limit the impact of any transaction failure. It's a sacrifice we can make up for in our application design. If we instead chose Availability over Partition Tolerance, we would lose the latter property completely, with virtually no way of recovering it or making up for it in some other way, at least not easily or efficiently.


Through Google's App Engine service, we could start using Big Table immediately, having a full NoSQL database at our disposal within minutes with virtually zero effort on our part, leaving the administration duties to Google. Most importantly, we get everything completely free. We will only start getting charged once we have regular users, and furthermore, once we surpass the free quota thresholds. Since we are in the very early stages of development and prototyping, this aspect alone is enough to drive us toward Google App Engine.