Wednesday, December 18, 2013

Model-View-Presenter architecture for game development: it's not just for enterprise.

I'm an absolute stickler for good architecture. Nothing irritates me more than spaghetti code with deep coupling between classes, god classes, and a lack of any kind of modularity. When I begin developing an application of any type, I immediately start thinking about how it's going to be structured. Thankfully, there are tons of battle-proven patterns to choose from in the MV* family. My favorite as of late has been Model-View-Presenter (MVP).

I don't want to go too deeply into what MVP is and how it differs from other MV* patterns (such as the classic MVC), but here's a quick diagram stolen from Wikipedia.


The key takeaways are that views are dumb and easily interchangeable, presenters contain the business logic, a presenter ideally updates one view (but can update more), and application state lives in the model objects (which are just as dumb as the views). That's all you really need to know to follow the rest of this post, but please read up on MVP if you're not familiar with it, and make sure you understand how it differs from MVC.

When I first began game development, I had a hard time structuring my code. I initially couldn't grok how to apply all of the golden rules of regular GUI development to games. Recently, it started to click. You can apply an MV* pattern to games, very easily in fact, and create a clean code base that's organized, maintainable, and can be easily changed (we all know how volatile a game's design and feature set can be!). So let's talk about how an MVP pattern can be applied to a Unity code base.

I'm not going to provide any framework-specific code in this post (there's a good reason why, as you'll see later). Beyond a few rough sketches to illustrate the ideas, this is strictly theory.

Let's say we have a player prefab. Normally, you might just write a bunch of specific scripts that each do one thing and one thing only (hopefully) and attach each script to the prefab. While this does work, I find it chaotic, especially when scripts need to start talking to each other or one script needs its behavior changed slightly for one specific type of prefab. To do things the MVP way instead, we're going to attach two scripts to the prefab, called PlayerView and PlayerPresenter.

PlayerView will represent the View portion of MVP (well, duh!). PlayerView will contain zero game logic. It will strictly be responsible for handling the visual representation of the player, accepting input to pass along to the presenter, and exposing important properties that you may want to adjust in the Inspector view of the Unity editor, like health, walking speed, etc. PlayerView will listen for input from the player and pass that input along to the view's backing presenter via events, handing the presenter model objects containing the necessary data.
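As a minimal sketch of what that could look like (the field names, the ShootPressed event, and the DeductAmmo method are my own illustrative choices, not part of any framework):

```csharp
using System;
using UnityEngine;

// A dumb view: exposes tunable values in the Inspector, forwards input to the
// presenter via an event, and updates visuals. No game logic lives here.
public class PlayerView : MonoBehaviour
{
    [SerializeField] private float walkSpeed = 5f; // adjustable in the Inspector
    [SerializeField] private int health = 100;
    [SerializeField] private int ammoCount = 10;

    public float WalkSpeed { get { return walkSpeed; } }
    public int Health { get { return health; } }
    public int AmmoCount { get { return ammoCount; } }

    // The presenter subscribes to this; the view never knows who's listening.
    public event Action ShootPressed;

    private void Update()
    {
        if (Input.GetButtonDown("Fire1") && ShootPressed != null)
        {
            ShootPressed();
        }
    }

    public void DeductAmmo(int amount)
    {
        ammoCount -= amount;
        // ...update the ammo counter graphics here...
    }
}
```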

PlayerPresenter will represent, you guessed it, the Presenter portion. Now, earlier I said presenters contain the business logic, and in a lot of cases this is true; however, I'm going to throw another pattern at you (I'M GOING DESIGN PATTERN CRAZY). Instead of putting all of the game logic for the player in PlayerPresenter, we're going to make use of the command pattern, or a variation of it. PlayerPresenter will be responsible for creating the necessary model objects (based on data from PlayerView) and sending those model objects to Task objects, which handle the actual game logic.

Model objects are very dumb. They simply encapsulate data to pass around. A bunch of properties, nothing more.
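For example, a model describing a projectile to spawn might be nothing more than this (a hypothetical sketch; the property names are mine):

```csharp
using UnityEngine;

// A dumb model: nothing but the data a presenter hands to a task.
public class SpawnProjectileModel
{
    public Vector3 Position { get; set; }
    public Vector3 Direction { get; set; }
    public WeaponType Weapon { get; set; }
}

public enum WeaponType { Pistol, RocketLauncher }
```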

Task objects live to do one thing, and one thing very well. They accept model objects from the presenters, do a bunch of work (calculate a player's score or create a projectile object to spawn, for example), and, if necessary, send the results of the work back to the presenter to update the view with. This creates extremely modular, reusable game logic that can be accessed from any presenter that calls it. It also allows us to create flat class hierarchies, which is a great thing. We could let the presenters handle the game logic and perform the actual work themselves, and in some cases you may, but that game logic isn't easily shared elsewhere, and you risk either creating deep class hierarchies to share the logic or repeating code.
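Continuing the hypothetical sketch, a task that spawns a projectile could look roughly like this (the constructor-injected prefab and the boolean return value are assumptions on my part):

```csharp
using UnityEngine;

// A task does one piece of game logic and hands the result back to whoever
// called it. Any presenter (player, enemy, turret) can reuse it.
public class SpawnProjectileTask
{
    private readonly GameObject rocketPrefab;

    public SpawnProjectileTask(GameObject rocketPrefab)
    {
        this.rocketPrefab = rocketPrefab;
    }

    // Returns true if a projectile was spawned, so the presenter can react.
    public bool Execute(SpawnProjectileModel model)
    {
        if (model.Weapon != WeaponType.RocketLauncher)
        {
            return false; // other weapon types would be handled elsewhere
        }

        Object.Instantiate(rocketPrefab, model.Position,
            Quaternion.LookRotation(model.Direction));
        return true;
    }
}
```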

So let's step back and see how a real example would play out: a player pressing the shoot button to fire a rocket from their rocket launcher.
  1. PlayerView receives a shoot input signal, notifies PlayerPresenter
  2. PlayerPresenter receives notification of the input, creates a SpawnProjectileModel model object containing the current player position, direction, and weapon type (rocket launcher for this example), and sends it to the SpawnProjectileTask.
  3. SpawnProjectileTask receives the model object sent from PlayerPresenter and spawns a new rocket prefab using the data provided by the SpawnProjectileModel model object.
  4. PlayerPresenter receives notification from SpawnProjectileTask that the rocket spawned successfully and notifies PlayerView.
  5. PlayerView updates its AmmoCount property to deduct one, which updates the ammo count graphics. 
  6. Done!
This may seem like a lot of steps and indirection just to fire a rocket, but you can easily change the game's look and behavior without a ripple effect. Changing the player from a bad-ass marine to a human-hating robot requires you to change only the View class. Enemies can call the same SpawnProjectileTask as the player does, and if the game logic ever needs to change, simply update SpawnProjectileTask; both Enemy and Player pick up the change without you having to touch either.
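To make the flow concrete, here's a rough sketch of how the presenter side could be wired up, reusing the hypothetical classes sketched above (again, this is illustrative, not any particular framework's API):

```csharp
using UnityEngine;

// The presenter: reacts to view events, builds a model, delegates the work to a
// task, then tells the view what to display. Still no rendering code here.
public class PlayerPresenter : MonoBehaviour
{
    [SerializeField] private PlayerView view;
    [SerializeField] private GameObject rocketPrefab;

    private SpawnProjectileTask spawnTask;

    private void Awake()
    {
        spawnTask = new SpawnProjectileTask(rocketPrefab);
        view.ShootPressed += OnShootPressed;            // step 1
    }

    private void OnShootPressed()
    {
        var model = new SpawnProjectileModel             // step 2
        {
            Position = view.transform.position,
            Direction = view.transform.right,
            Weapon = WeaponType.RocketLauncher
        };

        if (spawnTask.Execute(model))                    // steps 3 and 4
        {
            view.DeductAmmo(1);                          // step 5
        }
    }
}
```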

Now, the reason I didn't provide any actual implementation code and stuck mostly to theory is that there's a fantastic framework for Unity that does everything I've described so far (plus more): StrangeIoC. It has excellent code samples and diagrams in its documentation, and I felt it does the topic better justice. StrangeIoC is an inversion-of-control, MVP framework. It's fairly new, but I'm using it for Overtime and I don't think I could code a Unity game without it. It's continually evolving, and if anything I've talked about in this post resonates with you, I highly suggest you give StrangeIoC serious consideration. Hopefully, I've convinced you to start considering an MV*-style architecture for your next game.


Tuesday, December 17, 2013

2D Platformer Collision Detection in Unity

NOTE: Please see my addendum regarding this solution, which is ultimately flawed as presented in this post.

Unity is a 3D engine that comes with built-in physics engines (PhysX for 3D, Box2D for 2D). However, if you're aiming to develop a 2D platformer, you'll quickly find that it's extremely difficult, I'll go as far as to say impossible, to achieve that "platformer feel" using these physics engines. For your main entities, you're going to have to roll a variation of your own.

Furthermore, if you attempt to use the supplied character controller package for your player in a 2D platformer, you'll also quickly discover that the collision detection and overall controls just don't feel right, no matter how much you tweak them. This is primarily because the character controller package uses a capsule collider, which makes pixel-perfect collision detection on edged surfaces problematic. So once again, you need to roll your own controller and collision detection system.

Since Unity is a 3D engine, your game is developed in 3D space regardless of the type of game you're making (3D or 2D). You could probably adapt some canonical tile-based solutions for collision detection (for example, checking ahead at the tile the player is heading into and determining the appropriate collision response, if any), but it's best not to wrestle against the engine. The best solution I've found is to use ray casting.

Ray casting can refer to different things (see Wikipedia), but the ray casting I'm referring to is the method of casting rays from an origin in a given direction and determining what, if anything, intersects the ray. We can use this method to handle collision detection, casting rays from our player along both the x and y axes to learn about the environment surrounding the player and resolve any collisions.
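In Unity terms, a single ray cast boils down to something like this (the layer mask and the one-unit distance are placeholder values):

```csharp
using UnityEngine;

// Minimal example: one ray cast straight down, checking for anything on the
// chosen collision layers within one unit of the entity.
public class RaycastProbe : MonoBehaviour
{
    [SerializeField] private LayerMask collisionMask; // which layers count as "solid"

    private void FixedUpdate()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, Vector3.down, out hit, 1f, collisionMask))
        {
            Debug.Log("Ground is " + hit.distance + " units below us");
        }
    }
}
```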

The basic steps of the algorithm are as follows:
  1. Determine current player direction and movement
  2. For each axis, cast multiple outward rays
  3. For each ray cast hit, alter movement values on axis
I want to note that a lot of the following code is originally based on the fantastic platformer game tutorial found on the Unity forums. However, I've heavily modified it to fix some bugs, mainly with corner collisions, which we'll go into in depth.

Here's the entire class that performs the ray casts for collision detection.

It's important to note that this class is called from a separate entity controller (BasicEntityController) that handles calculating acceleration and creating the initial movement Vector3 object. BasicEntityCollision takes the movement and position Vector3 objects and adjusts them based on any collisions detected by the ray casts.

The Init method does some one-time initialization of required fields, such as setting a reference to the controlling entity's BoxCollider, setting the collision LayerMask, etc.

The Move method accepts two Vector3 objects and a float. moveAmount is, as the name implies, the amount to move before collision detection, as calculated by BasicEntityController. position is the current entity position in the game world. dirX is the current direction the entity is facing.

Move will determine the final x (deltaX) and y (deltaY) values to apply to moveAmount after all collision detection. Move starts the ray casting along the y-axis of the entity, followed by the x-axis, but only if the entity is moving left or right; we won't cast x-axis rays when the entity is idle. We then set the finalTransform based on deltaX and deltaY and return it so that the entity can finally use it to Translate!
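As a rough reconstruction of the class's overall shape from the description above (this is my sketch, not the original code, which is embedded above and linked in the Gist at the end of the post):

```csharp
using UnityEngine;

public class BasicEntityCollision
{
    private BoxCollider boxCollider;
    private LayerMask collisionMask;

    // One-time setup: cache the controlling entity's BoxCollider and the
    // LayerMask of collidable geometry.
    public void Init(BoxCollider boxCollider, LayerMask collisionMask)
    {
        this.boxCollider = boxCollider;
        this.collisionMask = collisionMask;
    }

    // moveAmount: desired movement this frame; position: current world position;
    // dirX: which way the entity is facing (-1 or 1).
    public Vector3 Move(Vector3 moveAmount, Vector3 position, float dirX)
    {
        float deltaY = yAxisCollisions(moveAmount, position);
        float deltaX = moveAmount.x;

        if (deltaX != 0f)
        {
            // Only cast x-axis rays when the entity is actually moving left or right.
            deltaX = xAxisCollisions(moveAmount, position, dirX);
        }

        Vector3 finalTransform = new Vector3(deltaX, deltaY, 0f);
        return finalTransform; // the entity controller feeds this to Translate
    }

    private float yAxisCollisions(Vector3 moveAmount, Vector3 position) { /* sketched below */ return moveAmount.y; }
    private float xAxisCollisions(Vector3 moveAmount, Vector3 position, float dirX) { /* sketched below */ return moveAmount.x; }
}
```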

Let's dive into the two key ray casting methods, yAxisCollisions and xAxisCollisions. First, note that both methods perform at least three different ray casts along the entity's BoxCollider on each axis. This gives us complete coverage of the entity.

Each line represents a ray cast.

yAxisCollisions starts by determining which direction the entity is currently heading along the y-axis (up or down), then calculates separate x and y values used to create the Ray objects to cast along the box collider (from left to right, top or bottom). yAxisCollisions runs one of two different for loops based on which way the entity is currently facing. If we are facing right, it starts the ray casts on the right side of the entity; otherwise it starts on the left side. This was done to prevent a bug where the entity fell through the collision layer when moving to the right and downward: because we break out of the for loop after the first ray hit we encounter, a gap was created when the entity collided with the corner of a tile.

When Physics.Raycast returns true, that means a ray cast has hit. We obtain the distance of the hit from the ray origin, and calculate a new deltaY to apply to the final move transform. We pad this value slightly to prevent the entity from falling through the collision layer accidentally.

We pad the deltaY value slightly to keep the entity above the collision layer, to avoid accidental fall throughs.
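Here's my reconstruction of roughly how the y-axis pass could look, continuing the BasicEntityCollision sketch from above (the three-ray spread, the skinWidth padding value, and the single loop are my simplifications of what's described here):

```csharp
// Continuing the sketch of BasicEntityCollision from above.
private const float skinWidth = 0.05f; // my stand-in for the padding value

private float yAxisCollisions(Vector3 moveAmount, Vector3 position)
{
    float deltaY = moveAmount.y;
    float dirY = Mathf.Sign(deltaY);                 // +1 moving up, -1 moving down
    float rayLength = Mathf.Abs(deltaY) + skinWidth;
    Bounds b = boxCollider.bounds;

    // Three rays spread across the collider: left edge, center, right edge.
    // (The real code also flips the iteration order based on facing direction
    // to avoid the corner-gap bug described above; omitted here for brevity.)
    for (int i = 0; i < 3; i++)
    {
        float x = b.min.x + i * (b.size.x / 2f);
        float y = dirY > 0f ? b.max.y : b.min.y;     // cast from the top or bottom face
        Vector3 origin = new Vector3(x, y, position.z);

        RaycastHit hit;
        if (Physics.Raycast(origin, Vector3.up * dirY, out hit, rayLength, collisionMask))
        {
            // Stop just short of the hit so the entity never sinks into the collision layer.
            deltaY = dirY * (hit.distance - skinWidth);
            break;
        }
    }

    return deltaY;
}
```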

With our deltaY calculated, we move on to the x-axis (again, only if the entity is actually moving along the x-axis). xAxisCollisions is very similar to yAxisCollisions, but simpler. We don't worry about which direction the entity is facing; instead, we care whether the entity is moving on the ground or is currently in mid-air. If we're moving in mid-air, there's a high risk of landing on the corner of a tile, which we could fall through. To help prevent that, we cast a wider range of rays along the x-axis (four instead of three), with the outer two rays cast slightly outside of the box collider's width. When a hit is detected on the x-axis, we simply set deltaX to 0 and return it.

When moving through the air, we cast a wider range of rays slightly outside of the entity's boxCollider width.
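And a similar reconstruction of the x-axis pass (again a sketch; the exact placement of the extra airborne rays is my guess from the description above):

```csharp
// Continuing the sketch; 'grounded' is assumed to be tracked elsewhere in the class.
private bool grounded;

private float xAxisCollisions(Vector3 moveAmount, Vector3 position, float dirX)
{
    float deltaX = moveAmount.x;
    float rayLength = Mathf.Abs(deltaX) + skinWidth;
    Bounds b = boxCollider.bounds;

    // Grounded: three rays along the collider. Airborne: four rays, with the
    // outer two pushed slightly past the collider to catch tile corners.
    int rayCount = grounded ? 3 : 4;
    float overhang = grounded ? 0f : skinWidth;

    for (int i = 0; i < rayCount; i++)
    {
        float t = (float)i / (rayCount - 1);
        float y = Mathf.Lerp(b.min.y - overhang, b.max.y + overhang, t);
        Vector3 origin = new Vector3(dirX > 0f ? b.max.x : b.min.x, y, position.z);

        RaycastHit hit;
        if (Physics.Raycast(origin, Vector3.right * dirX, out hit, rayLength, collisionMask))
        {
            deltaX = 0f; // hit a wall or a tile corner: stop horizontal movement
            break;
        }
    }

    return deltaX;
}
```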


And that's the magic behind using ray casting to perform collision detection. The finalTransform is sent back to the entity to be used with the Translate method. Here's a small video clip showing the Debug.DrawRay calls. When a ray hits something, it's colored yellow.


For further reading on the topic of a 2D platformer controller for Unity, there's an excellent blog post on Gamasutra by Yoann Pignole that goes into great detail.

You can also see a more complete implementation of the collision detection code at the following Gist.

Friday, December 13, 2013

Overtime Developer Log 0

Overtime (working title) can best be described as an Unreal Tournament demake. For my Ludum Dare 26 entry (http://mindshaftgames.appspot.com/games/LD26/ld26.jsp), I asked myself “what if Unreal Tournament was made for the Atari 2600?”. The game, while very simple (hey, I only had 48 hours to make it!), was a lot of fun. I decided to take the game idea a bit further and ask myself “Okay, but what would Unreal Tournament look like if it was made for the Super Nintendo?”, and that’s essentially the game I’m looking to create.

Most of my game development experience is in either XNA or ImpactJS, but I decided to use Unity this time around, mainly for the multi-platform support, though I quickly learned there are tons of other benefits to using Unity too. That said, I had to learn how to create a 2D game inside this 3D engine. Luckily, there are fantastic 2D plugins to help with creating and managing sprites; I’m using 2D Toolkit. The biggest hurdle (initially) was how to handle collision detection and platformer-style physics. While Unity can handle these problems out of the box, the built-in solutions are ill-suited to achieving that true 2D platformer feel, in my opinion, so after checking out a few articles and tutorials on the subject, I rolled my own using the battle-proven raycasting method.

Early tech prototype


With that in place, I began working on a prototype of a basic deathmatch mode inside a small, single-screen arena. I unfortunately don’t have any videos of the initial playtest, but the results were very positive, resembling the intense, twitch-based experience of an Unreal Tournament deathmatch or even a Super Mario Bros. battle game. Thus, I felt it was worth fleshing out a full design and moving forward with it.

On a side note about Unity, I found it initially difficult to create an organized code base (a common complaint about Unity scripting, I’ve discovered). I scripted my prefabs in a somewhat traditional entity-based system, very similar to how ImpactJS is structured. I made some decent progress before deciding to rewrite the entire game using StrangeIoC (http://thirdmotion.github.io/strangeioc/), an MVP-like inversion-of-control framework. Ignoring the benefits of inversion of control alone, it helps enforce excellent code structure, separation of concerns, and event handling, all while preventing deep class hierarchies. If you’re a Unity developer, I would check it out.

This is the game in its current form. 
Gameplay footage


Some (boring) screen shots



To summarize where things stand: I only have one playtest map set up, but there are a few weapons available, split-screen camera support, double jumping, energy (ammo) packs, level triggers (one of which causes an earthquake), and first blood & double kill checks. There are more weapons I need to implement, as well as the expected game modes of CTF and team deathmatch, but I also have other, unique game modes in mind. For assets, I’m currently using open-source sprites I find, and I’m using bfxr to generate all sound effects. Once I’m further along in development, into an alpha stage, I’m hoping I can capture the interest of actual sprite artists and composers.

Moving forward with development, the two big questions I have to consider are: do I include network multiplayer, and do I include bots? These are two areas I have no experience in, and I’m risking blowing the game out of scope. Regarding networking, from what I’ve gathered so far on the topic, I should decide whether to include it now, rather than try to shoehorn it in later (many things need to change, from instantiating objects to communicating player input). Since I’ve already made fair progress on the game and have gone through one total rewrite, I know I’d have to rewrite a majority of it to get networking involved, since I foolishly waited too long to consider it. Thus, I’m leaning towards not including networking. Yes, this does come off as “lazy” on my part, I admit. In my defense, multiplayer indie games generally don’t have a large enough pool of players after launch to sustain a multiplayer community anyway (see Gun Monkey).

Bots are something I do want to include, though it’s a huge engineering effort. I essentially need to create Quake 3 Arena-level AI (you can read Jean-Paul van Waveren’s thesis paper on Quake 3’s bots: http://fd.fabiensanglard.net/quake3/The-Quake-III-Arena-Bot.pdf) to make it work. It’ll be a lot of work, tons of trial by fire, but I’d love to be able to provide a single-player experience as well, since I’ve personally spent the majority of my Unreal Tournament time playing against bots! Outside of finite state machines and very basic behavior trees, AI is a new frontier for me, but I’m eager to face it. The biggest problem I see ahead is getting bots to navigate their environments.

So that’s my current progress right now. I’ll probably be tackling bots next before continuing to implement all of the planned weapons and game modes, because I hate huge problems nagging at me and would love to solve it right away. Plus, it’s becoming increasingly difficult to playtest a multiplayer only game! I’m only working on this (barely) part-time, so progress will be slow, but hopefully at a steady pace.

Friday, October 18, 2013

* R E D T H R E A D H I J A C K *

One of the most influential books I've ever read as a developer is Masters of Doom. It's an inspiring story for any game developer or entrepreneur.

Jeff Atwood just made an equally inspiring post about id Software's story, You Don't Need a Million Dollars. He's right. When I was a filmmaker so many years ago, the same message was drilled into me. There are no barriers anymore for aspiring creative minds to create something great. Sure, you may have that full-time job, but there's always time afterwards to chase the dream, and the resources available now are unprecedented. I want to say overly so, to the point of losing the whole "art through adversity" angle, but I've been told that's too cliche to say...

Take the following excerpt from Masters of Doom:

Carmack turned red. “If you ever ask me to patent anything,” he snapped, “I’ll quit.” Al assumed Carmack was trying to protect his own financial interests, but in reality he had struck what was growing into an increasingly raw nerve for the young, idealistic programmer. It was one of the few things that could truly make him angry. It was ingrained in his bones since his first reading of the Hacker Ethic. All of science and technology and culture and learning and academics is built upon using the work that others have done before, Carmack thought. But to take a patenting approach and say it’s like, well, this idea is my idea, you cannot extend this idea in any way, because I own this idea—it just seems so fundamentally wrong. Patents were jeopardizing the very thing that was central to his life: writing code to solve problems. If the world became a place in which he couldn’t solve a problem without infringing on someone’s patents, he would be very unhappy living there.

I've always felt the exact same about patents, but we have to consider the world we live in today. Patent trolls are real, and if you don't protect yourself and your business by filing patents, you may find yourself waking up from your dream very, very quickly. Understand why Carmack said this in the 90s, and understand why anyone should file a patent today.

Oh wow, this thing still works??

It's been a while, seven months' worth of a while. A lot has happened during those seven months. I did some game jams (http://mindshaftgames.appspot.com/games/LD26/ld26.jsp), switched technologies, and started developing a game that will truly be my first real, full release.

After Ludum Dare 26, I began to really think about my next game, which I wanted to be my first real release: a full, complete, polished game that people would (hopefully) pay money for. This required me to really start growing my game design and project planning skills. I had an initial idea of doing an elaborate Metroidvania game in ImpactJS (can you immediately see the impending failure?). I created a bunch of design docs and even prototyped something in ImpactJS, but ultimately killed it. It was too ambitious, and to be honest, there was another genre of game that I simply wanted to play much more. We'll get to that in a minute, but let's talk about ImpactJS more.

ImpactJS is a great, fantastic game engine. As new as HTML5/JavaScript game development might be, ImpactJS showed me that it's a very viable solution for developing a tile-based 2D game. Not only can it be deployed to the web with no plugin (sweet!), but also to mobile, and even as a desktop .exe through Node-Webkit, which I got up and running and was very impressed with; the performance is incredible.

All that said, I ultimately decided to switch (back?) to Unity. There are a few reasons why I did, the main one being the JavaScript language itself. I greatly enjoy first-class functions (I truly feel functional programming is on the rise due to its implications for concurrency, but that's another blog post in itself), but if you give me the choice between a dynamic and a static language, I'm taking the static language. And for the scale of the project I had envisioned, doing it in a dynamic language seemed crazy. Doable, absolutely, but crazy.

Also, Unity handles a few things better, primarily networking, split-screen cameras, and gamepad controllers. ImpactJS can technically handle gamepads through the Gamepad API (which I did get implemented and working in the prototype), and networking through WebSockets, but right now all of that is a lot of trouble and kinda hacky. The lack of split-screen camera support is a deal breaker, and while I never actually tried to implement it, from what I gather from the ImpactJS documentation and source code, it doesn't seem possible.

And to be completely honest, the other reason for switching to Unity was simply that I had tried doing a 2D platformer in Unity in the past, but ultimately failed to figure out collision detection properly and couldn't grasp how to do things "The Unity Way", as I've coined it (just look a few posts back). That nagged the shit out of me. I had to go back and figure it out. It was a defeat I couldn't leave alone. I ultimately did begin to grasp the concept of using raycasts to detect collisions, and started to understand the component-entity system that Unity enforces. It must sound crazy that this was a driver for me dropping ImpactJS, but there was a problem I couldn't initially solve, and it bugged me to no end.

Also, with my original Metroidvania idea, I was aiming at not only desktop but mobile, and ImpactJS didn't provide ideal performance on Android through CocoonJS. These are my own individual findings, and it could be due to my shitty code, but I've experienced better performance in Unity, so that was another deciding factor. All that said, I've ultimately decided to drop porting to mobile with my new game design.

So what exactly have I been developing the past several months?

 *drumroll*

I don't have a title yet. *fart sound*. However, imagine if Unreal Tournament was developed for the SNES. That's essentially the game I'm out to make: a four-player arena platformer. This genre has seen a recent influx with games like TowerFall, Gun Monkey, The Showdown Effect and Atomic Ninjas. I've only played Gun Monkey and The Showdown Effect so far, and watched many videos of TowerFall (I will buy and play the PC release when it comes out!), but they seem to offer a much different experience than what I'm aiming to provide. I want to recreate that intense, 90s/early-00s competitive FPS experience, but in 2D platformer form. I've done an initial playtest with some close friends recently, and I seem to be on the right track! There was tons of shouting, screaming, and intense action.

I want to get into further technical details of what I'm doing in Unity, as well as some creative details, but I'm going to save all that for future (near future, I promise) posts, because there's a lot to talk about. Even though this may be exactly what this blog seems like so far, I hate opinion pieces; give me facts with insights. I'll be sure to give you plenty of facts with some crazy insights. I hope you find it useful.

Monday, March 18, 2013

We have Impact

I've decided to kick Unity to the curb. After using it for a few weeks, attempting to create a very simple 2D sprite-based platformer using 2D Toolkit, I've come to the conclusion that it's not the best toolset to use for that type of game. There's simply too much wrestling with the Unity engine involved to complete even the simplest of tasks, for example tile-based collision detection. Perhaps it's my inexperience, but when the toolset I'm using is hindering my productivity and progress, rather than boosting it, I tend to reevaluate the situation.

So the hunt for a new game engine/framework/toolset/etc began! There were a few criteria that I needed to have satisfied.
  1. Deployable to the web (the option of mobile is nice to have as well)
  2. Good documentation and community support
  3. Remove the need to write the majority of boilerplate 2D game engine code, yet allow me to extend it as well
I had my eye on ImpactJS for a while, and after determining what my needs were, it seemed to be the best choice (even with the license fee). And so far, it absolutely has been. You can see what I've managed to cobble together in only about two weekends' worth of time. I'm going to reserve a full "review" of ImpactJS for another post, but I'm extremely happy with it. Worth noting, I also get to use my favorite IDE, IntelliJ IDEA! Nice :)

Oh! And I've decided to join One Game a Month. My March entry will be a simple Lode Runner-like (linked above). Obviously, it's very much a work-in-progress as of this posting, but I fully expect to have it finished, and somewhat polished, for the March deadline. 

Tuesday, February 26, 2013

Get to the script

Related to my last blog post regarding accessing setter methods of a prefab script: originally, I was accessing a script attached to an enemy bullet prefab through a class field of type Transform. To get access to the script, I needed to call GetComponent on the reference returned by Instantiate. See the Gist below.
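The original approach looked roughly like this (a reconstruction rather than the actual Gist; SetDirection is a stand-in for whatever setter the bullet script exposes):

```csharp
using UnityEngine;

// The bullet script whose setter we need to call.
public class EnemyBullet : MonoBehaviour
{
    public void SetDirection(Vector3 direction) { /* ... */ }
}

public class Enemy : MonoBehaviour
{
    // Field typed as Transform, so Instantiate only hands us a transform back...
    public Transform bulletPrefab;

    private void fireBullet()
    {
        Transform bullet = (Transform)Instantiate(bulletPrefab, transform.position, Quaternion.identity);
        // ...which means a GetComponent call on every single shot.
        bullet.GetComponent<EnemyBullet>().SetDirection(Vector3.left);
    }
}
```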



From what I've been told by internet anonymouses, GetComponent is an expensive call to make, especially given the high frequency at which fireBullet gets called. A better alternative? Change the class field type from Transform to the script class (EnemyBullet, in my case).
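Again as a sketch, the revised version looks something like this:

```csharp
using UnityEngine;

public class Enemy : MonoBehaviour
{
    // Field typed as the script class itself: you can still drag the prefab onto it
    // in the Inspector, and Instantiate hands back that component directly.
    public EnemyBullet bulletPrefab;

    private void fireBullet()
    {
        EnemyBullet bullet = (EnemyBullet)Instantiate(bulletPrefab, transform.position, Quaternion.identity);
        bullet.SetDirection(Vector3.left); // no GetComponent needed
    }
}
```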



Now you avoid the GetComponent call and get a direct reference to the script. This works in my case because the Enemy class only really needs a reference to the EnemyBullet script of that particular prefab. It doesn't use the prefab's transform or other components. If you're accessing multiple components of a particular prefab, then you can't avoid the calls to GetComponent entirely. However, figure out which component you use the most and set that as the class field type, to reduce the number of GetComponent calls you make.