Monday, November 17, 2014

Yet Another Addendum: 2D Platformer Collision Detection in Unity

This is yet another addendum to my previous addendum to my original article on creating 2D platformer collision detection in Unity using raycasting. There was one last remaining bug in the system: the player would get snagged on a corner tile, effectively stuck until they jumped or moved in the opposite direction. That's still a huge improvement over falling through the tile completely (which was the original behavior), but it was annoying and needed addressing.

You can see the bug in the video below. Just fast forward to the 1:07 mark and watch the Angel in the lower right corner.



It actually took me a long time to come up with a fix for this, even though the fix now seems simple and obvious. Without reiterating all of the details of the collision system: I essentially cast rays in the directions the player is moving. If the player is moving left, I cast evenly spaced horizontal rays from the box collider (4 in this game's case). To cover the corners of the box collider, I use a margin variable to cast ever so slightly outside of the box collider's bounds; refer to my previous posts on this for further detail.

This margin was causing the snagging issue: while the player's box collider wasn't actually colliding with the tile (and it doesn't appear to be in-game either), a collision was still being detected due to the margin. By itself, this isn't too bad, but we'd also get a y-axis collision detected, and this, along with an x-axis collision, would cause the snagging/jittering effect.

The ideal solution is to reduce the margin of the raycasts so that we're not casting outside of the box collider, while still guarding against corner collisions. Thus, we introduce diagonal raycasts from the corners of the box collider!

Here is the new code in all its glory.
Note the Move method and when we perform the diagonal raycasts. We only want to perform them when the player is moving through the air, which we check by confirming the player is moving on both the x and y axes and is neither in a side collision nor on the ground. We then perform a simple raycast (always with the origin at the center of the collider) in the direction the player is moving. When a corner is hit, we simply stop the x-axis movement.
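
To illustrate, here's a minimal sketch of what that corner check can look like. Names like deltaX, deltaY, and collisionMask are stand-ins I'm using here, not necessarily what's in the actual Gist:

// Only check corners while airborne and moving diagonally.
if (deltaX != 0f && deltaY != 0f && !sideCollision && !grounded)
{
    // Always cast from the center of the box collider.
    Vector2 origin = boxCollider.bounds.center;
    Vector2 direction = new Vector2(Mathf.Sign(deltaX), Mathf.Sign(deltaY)).normalized;

    // Reach from the center to just past the corner, plus this frame's movement.
    float distance = new Vector2(boxCollider.bounds.extents.x + Mathf.Abs(deltaX),
                                 boxCollider.bounds.extents.y + Mathf.Abs(deltaY)).magnitude;

    RaycastHit2D hit = Physics2D.Raycast(origin, direction, distance, collisionMask);
    if (hit.collider != null)
    {
        deltaX = 0f; // we clipped a corner, so stop x-axis movement
    }
}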

A simple solution to a problem that was haunting me for a while.



Wednesday, October 15, 2014

Duke Nukem 3D - Game Tutorial Through Level Design

Some of the best games use level design to teach the player the mechanics of the game and, ultimately, how to play, instead of relying on the dreaded tutorial that forces the player through it before actually getting to the game. One of the classic examples of this is World 1-1 in the original Super Mario Bros. The intro to that level has been analyzed to death, so I'm not going to beat that dead horse. Instead, I want to look at one of my favorite games of all time, Duke Nukem 3D, and how the mechanics of the game are taught to the player through the level design of Hollywood Holocaust.

Before we dive in, we need to remember that Duke Nukem 3D was released almost 19 years ago (damn, we're all getting old :( ). The first-person shooter genre was just being born out of prior games such as Wolfenstein 3D and Doom. Duke Nukem 3D was, at the time, a large leap forward for the genre, offering true 3D play, letting players traverse the Y-axis through jumping and jetpacking, along with incredibly expansive, detailed, interactive levels. Duke3d needed to let the player know this wasn't Doom they were playing!

We're going to walk through just the first area of Hollywood Holocaust, and how the level design is used to teach the player the new mechanics available to them, whether they're a player coming from Doom or one new to the FPS genre entirely.



The game starts with Duke jumping out of his ride (damn those alien bastards!). Immediately, Duke is airborne. He doesn't start grounded; gravity pulls him down the Y-axis. This immediately tells the player that there's a whole new axis of gameplay available: you will not be zipping around on just the X and Z axes. This is further emphasized by the fact that you land on a caged-in rooftop. There's nowhere to go but down!

The player is left to roam the enclosed rooftop. The rooftop is seemingly bare at first, but rewards the player for exploring beyond the obvious path with some additional ammo hidden behind the large crate. Exploration and hidden areas are a large part of Duke3d's gameplay, and this is a subtle yet effective way of communicating that to the player.



Next, the player will come across a large vent fan, taped off, with some explosive barrels conveniently placed next to it. The game literally cannot continue until the player figures out the core mechanic of the game: shooting. Not only is the mechanic of shooting being taught, but also the mechanic of aiming at your target. This is all done at a leisurely, comfortable pace for the player. Imagine if there were an enemy guarding the air vent. For a player new to the genre (and back in 1996, it was very common for someone playing this game to have never played an FPS before, not even Doom), it would have been very overwhelming and probably a guaranteed player death.

Once the player figures out aiming and shooting, they're also taught another core mechanic of the game: puzzle solving. Solving little environment-based puzzles will be common going forward, so the player needs to learn to be aware of their surroundings and to understand that those surroundings are interactive, and that interactivity will be key to success.

It's fascinating to me that the core of the game is taught to the player in so little time, with such seemingly simple level design.


Thursday, August 28, 2014

Using Jenkins with Unity

Following up on my last post, where I used a batch script to automate Unity builds, I decided to take it a step further and integrate Jenkins, the popular CI software, into the process.

With Jenkins, I can have it poll the Demons with Shotguns git repository and detect when changes are made, at which point it performs a Unity build (building the Windows target) and archives the build artifacts.

What's great about this is that I can clearly see which committed changes relate to which build, helping me identify when, and more importantly where, a new bug was introduced. 


I currently have it set to keep a backlog of 30 builds, but you can potentially keep an infinite number of builds (limited by your hard drive space, of course).

So how do you configure this? Assuming you already have Jenkins (and the required source control plugin) installed, create a new job as a free-style software project. In the job configuration page, set the max # of builds to keep (leave it blank if you don't want a limit). In the source code management section, set it up according to whichever source control software you use (I'm using git). How you configure this section will vary greatly depending on your source control software.

Under build triggers, select Poll SCM and set the appropriate cron syntax based on how frequently you want to poll the source repository for changes (for example, H/5 * * * * polls roughly every five minutes).

Under the build section, add an Execute Windows batch command build step. You then script up which targets you want to build (you can use the script in my previous post as a template).

Under post-build actions, add Archive the artifacts. In the files to archive text box, set up the fileset masks you want. For a standalone build it would look like "game.exe,game_Data/**".

That's it! I do know there's a Unity plugin for Jenkins that'll help run the BuildPipeline without having to write a batch script, but I never had success getting it running, so I just went this route.

Automating Unity Builds

I wanted a way to automate building Unity projects from the command line, but also to commit each build to a git repo so I can keep track of the builds I make (in case something breaks, I can go back to previous builds and see where it might have broken). This is my poor man's CI process.

Here's the script in all its glory.

As you can see, it's nothing special. Simply plug in the path to your project and where you want the exe to be output. Adding other build targets is trivial as well. Best part: it doesn't require the Pro version of Unity at all!
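
For reference, here's a minimal sketch of the editor-side method a batch script like this can call through Unity's -executeMethod flag. It's illustrative, not my exact script; the scene path and output location are placeholders:

// Editor/BuildScript.cs
// A batch file can invoke this with something like (one line):
//   "C:\Program Files\Unity\Editor\Unity.exe" -quit -batchmode
//     -projectPath "C:\Projects\MyGame" -executeMethod BuildScript.BuildWindows
// (Unity also has a built-in -buildWindowsPlayer <path> switch if you'd
// rather skip the editor script entirely.)
using UnityEditor;

public static class BuildScript
{
    public static void BuildWindows()
    {
        // Scenes to include in the build, in order.
        string[] scenes = { "Assets/Scenes/Main.unity" };
        BuildPipeline.BuildPlayer(scenes, "Builds/game.exe",
            BuildTarget.StandaloneWindows, BuildOptions.None);
    }
}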

This solution is temporary. I'm going to wrap this in Jenkins so that it'll detect git commits then build and archive the game's .exe. More on that soon!

Monday, July 7, 2014

How to program independent games by Jonathan Blow - A TL;DR

On April 1, 2011, Jonathan Blow gave a presentation at UC Berkeley's Computer Science Undergraduate Association entitled How to program independent games. Thankfully, it's available on YouTube. This talk is insightful not just for the indie game dev, but for any software developer in general. Please take the time to view it.


This is a talk about productivity, about getting things done, more than anything. Programmers who earn a computer science degree are often taught how to optimize code, but that optimization generally comes at a great expense to productivity. Indie game devs have to wear several hats (if not all of the hats), so time is too precious to be wasted.

Blow goes through several examples to illustrate this. Some of them I think are very weak arguments given modern APIs (I'm referring to his hash table vs. arrays argument), but the majority are spot on. One that stood out in particular is the urge to make everything generic when it may not be necessary. More often than not, a method you're writing will be a one-off, only used to perform some type of action on one type of object, so time is absolutely wasted trying to make that method work on an entire hierarchy of objects.

The biggest takeaway is simply this: the simplest solution to implement is almost always the correct one. Get it done. Move on. Fix it or optimize it only when you absolutely need to.

Monday, April 21, 2014

Creating a flexible audio system in Unity

Unity's audio system isn't without its disadvantages. One of its major issues is that a single AudioSource can only play one audio clip at a time. You may say "well, that kind of makes sense", but why not fire a background thread for each play request?

Playing only one audio clip at a time becomes a problem when you have an AudioSource attached to a prefab, your Player for example, and multiple audio clips you'd like played in succession. Your player jumps, so you play a jumping sound effect; but before that clip finishes, they get hit by something, so you swap in a player-hit sound effect. With a single AudioSource, that cuts off the currently playing jumping sound. It sounds bad, jarring and confusing to the player. The most obvious solution is to attach a new AudioSource for every audio clip you'd like to play, but that gets nightmarish if you end up having a lot of possible audio clips.

My solution has been to create a central controller that listens for game events, spawns and pools AudioSource game objects in the scene at a specified location (in case the audio clip is a 3D sound), loads each with a specified AudioClip, plays it, and returns the instance to the object pool for later use. This lets you play multiple audio clips at a single location, at the same time, without them cutting each other off. You also get the benefit of keeping your game prefabs clean and tidy.

I'm always reluctant to share my code because I use StrangeIoC, which not everyone uses (though you probably should!), and the code structure may seem alien; but a keen developer should be able to adapt the solution to their needs. Let's go through a working example.

I've attempted to comment this Gist well enough that people who aren't familiar with StrangeIoC can still follow along. The basic execution is:


  1. Player is hit, dispatch a request to play the "player is hit" sound effect
  2. This is a fatality event, dispatch a request to also play the "player fatality" sound effect
  3. PlaySoundFxCommand receives both events
  4. For each separate event, attempt to obtain an audio source prefab from the object pool. If one is not available, it will be instantiated
  5. If the _soundFxs Dictionary doesn't already have a reference to the requested AudioClip, load it via Resources.Load and store reference for future calls
  6. Setup the AudioSource (assign AudioClip to play, position, etc)
  7. Play the AudioSource
  8. Start a Coroutine to iterate every frame while the AudioClip is still playing
  9. Once the AudioClip is done, deactivate the AudioSource and return it back to the object pool
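
Stripped of the StrangeIoC plumbing, steps 4 through 9 might look something like this sketch. SoundFxPlayer and the AudioSourcePool calls are illustrative names, not the actual classes from the Gist:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class SoundFxPlayer : MonoBehaviour
{
    // Step 5's cache: clip name -> loaded AudioClip.
    private readonly Dictionary<string, AudioClip> _soundFxs = new Dictionary<string, AudioClip>();

    public void PlaySoundFx(string clipName, Vector3 position)
    {
        // Step 4: grab a pooled AudioSource (pool implementation omitted).
        AudioSource source = AudioSourcePool.Get();
        source.gameObject.SetActive(true);

        // Step 5: load the clip via Resources only on the first request.
        AudioClip clip;
        if (!_soundFxs.TryGetValue(clipName, out clip))
        {
            clip = Resources.Load<AudioClip>("SoundFx/" + clipName);
            _soundFxs[clipName] = clip;
        }

        // Steps 6 and 7: position the source (for 3D sounds), assign the clip, play.
        source.transform.position = position;
        source.clip = clip;
        source.Play();

        // Steps 8 and 9: return the source to the pool once the clip finishes.
        StartCoroutine(ReturnWhenDone(source));
    }

    private IEnumerator ReturnWhenDone(AudioSource source)
    {
        while (source.isPlaying)
            yield return null;

        source.gameObject.SetActive(false);
        AudioSourcePool.Return(source);
    }
}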

With this system, you never have to worry about audio clips cutting each other off, everything is centralized, and you don't have to manually manage the different possible AudioSources. However, you do need to keep an eye on memory usage, as we are pooling and holding references to a lot of resources, which may hinder how well this system scales. A possible improvement is to limit the number of AudioClips we keep in the _soundFxs Dictionary, and when that limit is reached, remove an entry. You could go as far as figuring out which sound effects are least used, and remove those first.


Thursday, March 27, 2014

Reducing Your Game's Scale: Save It For The Sequel!

It seemingly happens to almost every designer, novice or veteran, AAA and indie. You have that grand game idea. When you close your eyes, you can see it being played before you. The expansive levels, the fancy weapons, the beautiful graphics and audio, all of the little polished details. Then you snap back to reality and (hopefully) quickly realize the scale of your dream game is simply too large.

Time to start cutting!

This may seem terrible and disappointing, but just because you're cutting features and making compromises doesn't mean the initial vision is completely lost. Firstly, you're helping to ensure that you actually get to release your game. This is the ultimate goal, and everything should be done to ensure it (oh, but do make sure you provide a great experience!). Take the ideas you're cutting and write them down somewhere, so they're not lost forever.

Save them for the sequel!

Obviously, Overtime had a much grander scale when I first started designing it compared to what it's been boiled down to now. Even after I write this, I'm sure more things will be cut. Being a solo developer working on his first actual commercial release, I need the scale to be small to ensure I actually release something. However, everything that I am cutting I do want to see come to fruition, so it's being saved for a sequel, which I hope I get to work on, provided the first title is received well enough to warrant it.

So don't fret, don't fear! Cut, cut, cut. You'll see an amazing thing happen, something you probably couldn't imagine happening: your game becomes better for it, and you actually increase your chances of releasing. Not a bad trade-off at all.

Friday, March 21, 2014

The Importance of Player Feedback & Subgoals: Playtest Results - 03/14/2014

I had a quick playtest session with some close friends the other day. I've been making good progress on the game (which is yet to be titled, but let's refer to it by its working name, Overtime). I'd just added vertical-axis shooting, which I wanted to playtest, and I also needed to get some real-world QA done, as testing a local-multiplayer-only game is proving difficult.

Here is a clip from one of the recordings I took.


Importance of Feedback

My biggest initial takeaway is the importance of player feedback for even the smallest actions. From jumping to obtaining a frag, there needs to be feedback provided. The player not only needs feedback to confirm their actions executed, but to be rewarded for the things they do, making them worth doing again. Feedback can be provided in numerous ways, from elegant sprite animations to subtle or acute particle effects. A small, brief dramatic sequence on a frag can make the frag all the more rewarding, thrilling and special, as so awesomely done in Samurai Gunn.
A player swats back another player's bullet for a kill in Samurai Gunn.
I quickly added some rudimentary blood-splatter particles the day before the playtest to help provide feedback, but I feel it wasn't enough and lost its novelty very quickly. I've since tweaked the blood splatter to project based on the direction of the fatal projectile, made the tiles around the player's corpse become bloody, added flying gibs, and even created a dramatic, John Woo-like slow-motion effect when the player is killed. All of these small layers of feedback will hopefully make obtaining player frags more rewarding beyond just increasing a frag score. Simply firing projectiles at other players may be fun at first, but if the overall presentation of those actions is bland and boring, players won't be interested in playing for long.

Fragging a demon with a shotgun in Overtime


Importance of Subgoals

Currently, Overtime has only one goal: kill all other players. There is very little else the player needs to focus on or worry about. This is a problem, as the game gets boring quickly. Once you've killed the other players a handful of times, you've experienced all there is to offer and lose interest in playing any further.

It could be argued that platforming (successfully negotiating jumps to make your desired mark) and ammo management (collecting ammo packs to ensure you always have ammo) are also subgoals, but I feel they are too subtle. This may just be the nature of a simple deathmatch mode in general; I do plan to add other game modes, which I'm sure will add more exciting subgoals for the player.

Samurai Gunn has environmental hazards and destructibles. These give players more subgoals: avoid accidental deaths, and shape your environment to your advantage (you can destroy certain tiles to the point where they become hazards). Players in Samurai Gunn can also engage in defensive actions, from mini sword fights to parry player attacks to swatting back player bullets. This not only gives players a grander sense of control over their ultimate fate, but an entirely different set of actions and required skills.

This was a great round of playtesting and really highlighted serious gaps in Overtime's design, which I'll need to address. The above GIF of Samurai Gunn does such an incredible job of summing up the entire game, its mechanics, the level of polish and feedback, goals and dimensions in just under a second of gameplay. If you need longer than a second to capture the total essence of your game, you should step back and start rethinking your design.

Friday, March 7, 2014

Addendum: 2D Platformer Collision Detection in Unity

NOTE: Please see Yet Another Addendum to this solution for important bug fixes. 


This is an addendum to my original post, 2D Platformer Collision Detection in Unity. The solution explained in that post was a great "start", but ultimately had problems which I'd like to go over and correct in this post, so that I don't lead anyone too astray!

The Problem

To summarize, entities could fall through collision layers under the right conditions. The most notable case was corner collisions at low framerates (< 30 FPS).


In the picture above, the white lines represent the raycasts. The green box is the box collider. As you can see, that small corner gap is enough to cause the player to slip through the collision layer. As noted before, this issue was highly prevalent when the frame rate was below 30 FPS (I capped the frame rate using a small script to account for people with slower hardware). For the more astute Unity users, I can hear the screams of "FixedUpdate! FixedUpdate!" Just trust me that using FixedUpdate had no effect on this issue. I tried it in several different manners, and all resulted in the same problem.
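
As an aside, capping the frame rate in Unity can be as simple as the following sketch (not necessarily the exact script I used):

using UnityEngine;

public class FrameRateCap : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.vSyncCount = 0;    // vsync would override the target frame rate
        Application.targetFrameRate = 30;  // simulate slower hardware
    }
}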

The Solution

Originally, the rays were being cast from the edges of the box collider instead of from within it. Why is this important? I'm going to borrow a diagram from Yoann Pignole's Gamasutra article.

Source: Gamasutra: The hobbyist coder #1: 2D platformer controller by Yoann Pignole
This makes sense, but why would I only have an issue at slow frame rates? To be honest, I still don't know. I'm guessing it had to do with the rate at which raycasts were executed (which you would think calling them within FixedUpdate would fix, but it didn't).

Ultimately, I rewrote the system completely, ensuring the rays were cast from the center of the box collider and, when resolving collisions, moving the entity to the point of collision. This completely removed all the issues I noticed.

This provides a much cleaner, more robust solution. I can also more easily adjust the number of rays I cast, as well as cast the rays outside of a margin if need be.
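
In sketch form, a single ray of the horizontal pass now looks something like this; deltaX (this frame's horizontal movement) and collisionMask are stand-in names:

// Cast from the collider's center, out to its edge plus this frame's movement.
Vector2 origin = boxCollider.bounds.center;
float direction = Mathf.Sign(deltaX);
float distance = boxCollider.bounds.extents.x + Mathf.Abs(deltaX);

RaycastHit2D hit = Physics2D.Raycast(origin, new Vector2(direction, 0f), distance, collisionMask);
if (hit.collider != null)
{
    // Resolve by moving the entity flush to the point of collision.
    deltaX = (hit.distance - boxCollider.bounds.extents.x) * direction;
}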


Tuesday, January 14, 2014

Single Camera System For Four Players

One design decision I've been facing with Overtime is "how big should the game maps be (single screen or multi-screen)?" Small maps allow for focused, more intense battles while limiting game mode possibilities; larger maps allow for more gameplay and game mode variety. Small, single-screen maps require only one camera to capture all players and the playing environment. Larger, multi-screen maps require multiple cameras (one for each player) that can follow a target. Since Overtime is a local-multiplayer-only game, that also means split-screen cameras.

Why not have both? It's pretty trivial to anchor the camera to a single spot on small maps while allowing split-screen cameras on larger ones. After some playtesting, though, I found the split-screen cameras pretty annoying due to the small screen real estate each player gets, but I didn't want to scrap the idea of big maps entirely. So, is it possible to create a single camera that can follow up to four different targets? I soon realized that the type of camera I needed is a camera in the style of a fighting game.

Fighting games, such as Super Smash Bros. or even wrestling games, feature single screen cameras that track multiple targets, zooming in and out as the targets get closer and farther away from each other, respectively. This is done by some vector math magic (it's not really magic, as you'll see).

So let's go through the requirements of the camera system:
  1. Follow up to four targets, keeping them all within screen view at all times.
  2. Keep the camera focused on the relative center of all targets.
  3. As targets move farther apart, zoom the camera out an appropriate distance.
  4. As targets move closer together, zoom the camera in, clamping the zoom factor to a specified amount.
From that, we can immediately deduce that we need to know the following:
  1. Based on all targets' current positions, what are the minimum and maximum positions?
  2. What is the center point between the minimum and maximum positions?
  3. How far do we need to zoom to keep all targets within view?
Maybe you've already realized that it's impossible for a camera to literally follow multiple targets; the camera must always be fixed on one point. Obtaining the minimum and maximum positions will allow us to find the center point between them, which we will use as the single point the camera follows. By following this point, we know the camera stays at the relative center of all targets. Here is the pseudocode for obtaining the minimum and maximum positions:

List xPositions;
List yPositions;
foreach target {
    xPositions.Add(target.x);
    yPositions.Add(target.y);
}
maxX = Max(xPositions);
maxY = Max(yPositions);
minX = Min(xPositions);
minY = Min(yPositions);

minPosition = Vector2(minX, minY)
maxPosition = Vector2(maxX, maxY)

We obtain the x and y coordinates of all targets and store them in lists. We then find the maximum (x, y) and minimum (x, y) values to give us our final minimum and maximum positions (Unity has Max and Min methods in the Mathf class, but you could easily implement your own if needed).

Let's add some diagrams to help visualize this better (the scale is all wrong, I know, but bear with me!).


The smiley faces represent our three players, and their positions. Following the above pseudocode, we come up with a min of (8, 7) and a max of (31, 14). This gives us the outermost coordinates of the area our players are in.

Finding the center of these two positions is a trivial step. Simply add the min and max vectors and multiply by 0.5 (favor multiplication over division for performance reasons).


((8, 7) + (31, 14)) * 0.5 = (19.5, 10.5)

Great! We now have the target position that our camera will use to follow. This position will update as our players move, ensuring we're always at the relative center of them. But we're not done just yet. We need to determine the zoom factor.

Quick side note about the zoom factor. When developing a 2D game, you normally use 2D vectors (as we've been doing so far) and an orthographic camera, which effectively ignores the z-axis (in Unity, not entirely: depth is still used, but objects don't change size as their z position changes). If you were developing a 3D game, you'd be using 3D vectors and a perspective camera, which does have depth according to its z-axis position. Determining the zoom factor for 2D and 3D is quite similar; only how you apply the value differs.

We've already determined that the X and Y coordinates of our camera need to be (19.5, 10.5), as that's the relative center of all targets on the X and Y axes. What we need now is a vector that's perpendicular to the position we just calculated. That's where the cross product comes in. The more astute reader may be screaming "you can't take the cross product of 2D vectors!" right now. Yes, you're absolutely correct, but bear with me.

The cross product of two vectors gives us a vector that's perpendicular (at a right angle) to both.
Source: Wikipedia


The diagram above shows the cross product of the red and blue vectors as the red vector changes direction, with the resulting perpendicular green vector. Notice how the magnitude of the green vector changes, getting longer and shorter based on the magnitude of the red vector. This is exactly what we need: a vector that's perpendicular to our camera's (X, Y) target position, whose magnitude changes appropriately with it.

As mentioned before, you can't take the cross product of 2D vectors, so we'll pad our 2D vectors with a z coordinate of 0.

(19.5, 10.5, 0) x (0, 1, 0) = (0, 0, 19.5)

x is the symbol for the cross product. We use a normalized up vector as our second argument so the resulting vector's magnitude isn't scaled by it. Using the resulting Z value of 19.5, we can now set the zoom factor. Since orthographic cameras don't technically zoom in the same sense as a perspective camera, we instead change the orthographic size, which provides the same effect.

Now let's assume that the perspective camera of your 3D game needs to act very much like a 2D platformer's (always facing the side, never directly above or below). Instead of altering the orthographic size (because that doesn't make sense for a perspective camera ;) ), we use the result of the cross product to set the camera's z position directly. This moves the perspective camera accordingly, giving us our desired zoom effect.
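
Putting the orthographic case together, the core of the camera update might look like this sketch; minZoom and zoomSpeed are tuning knobs I'm assuming here, not values from the game:

// Follow the relative center of all targets.
Vector2 center = (minPosition + maxPosition) * 0.5f;
transform.position = new Vector3(center.x, center.y, transform.position.z);

// Cross the padded center with a normalized up vector; the magnitude drives the zoom.
Vector3 cross = Vector3.Cross(new Vector3(center.x, center.y, 0f), Vector3.up);
float targetZoom = Mathf.Max(minZoom, cross.magnitude);  // clamp so we never zoom in too far

// Orthographic camera: ease the size toward the target zoom.
// (A perspective camera would ease its z position toward the cross product's value instead.)
Camera cam = GetComponent<Camera>();
cam.orthographicSize = Mathf.Lerp(cam.orthographicSize, targetZoom, Time.deltaTime * zoomSpeed);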

Here's a video demonstrating the camera movement for two players.