Thursday, March 27, 2014

Reducing Your Game's Scale: Save It For The Sequel!

It seemingly happens to almost every designer, novice or veteran, AAA or indie. You have that grand game idea. When you close your eyes, you can see it being played before you: the expansive levels, the fancy weapons, the beautiful graphics and audio, all of the little polished details. Then you snap back to reality and (hopefully) quickly realize the scale of your dream game is simply too large.

Time to start cutting!

This may seem terrible and disappointing, but just because you're cutting features and making compromises doesn't mean the initial vision is completely lost. First, you're helping to ensure that you actually get to release your game. That is the ultimate goal, and everything should be done to serve it (oh, but do make sure you provide a great experience!). Take the ideas you're cutting and write them down somewhere; they are not lost forever.

Save them for the sequel!

Obviously, Overtime had a much grander scale when I first started designing it compared to what it's been boiled down to now. Even after I write this, I'm sure more things will be cut. As a solo developer working on my first actual commercial release, I need to keep the scale small to ensure I actually release something. However, I do want to see everything I'm cutting come to fruition, so it's being saved for a sequel, which I hope to work on, provided the first title is received well enough to warrant it.

So don't fret, don't fear! Cut, cut, cut. You'll see an amazing thing happen, something you probably couldn't have imagined: your game becomes better for it, and you actually increase your chances of releasing. Not a bad trade-off at all.

Friday, March 21, 2014

The Importance of Player Feedback & Subgoals: Playtest Results - 03/14/2014

I had a quick playtest session with some close friends the other day. I've been making good progress on the game (which has yet to be titled, but let's refer to it by its working name, Overtime). I'd just added vertical-axis shooting, which I wanted to playtest, and I also needed to get some real-world QA done, as testing a local-multiplayer-only game is proving difficult.

Here is a clip from one of the recordings I took.


Importance of Feedback

My biggest initial takeaway is the importance of player feedback for even the smallest actions. From jumping to obtaining a frag, feedback needs to be provided. The player not only needs feedback to confirm his actions were executed, but also to be rewarded for the things he does, making them worth doing again. Feedback can be provided in numerous ways, from elegant sprite animations to subtle or pronounced particle effects. A small, brief dramatic sequence accompanying a frag can make it all the more rewarding, thrilling and special, as so awesomely done in Samurai Gunn.
A player swats back another player's bullet for a kill in Samurai Gunn.
I quickly added some rudimentary blood-splatter particles the day before the playtest to help provide feedback, but I feel it wasn't enough and lost its novelty very quickly. I've since tweaked the blood splatter to project based on the direction of the fatal projectile, made the tiles around the player's corpse become bloody, added flying gibs and even created a dramatic, John Woo-style slow-motion effect when a player is killed. All of these small layers of feedback will hopefully make obtaining frags rewarding beyond just increasing a frag score. Simply firing projectiles at other players may be fun at first, but if the overall presentation of those actions is bland and boring, players won't be interested in playing for long.

Fragging a demon with a shotgun in Overtime


Importance of Subgoals

Currently, Overtime has only one goal: kill all other players. There is very little else the player needs to focus on or worry about. This is a problem, as the game gets boring quickly. Once you've killed the other players a handful of times, you've experienced all there is to offer and lose interest in playing any further.

It could be argued that platforming (successfully negotiating jumps to land where you intend) and ammo management (collecting ammo packs to ensure you always have ammo) are also subgoals, but I feel they are too subtle. This may just be the nature of a simple deathmatch mode in general; I do plan to add other game modes, which I'm sure will introduce more exciting subgoals for the player.

Samurai Gunn has environmental hazards and destructibles. These give players more subgoals: avoiding accidental deaths and shaping the environment to your advantage (you can destroy certain tiles to the point where they become hazards). Players in Samurai Gunn can also take defensive actions, engaging in mini sword fights to parry attacks and swatting back bullets. This not only gives players a grander sense of control over their ultimate fate, but an entirely different set of actions and required skills.

This was a great round of playtesting and really highlighted serious gaps in Overtime's design, which I'll need to address. The above GIF of Samurai Gunn does such an incredible job of summing up the entire game, its mechanics, the level of polish and feedback, goals and dimensions in just under a second of gameplay. If you need longer than a second to capture the total essence of your game, you should step back and start rethinking your design.

Friday, March 7, 2014

Addendum: 2D Platformer Collision Detection in Unity

NOTE: Please see Yet Another Addendum to this solution for important bug fixes. 


This is an addendum to my original post, 2D Platformer Collision Detection in Unity. The solution explained in that post was a great "start", but ultimately had problems which I'd like to go over and correct in this post, so that I don't lead anyone too astray!

The Problem

To summarize, entities could fall through collision layers under the right conditions. The most notable case was corner collisions at low frame rates (below 30 FPS).


In the picture above, the white lines represent the raycasts and the green box is the box collider. As you can see, that small corner gap is enough to cause the player to slip through the collision layer. As noted before, this issue was highly prevalent when the frame rate was below 30 FPS (I capped the frame rate with a small script to account for people with slower hardware). For the more astute Unity users, I can hear the screams of "FixedUpdate! FixedUpdate!" Just trust me: using FixedUpdate had no effect on this issue. I tried it in several different ways, and all attempts resulted in the same problem.
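For reference, capping the frame rate in Unity only takes a couple of lines. This is just a minimal sketch of the idea, not the actual script I used (the class name and field are illustrative):

using UnityEngine;

public class FrameRateCap : MonoBehaviour
{
    public int targetFrameRate = 30; // cap used to simulate slower hardware

    void Awake()
    {
        // vSync must be disabled for Application.targetFrameRate to take effect.
        QualitySettings.vSyncCount = 0;
        Application.targetFrameRate = targetFrameRate;
    }
}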

The Solution

Originally, the rays were being cast from the edges of the box collider instead of from within it. Why is this important? I'm going to borrow a diagram from Yoann Pignole's Gamasutra article.

Source: Gamasutra: The hobbyist coder #1: 2D platformer controller by Yoann Pignole
This makes sense, but why would I only have an issue at slow frame rates? To be honest, I still don't know. I'm guessing it had to do with the rate at which raycasts were executed (which you would think calling them from FixedUpdate would fix, but it didn't).

Ultimately, I rewrote the system completely, ensuring the rays are cast from the center of the box collider and, when resolving collisions, moving the entity to the point of collision. This completely removed every issue I'd noticed.

This provides a much cleaner, more robust solution. I can also more easily adjust the number of rays I cast, as well as cast the rays outside of a margin if need be.
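Here's a rough sketch of what the vertical pass looks like with center-origin rays. The class, field names and ray spacing are illustrative, not the exact code from the rewritten system:

using UnityEngine;

public class CenterRaycastSketch : MonoBehaviour
{
    // Illustrative knobs; the real class exposes similar settings.
    public int rayCount = 3;
    public LayerMask collisionMask;

    private BoxCollider boxCollider;

    void Awake() { boxCollider = GetComponent<BoxCollider>(); }

    // Vertical pass: rays start at the collider's center, and the entity only moves
    // as far as the nearest hit allows (i.e. it ends up at the point of collision).
    void VerticalCollisions(ref Vector3 velocity)
    {
        float directionY = Mathf.Sign(velocity.y);
        float halfHeight = boxCollider.bounds.extents.y;
        float rayLength = halfHeight + Mathf.Abs(velocity.y);

        for (int i = 0; i < rayCount; i++)
        {
            // Spread ray origins across the collider width, all starting on its center line.
            float t = rayCount > 1 ? (float)i / (rayCount - 1) : 0.5f;
            Vector3 origin = boxCollider.bounds.center;
            origin.x = Mathf.Lerp(boxCollider.bounds.min.x, boxCollider.bounds.max.x, t);

            RaycastHit hit;
            if (Physics.Raycast(origin, Vector3.up * directionY, out hit, rayLength, collisionMask))
            {
                // Only travel as far as the hit allows, measured from the collider's center.
                velocity.y = (hit.distance - halfHeight) * directionY;
                rayLength = hit.distance; // keep the closest hit so farther rays can't override it
            }
        }
    }
}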

Tuesday, January 14, 2014

Single Camera System For Four Players

One design decision I've been facing with Overtime is "how big should the game maps be (single screen or multi-screen)?" Small maps allow for focused, more intense battles but limit game mode possibilities, while larger maps allow for more gameplay and game mode variety. Small, single-screen maps would require only one camera to capture all players and the playing environment. Larger, multi-screen maps would require multiple cameras (one for each player) that can follow a target. Since Overtime is a local-multiplayer-only game, this also means split-screen cameras.

Why not have both? It's pretty trivial to anchor the camera to a single spot for small maps while allowing split-screen cameras for larger maps. After some playtesting, though, I found the split-screen cameras pretty annoying due to the small amount of screen real estate they gave each player, and I didn't want to scrap the idea of big maps entirely. So, is it possible to create a single camera that can follow up to four different targets? I soon realized that what I needed was a camera in the style of a fighting game.

Fighting games, such as Super Smash Bros., and even wrestling games feature single-screen cameras that track multiple targets, zooming out as the targets move apart and zooming in as they move closer together. This is done with some vector math magic (it's not really magic, as you'll see).

So let's go through the requirements of the camera system:
  1. Follow up to four targets, keeping them all within view at all times.
  2. The camera should always be focused on the relative center of all targets.
  3. As targets move farther away from each other, zoom the camera out an appropriate distance.
  4. As targets move closer to each other, zoom the camera in, clamping the zoom factor to a specified amount.
From that, we can immediately deduce that we need to know the following things:
  1. Based on all targets' current positions, what are the minimum and maximum positions?
  2. What is the center point between the minimum and maximum positions?
  3. How far do we need to zoom to keep all targets within view?
Maybe you've already realized that it's impossible for a camera to literally follow multiple targets; the camera must always be fixed on a single point. Obtaining the minimum and maximum positions will let us find the center point between them, which we'll use as the single point the camera follows. By following this point, we know the camera will stay at the relative center of all targets. Here is the pseudocode for obtaining the minimum and maximum positions:

List xPositions;
List yPositions;
foreach target {
    xPositions.Add(target.x);
    yPositions.Add(target.y);
}
maxX = Max(xPositions);
maxY = Max(yPositions);
minX = Min(xPositions);
minY = Min(yPositions);

minPosition = Vector2(minX, minY);
maxPosition = Vector2(maxX, maxY);

We obtain the x and y coordinates of all targets and store them in lists. We then find the maximum and minimum x and y values to give us our final minimum and maximum positions (Unity has Max and Min methods in the Mathf class, but you could easily implement your own if needed).

Let's add some diagrams to help visualize this better (the scale is all wrong, I know, but bear with me!).


The smiley faces represent our three players, and their positions. Following the above pseudocode, we come up with a min of (8, 7) and a max of (31, 14). This gives us the outermost coordinates of the area our players are in.

Finding the center of these two positions is trivial: simply add the min and max vectors and multiply by 0.5 (favor multiplication over division for performance reasons).


((8, 7) + (31, 14)) * 0.5 = (19.5, 10.5)

Great! We now have the target position that our camera will use to follow. This position will update as our players move, ensuring we're always at the relative center of them. But we're not done just yet. We need to determine the zoom factor.

A quick side note about the zoom factor: when developing a 2D game, you normally use 2D vectors (as we've been doing so far) and an orthographic camera, which effectively ignores the z-axis (in Unity it isn't truly ignored, but depth is used differently, since objects don't change size as their z position changes). If you were developing a 3D game, you'd be using 3D vectors and a perspective camera, which does have depth according to its z-axis position. Determining the zoom factor is quite similar in both cases; only how you apply the value differs.

We've already determined that the x and y coordinates of our camera need to be (19.5, 10.5), as that's the relative center of all targets on the x and y axes. What we need now is a vector that's perpendicular to the position we calculated above. That's where the cross product comes in. The more astute reader may be screaming "you can't take the cross product of 2D vectors!" right now. Yes, you're absolutely correct, but bear with me.

The cross product of two vectors gives us a vector that's perpendicular (at a right angle) to both.
Source: Wikipedia


The diagram above shows the cross product of the red and blue vectors as the red vector changes direction, with the resulting perpendicular green vector. Notice how the magnitude of the green vector changes, getting longer and shorter as the angle between the red and blue vectors changes. This is exactly what we need: a vector that's perpendicular to our camera's (x, y) target position, whose magnitude changes appropriately.

As mentioned before, you can't perform the cross product of 2D vectors. So instead, we'll pad our 2D vectors with a z coordinate of 0.

(19.5, 10.5, 0) x (0, 1, 0) = (0, 0, 19.5)

x is the symbol for the cross product. We use a normalized up vector as our second argument so that the resulting vector has the maximum magnitude. Using the z value of 19.5, we can now set the zoom factor. Since orthographic cameras don't technically zoom in the same sense as perspective cameras, we instead change the orthographic size, which provides the same effect.

Now let's assume that the perspective camera of your 3D game needs to act very much like a 2D platformer's (always facing the side, never looking directly from above or below). Instead of altering the orthographic size (because that doesn't make sense for a perspective camera ;) ), we use the result of the cross product to set the camera's z position directly. This moves the perspective camera accordingly, giving us the desired zoom effect.
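Putting it all together for the orthographic case, here's a minimal sketch of what the camera script could look like. It follows the approach described above; the clamping, padding and field names are illustrative additions rather than part of the original system:

using System.Collections.Generic;
using UnityEngine;

public class MultiTargetCamera : MonoBehaviour
{
    public List<Transform> targets;          // up to four player transforms
    public float minOrthographicSize = 5f;   // clamp so we never zoom in too far
    public float zoomPadding = 2f;           // illustrative breathing room around the players

    private Camera cam;

    void Awake() { cam = GetComponent<Camera>(); }

    void LateUpdate()
    {
        if (targets == null || targets.Count == 0) return;

        // 1. Find the minimum and maximum positions across all targets.
        float minX = float.MaxValue, minY = float.MaxValue;
        float maxX = float.MinValue, maxY = float.MinValue;
        foreach (Transform target in targets)
        {
            minX = Mathf.Min(minX, target.position.x);
            minY = Mathf.Min(minY, target.position.y);
            maxX = Mathf.Max(maxX, target.position.x);
            maxY = Mathf.Max(maxY, target.position.y);
        }

        // 2. Follow the center point between the min and max positions.
        Vector2 center = (new Vector2(minX, minY) + new Vector2(maxX, maxY)) * 0.5f;
        transform.position = new Vector3(center.x, center.y, transform.position.z);

        // 3. Cross the z-padded center with a normalized up vector; the z component of the
        //    result drives the zoom (orthographic size here, z position for perspective).
        Vector3 zoom = Vector3.Cross(new Vector3(center.x, center.y, 0f), Vector3.up);
        cam.orthographicSize = Mathf.Max(minOrthographicSize, Mathf.Abs(zoom.z) + zoomPadding);
    }
}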

Here's a video demonstrating the camera movement for two players.


Wednesday, December 18, 2013

Model-View-Presenter architecture for game development: it's not just for enterprise.

I'm an absolute stickler for good architecture. Nothing irritates me more than spaghetti code with deep coupling between classes, god classes and a lack of any kind of modularity. When I begin to develop an application of any type, I immediately start to think about how it's going to be structured. Thankfully, there are tons of battle-proven patterns to choose from in the MV* family. My favorite as of late has been Model-View-Presenter (MVP).

I don't want to go too deeply into what MVP is and how it differs from other MV* patterns (such as the classic MVC), but here's a quick diagram stolen from Wikipedia.


The key takeaways: views are dumb and easily interchangeable; presenters contain the business logic; a presenter ideally updates one view (but can update more); and application state lives in the model objects (which are just as dumb as the views). That's all you really need to know to follow the rest of this post, but please read up on MVP if you're not familiar with it, and understand how it differs from MVC.

When I first began game development, I had a hard time structuring my code. I initially couldn't grok how to apply all of the golden rules of regular GUI development to games. Recently, it started to click. You can apply an MV* pattern to games, very easily in fact, and create a clean code base that's organized, maintainable and easily changed (we all know how volatile a game's design and feature set can be!). So let's talk about how an MVP pattern can be applied to a Unity code base.

I'm not going to walk through a real implementation in this post (there's a good reason why, as you'll see later). This is mostly theory, aside from one rough sketch further down.

Let's say we have a player prefab. Normally, you might just write a bunch of specific scripts that each do one thing and one thing only (hopefully) and attach each script to the prefab. While this does work, I find it chaotic, especially when scripts need to start talking to each other or one script needs its behavior changed slightly for one specific type of prefab. To do things the MVP way instead, we're going to attach two scripts to the prefab, called PlayerView and PlayerPresenter.

PlayerView will represent the View portion of MVP (well, duh!). PlayerView will contain zero game logic. It will strictly be responsible for handling the visual representation of the player, accepting input to pass along to the presenter, and exposing important properties you may want adjustable in the Inspector view of the Unity editor, like health, walking speed, etc. PlayerView will listen for input from the player and pass it along to the view's backing presenter via events, handing the presenter model objects with the necessary data.

PlayerPresenter will represent, can you guess it, the Presenter portion. Earlier I said presenters contain the game logic, and in a lot of cases this is true; however, I'm going to throw another pattern at you (I'M GOING DESIGN PATTERN CRAZY). Instead of putting all of the player's game logic in PlayerPresenter, we're going to make use of the command pattern, or a variation of it. PlayerPresenter will be responsible for creating the necessary model objects (based on data from PlayerView) and sending those model objects to Task objects, which handle the actual game logic.

Model objects are very dumb. They simply encapsulate data to pass around. A bunch of properties, nothing more.

Task objects live to do one thing, and one thing very well. They accept model objects from presenters, do a bunch of work (calculate a player's score or create a projectile to spawn, for example) and, if necessary, send the results back to the presenter to update the view with. This creates extremely modular, reusable game logic that can be accessed from any presenter, and it also lets us keep class hierarchies flat, which is a great thing. We could let the presenters handle the game logic and perform the actual work, and in some cases you may, but that logic isn't easily shared elsewhere, and you risk either creating deep class hierarchies to share it or repeating code.

So let's step back and see how a real example would play out: a player pressing the shoot button to fire a rocket from his rocket launcher.
  1. PlayerView receives a shoot input signal and notifies PlayerPresenter.
  2. PlayerPresenter receives notification of the input and creates a SpawnProjectileModel model object with the current player position, direction and weapon type (rocket launcher in this example) to send to SpawnProjectileTask.
  3. SpawnProjectileTask receives the model object sent from PlayerPresenter and spawns a new rocket prefab with the data provided via the SpawnProjectileModel.
  4. PlayerPresenter receives notification from SpawnProjectileTask that the rocket spawned successfully and notifies PlayerView.
  5. PlayerView updates its AmmoCount property to deduct one, which updates the ammo count graphics.
  6. Done!
This may seem like a lot of steps and indirection just to fire a rocket, but it lets you easily change the game's look and behavior without a ripple effect. Changing the player from a bad-ass marine to a human-hating robot requires changing only the View class. Enemies can call the same SpawnProjectileTask as the player does, and if the game logic ever needs to change, simply update SpawnProjectileTask and both Enemy and Player pick it up without you having to touch either.
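To make that flow a bit more concrete, here's a rough, hypothetical sketch in plain C#. It doesn't use StrangeIoC, the event plumbing is collapsed into direct method calls, and only the class names come from the description above; everything else is illustrative:

using UnityEngine;

// Dumb model object: just the data needed to spawn a projectile.
public class SpawnProjectileModel
{
    public Vector3 Position;
    public Vector3 Direction;
    public string WeaponType;
}

// Task: does one thing well and can be shared by any presenter that needs it.
public static class SpawnProjectileTask
{
    public static bool Execute(SpawnProjectileModel model)
    {
        // Spawn the projectile prefab here (Instantiate, pooling, etc.) and report success
        // so the presenter can update the view.
        Debug.Log("Spawning " + model.WeaponType + " at " + model.Position);
        return true;
    }
}

// Presenter: turns view input into model objects and delegates the work to tasks.
public class PlayerPresenter
{
    private readonly PlayerView view;

    public PlayerPresenter(PlayerView view) { this.view = view; }

    public void OnShootPressed()
    {
        var model = new SpawnProjectileModel
        {
            Position = view.transform.position,
            Direction = view.transform.right,
            WeaponType = "RocketLauncher"
        };

        if (SpawnProjectileTask.Execute(model))
            view.AmmoCount -= 1; // the view updates its own ammo graphics from this property
    }
}

// View: no game logic, just input forwarding and Inspector-tweakable properties.
public class PlayerView : MonoBehaviour
{
    public int AmmoCount = 10;

    private PlayerPresenter presenter;

    void Awake() { presenter = new PlayerPresenter(this); }

    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
            presenter.OnShootPressed();
    }
}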

Now, the reason I've stuck mostly to theory rather than a real implementation is that there is a fantastic framework for Unity that does everything I've described so far (plus more): StrangeIoC, which has excellent code samples and diagrams in its documentation that do the topic better justice. StrangeIoC is an inversion-of-control, MVP framework. It's fairly new, but I'm using it for Overtime and I don't think I could code a Unity game without it. It's continually evolving, and if anything I've talked about in this post is jiving with you, I highly suggest you give StrangeIoC serious consideration. Hopefully, I've convinced you to start considering an MV* architecture for your next game.


Tuesday, December 17, 2013

2D Platformer Collision Detection in Unity

NOTE: Please see my addendum regarding this solution, which is ultimately flawed as presented in this post.

Unity is a 3D engine that comes with built-in physics engines (PhysX for 3D, Box2D for 2D). However, if you're aiming to develop a 2D platformer, you'll quickly find that it's extremely difficult, I'll go as far as to say impossible, to achieve that "platformer feel" using these physics engines. For your main entities, you're going to have to roll a variation of your own.

Furthermore, if you attempt to use the supplied character controller package for your player in a 2D platformer, you'll also quickly discover that the collision detection and overall controls just don't feel right, no matter how much you tweak them. This is primarily because the character controller package uses a capsule collider, which makes pixel-perfect collision detection on edged surfaces problematic. So once again, you need to roll your own controller and collision detection system.

Since Unity is a 3D engine, regardless of the type of game you're developing (3D or 2D), your game is being developed in a 3D space. You could probably make a canonical tile-based solution for collision detection work (perform a check-ahead on the tile the player is heading into and determine the appropriate collision response, if any, for example), but it's best not to wrestle against the engine. The best solution I've found is ray casting.

Ray casting can refer to different things (see Wikipedia), but the ray casting I'm referring to is the method of casting rays from an origin in a given direction and determining what, if anything, intersects them. We can use this method to handle collision detection, casting rays from our player along both the x and y axes to learn about the environment surrounding the player and resolve any collisions.

The basic steps of the algorithm are as follows:
  1. Determine current player direction and movement
  2. For each axis, cast multiple outward rays
  3. For each ray cast hit, alter movement values on axis
I want to note that a lot of the following code is originally based on the fantastic platformer game tutorial found on the Unity forums. However, I've heavily modified it to fix some bugs, mainly with corner collisions, which we'll go into in depth.

Here's the entire class that performs the ray casts for collision detection.

It's important to note that this class is called inside of a separate entity controller (BasicEntityController) that handles calculating acceleration and creating the initial movement Vector3 object. BasicEntityCollision takes the movement and position Vector3 objects and adjusts them based on any possible collision detected from the ray casts.

The Init method does some one-time initialization of required fields, such as setting a reference to the controlling entity's BoxCollider, setting the collision LayerMask, etc.

The Move method accepts two Vector3 objects and a float. moveAmount is, as the name implies, the amount to move before collision detection, as calculated by BasicEntityController. position is the current entity position in the game world. dirX is the current direction the entity is facing.

Move will determine the final x (deltaX) and y (deltaY) values to apply to moveAmount after all collision detection. Move starts the ray casting along the y-axis of the entity, followed by the x-axis, but only if the entity is moving left or right; we won't cast x-axis rays when the entity is idle. We then set finalTransform based on deltaX and deltaY and return it so the entity can finally use it to Translate!

Let's dive into the two key ray casting methods, yAxisCollisions and xAxisCollisions. First, note that both methods perform at least three different ray casts along the entity's BoxCollider on each axis. This gives us complete coverage of the entity.

Each line represents a ray cast.

yAxisCollisions starts by determining which direction the entity is currently heading along the y-axis (up or down), and calculates separate x and y values used to create the Ray objects cast along the box collider (from left to right, along the top or bottom). yAxisCollisions uses two different for loops based on which way the entity is currently facing: if facing right, it starts the ray casts on the right side of the entity; otherwise it starts on the left. This was done to prevent a bug where the entity fell through the collision layer when moving right and downward, due to a gap created (because we break out of the for loop after the first ray hit we encounter) when the entity collided with the corner of a tile.

When Physics.Raycast returns true, that means a ray cast has hit. We obtain the distance of the hit from the ray origin, and calculate a new deltaY to apply to the final move transform. We pad this value slightly to prevent the entity from falling through the collision layer accidentally.

We pad the deltaY value slightly to keep the entity above the collision layer, to avoid accidental fall throughs.
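Here's a heavily simplified sketch of that y-axis pass. The names and constants are illustrative, and keep in mind this is the edge-origin approach that the addendum later corrects:

using UnityEngine;

public class BasicEntityCollisionSketch : MonoBehaviour
{
    public LayerMask collisionMask;
    public float padding = 0.01f; // small offset that keeps the entity above the collision layer
    private BoxCollider boxCollider;

    void Awake() { boxCollider = GetComponent<BoxCollider>(); }

    // Simplified y-axis pass: three rays cast from the top or bottom edge of the collider.
    float YAxisCollisions(float deltaY)
    {
        float directionY = Mathf.Sign(deltaY);
        // Ray origins sit on the leading (top or bottom) edge of the box collider,
        // which is the detail that causes the corner bug fixed in the addendum.
        float originY = directionY > 0 ? boxCollider.bounds.max.y : boxCollider.bounds.min.y;
        float rayLength = Mathf.Abs(deltaY);

        for (int i = 0; i < 3; i++)
        {
            float originX = Mathf.Lerp(boxCollider.bounds.min.x, boxCollider.bounds.max.x, i * 0.5f);
            Vector3 origin = new Vector3(originX, originY, transform.position.z);

            RaycastHit hit;
            if (Physics.Raycast(origin, Vector3.up * directionY, out hit, rayLength, collisionMask))
            {
                // Only move as far as the hit, padded slightly to avoid falling through.
                deltaY = (hit.distance - padding) * directionY;
                break; // stop at the first hit, as the original tutorial code did
            }
        }
        return deltaY;
    }
}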

With our deltaY calculated, we move on to the x-axis (again, only if the entity is actually moving along the x-axis). xAxisCollisions is very similar to yAxisCollisions, but simpler. We don't worry about which direction the entity is facing; instead we care whether the entity is moving on the ground or is currently in mid-air. If we're moving in mid-air, there's a high risk of landing on the corner of a tile, which we could fall through. To help prevent that, we cast a wider range of rays along the x-axis (four instead of three), with the outer two rays cast slightly outside of the box collider's width. When a hit is detected on the x-axis, we simply set deltaX to 0 and return it.

When moving through the air, we cast a wider range of rays slightly outside of the entity's boxCollider width.


And that's the magic behind using ray casting to perform collision detection. The finalTransform is sent back to the entity to be used with the Translate method. Here's a small video clip showing the Debug.DrawRay calls. When a ray hits something, it's colored yellow.


For further reading on the topic of a 2D platformer controller for Unity, there's an excellent blog post on Gamasutra by Yoann Pignole that goes into great detail.

You can also see a more complete implementation of the collision detection code at the following Gist.