CSCI 8980 Real-time Game Engine Technology

Professor Stephen J. Guy · University of Minnesota, Twin Cities · Fall 2019

Course Description

The technology driving modern video games has brought forth new breakthroughs in real-time graphics, simulation, and artificial intelligence (AI), and has become increasingly common in a wide range of entertainment, robotics, and virtual/augmented reality applications. In this course, we will cover the key algorithmic techniques that have enabled this revolution, investigate the trade-offs and limitations inherent in these real-time approaches, and discuss open problems and current trends in the field. Topics will include real-time techniques for physically-based lighting simulation, AI, sound simulation, character animation, and large-scale rendering.

Course Work

Students will participate in a combination of coding projects, student presentations, and class discussions. Coding projects will involve students writing their own real-time graphics, AI, and simulation code and integrating it into a broader framework. Classes are held in a computer lab, and course work will involve substantial in-class programming time with hands-on exploration of the course material. Additionally, students will be expected to present their projects to others in the class and to provide substantial feedback and discussion of peer work.

Who should enroll

Students interested in real-time methods for computer simulation, interactive graphics, game programming, and AI.

Prerequisites

Experience in C++, data structures, using large code bases, and basic vector calculus (e.g., dot products and gradients) is assumed. Previous experience in graphics programming is helpful, but not strictly required.


Balloon Pop Game

HW1 - Game from Scratch

September 2019
Homework 1 example

Figure 1: A typical Level 1 screen.

Gameplay / Game Design

In this simple game, built in Processing, the player moves the green slider left and right to pop as many colorful balloons as possible and gain the most points. One point is gained for every good colorful balloon popped, while one point is lost for every evil black balloon popped. This score is presented, along with the current level, in the bottom left corner, as shown in Figure 1.

As you gain points, you move on to more difficult levels. Currently, the level increases one notch for every 10 points earned. Successive levels increase three of the game parameters: maximum balloon speed, slider speed, and the chance that a newly spawned balloon will be an evil balloon. Therefore, for every new level you face, not only are the balloons harder to pop and more likely to be evil, the slider is also able to travel more quickly; this faster slider is the only power-up currently implemented. These values are adjusted as shown below (pop.pde lines 100-104).

// check score and update level
level = score / 10 + 1;
MAX_SPEED = 150 + level * 10;
brick.speed = 5 + level;
ENEMY_SPAWN_CHANCE = 0.05 + level * 0.05;

Snippet 1: Level-up parameter adjustments.

There were quite a few features I had in mind while developing this game. I added to them as I went, but from the beginning, I hashed out two feature lists: basic features and more far-fetched features. These I placed on my Trello board, as shown in Figure 2 and Figure 3.

Homework 1 basic features

Figure 2: My checklist of basic features.

Homework 1 next features

Figure 3: My checklist of more complex features.

The biggest feature I tried to implement for this assignment was the use of a spatial data structure for collision testing (the first task within my "Next Steps" feature list above). Although the goal was to improve performance, after much thought and much implementation work, I decided that the gameplay would not actually improve. While I still wish to pursue this feature, other features listed above, such as adding some sort of missiles or more complicated balloon movement, would have more easily accomplished the task of boosting gameplay.

Homework 1 sparkles gif

Figure 4: The sparkling gif used when good balloons explode. The feature I had the most fun implementing.

Algorithmic Discussion

In developing this game, I allowed my strong Object-Oriented Programming (OOP) tendencies to lead my algorithmic and technical decisions. The strongest example of this is my (ideally abstract) Object class, shown below (object.pde). It is a stubby container class: there is no functionality, and it likely hurts the overall runtime of the game. However, to me and other OOP developers, the boost in understanding and organization seems very much worth it when handling the code. With containers such as Object and its children, Balloon and Slider (a sketch of one such child follows Snippet 2), larger code bases become more readable and easier to digest.

class Object {
 // member variables
 PVector position;
 PVector rgb = new PVector(0, 102, 50);
 float speed = 75; // pixels/second
 int lastSpawn = 0;

 Object(PVector pos, int t) {
   position = pos;
   lastSpawn = t;
 }

 void display() {}
}

Snippet 2: Object class.
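As one illustration of how the child classes specialize this container, here is a minimal sketch of what a Balloon subclass could look like; the fields and drawing details (such as the radius and the constructor signature) are approximations rather than my exact balloon.pde code.

// Approximate sketch of a Balloon child class (not the exact code in balloon.pde)
class Balloon extends Object {
  float radius = 20;      // collision/display radius in pixels (assumed field)
  boolean evil = false;   // evil balloons subtract a point when popped

  Balloon(PVector pos, int t, boolean isEvil) {
    super(pos, t);
    evil = isEvil;
    // evil balloons are drawn black, good balloons get a random color
    rgb = evil ? new PVector(0, 0, 0) : new PVector(random(255), random(255), random(255));
  }

  void display() {
    fill(rgb.x, rgb.y, rgb.z);
    ellipse(position.x, position.y, radius * 2, radius * 2);
  }
}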

A computational bottleneck within my program is my collision testing. Since this is a bottleneck in the majority of graphics programs, I wanted to put some time into building a spatial data structure, as mentioned above. My first approach, and the one which is currently still in use, is brute force: my game loop checks every frame for collisions between the slider and each balloon, as shown in the code snippet below (pop.pde lines 57-70). A sketch of the collision test itself follows Snippet 3.

// if this balloon collides with the slider
if (b.checkCollision(brick)) {
  if (!b.evil) {
      animations.add(new Animation(b.position.copy(), popFrames));
      score++;
  }
  else {
      animations.add(new Animation(b.position.copy(), hotFrames));
      score--;
  }

  balloons.remove(i);
  continue;
}

Snippet 3: Balloon reaction to collision checking.
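The body of checkCollision is not shown above. Assuming circular balloons and treating the slider's position as the top-left corner of its rectangle, a minimal sketch of such a test might look like the following; the radius field is an assumption, while brickWidth and brickHeight come from config.pde.

// Hypothetical sketch of a circle-vs-rectangle overlap test; not necessarily
// the exact logic inside the real checkCollision.
boolean checkCollision(Slider brick) {
  // clamp the balloon's center onto the slider's rectangle
  float closestX = constrain(position.x, brick.position.x, brick.position.x + brickWidth);
  float closestY = constrain(position.y, brick.position.y, brick.position.y + brickHeight);
  // collide if the clamped point lies within one balloon radius of the center
  return dist(position.x, position.y, closestX, closestY) < radius;
}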

After much thought, and after spending a good amount of time not only considering but also attempting to integrate a spatial data structure, I decided on a gridding approach which would essentially mark which balloons overlapped with which bins. After much work, I was able to accomplish this: I created a Grid which knew which balloons overlapped with which GridCells. However, the more I thought about the complexity, the more I realized that there would not be a performance benefit over my brute force approach. Both methods required iterating over all of the balloons, and the grid approach added extra computation once the grid was built. Because of time constraints and other responsibilities, I chose not to continue working on the gridding approach and kept my brute force calculations.
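For reference, the binning step I had working was conceptually along these lines. This is a simplified reconstruction, and the Grid/GridCell fields used here (cols, rows, cells, balloons) are stand-ins for whatever my actual classes used; radius is again an assumed Balloon field.

// Simplified reconstruction of the gridding idea: add each balloon to every
// cell its bounding box overlaps (GRIDCELL_WIDTH/HEIGHT come from config.pde).
void addToGrid(Balloon b, Grid grid) {
  int minCol = max(0, int((b.position.x - b.radius) / GRIDCELL_WIDTH));
  int maxCol = min(grid.cols - 1, int((b.position.x + b.radius) / GRIDCELL_WIDTH));
  int minRow = max(0, int((b.position.y - b.radius) / GRIDCELL_HEIGHT));
  int maxRow = min(grid.rows - 1, int((b.position.y + b.radius) / GRIDCELL_HEIGHT));

  for (int c = minCol; c <= maxCol; c++) {
    for (int r = minRow; r <= maxRow; r++) {
      grid.cells[c][r].balloons.add(b);  // each GridCell keeps a list of overlapping balloons
    }
  }
}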

If given more time, instead of using ArrayLists that shrink and grow forever, I would use fixed-length arrays. ArrayLists, although convenient, are not the most efficient precisely because of their modifiable length. If a maximum number of balloons were decided upon, and popped or lost balloons were respawned at the bottom of the screen rather than removed, I believe the computational improvement would have been greater than that of a fully implemented gridding system.
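A rough sketch of that fixed-array idea, with MAX_BALLOONS as a hypothetical new constant and radius again assumed on Balloon:

// Hypothetical pooling approach: a fixed array of balloons that get recycled
// rather than removed, so no list ever grows or shrinks.
int MAX_BALLOONS = 64;                       // assumed cap, not in the current code
Balloon[] pool = new Balloon[MAX_BALLOONS];

void recycle(Balloon b) {
  // instead of balloons.remove(i), drop the balloon back below the bottom edge
  b.position.x = random(width);
  b.position.y = height + b.radius;
  b.evil = random(1) < ENEMY_SPAWN_CHANCE;
}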

Game Engine Analysis

Although Processing does a good job of being an accessible "gateway" into graphics, one feature that would have saved me time from the beginning would be something akin to my Object class. Having a pre-built container which knew a bit about how it could be displayed/rendered would have expedited my first few tasks for this assignment, which were to build and design my Object and Balloon classes.

Another, more advanced addition, which would have nullified my earlier discussion of collision testing, would be some sort of pre-built collider which understood when it overlapped with other objects. Either the collider itself would contain the collision checking code, or another object, a collider manager, would contain the collision algorithms and check whether colliders are overlapping. Since all games require the ability to know if two things are touching (be it whether your mouse click landed in the right place or whether your character's arrow hit the enemy), a built-in collision system would be incredibly useful for developing games. A sketch of what such a system might look like follows.
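To make that concrete, the kind of built-in system I have in mind might look something like the sketch below; none of these types exist in Processing, and the names are purely hypothetical.

// Purely hypothetical sketch of a collider system Processing could provide.
interface Collider {
  boolean overlaps(Collider other);
  void onCollision(Collider other);   // callback fired by the manager
}

class ColliderManager {
  ArrayList<Collider> colliders = new ArrayList<Collider>();

  void checkAll() {
    // naive pairwise pass; an engine could swap in a spatial structure here
    for (int i = 0; i < colliders.size(); i++) {
      for (int j = i + 1; j < colliders.size(); j++) {
        Collider a = colliders.get(i);
        Collider b = colliders.get(j);
        if (a.overlaps(b)) {
          a.onCollision(b);
          b.onCollision(a);
        }
      }
    }
  }
}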

In my game specifically, the state is stored in the global variables I declared within my config.pde, as shown below in Snippet 4. How these variables interact within my game loop is the core of how this game functions. My game loop can be broken down as follows (pop.pde lines 23-105); a condensed sketch of the loop itself follows Snippet 4:

  1. Determine if I should be displaying a level up splash screen.
  2. Determine if I should be spawning new balloons.
  3. Loop through every balloon in the world.
    • Move it. Remove it if it's off the screen. Check if it has collided with the slider. Draw it.
  4. Loop through every animation.
    • Remove it if it's done. Draw it.
  5. Draw the slider and stats.
  6. Check for a level update and adjust accordingly.
// Colors
int BG_COLOR = 100;
int[][] COLORS = {{255,0,0},{102,102,255},{0, 153, 153},{255,102,178},{255,255,0},{102,0,102},{153,0,76}};

// Balloons
int MIN_BALLOONS = 1;
int MAX_SPEED = 150;
int SPAWN_RATE = 700;
int lastSpawn = 0;
ArrayList<Balloon> balloons = new ArrayList<Balloon>();
PImage[] popFrames;
ArrayList<Animation> animations = new ArrayList<Animation>();

// Evil balloons
Balloon[] enemies = new Balloon[5];
float ENEMY_SPAWN_CHANCE = 0.05;
PImage[] hotFrames;

// Spatial Data Structure(s)
//Node root = new Node(width/2.0);
Grid grid;
int GRIDCELL_WIDTH = 100;
int GRIDCELL_HEIGHT = 100;

// Player
Slider brick;
int brickWidth = 50;
int brickHeight = 10;
int score = 0;
int level = 1;
boolean levelUpScreen = false;
int splashStart = 0;

Snippet 4: Global state variables declared in config.pde.
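For orientation, a condensed paraphrase of how draw() in pop.pde walks through the six steps above is shown below; helper names such as spawnBalloon() are stand-ins rather than the exact function names.

// Condensed paraphrase of the draw() loop in pop.pde; spawnBalloon() and the
// step comments are stand-ins, not the exact code.
void draw() {
  background(BG_COLOR);

  // 1. level-up splash screen
  if (levelUpScreen) { /* draw the splash until its timer runs out */ }

  // 2. spawn new balloons when enough time has passed or too few remain
  if (millis() - lastSpawn > SPAWN_RATE || balloons.size() < MIN_BALLOONS) {
    spawnBalloon();
    lastSpawn = millis();
  }

  // 3. move, cull, collide, and draw every balloon
  for (int i = balloons.size() - 1; i >= 0; i--) {
    Balloon b = balloons.get(i);
    b.position.y -= b.speed / frameRate;                    // balloons rise
    if (b.position.y < 0) { balloons.remove(i); continue; } // off the top
    if (b.checkCollision(brick)) { /* Snippet 3 */ continue; }
    b.display();
  }

  // 4. draw active animations, removing finished ones
  // 5. draw the slider and the score/level text
  // 6. update level, MAX_SPEED, brick.speed, and ENEMY_SPAWN_CHANCE (Snippet 1)
}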

During the entire game, and therefore during this game loop, Processing is checking for key press events. These asynchronously interrupt the game to enter the following function within action.pde (lines 5-10), which checks whether a coded key, such as the left or right arrow, has been pressed so that the slider's position can be adjusted accordingly (a sketch of what that move method might do follows Snippet 5).

// handle keyboard presses
void keyPressed() {
  if (key == CODED) {
    brick.move(keyCode);
  }
}

Snippet 5: Key press handler.
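brick.move itself is not shown here. Assuming the slider simply shifts horizontally and is clamped to the window, a plausible sketch of it would be:

// Hypothetical sketch of Slider.move; the real method may differ.
void move(int code) {
  if (code == LEFT)  position.x -= speed;
  if (code == RIGHT) position.x += speed;
  // keep the slider on screen
  position.x = constrain(position.x, 0, width - brickWidth);
}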

The two big "non-moving pieces" are my balloons and animations ArrayLists. Quite simply, my balloons list contains all of my balloon objects. My animations list, however, is more nuanced and was necessary for my last feature: incorporating animations (i.e., displaying gifs). I looked to the Processing tutorial page Animated Sprite for code and inspiration.

I started by breaking my gifs into individual frames and storing them globally (popFrames and hotFrames above). When an event occurs which warrants an animation (the slider colliding with a balloon), a new Animation object is created at that location and told which frames it needs to display. Therefore, all I need to do is keep track of which animations are still active so that I can draw their frames to the screen. Once all of the frames of that specific gif have been displayed, I delete that Animation object. My animations list is therefore almost identical to my balloons list: both contain the active objects so that they can be iterated through and properly rendered.
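The Animation objects themselves are small. A minimal sketch of what the class might look like, assuming one frame is shown per draw() call, is below; the real animation code may differ in its details.

// Minimal sketch of an Animation class: plays a fixed set of frames at one
// position, advancing one frame per display() call.
class Animation {
  PVector position;
  PImage[] frames;
  int current = 0;

  Animation(PVector pos, PImage[] f) {
    position = pos;
    frames = f;
  }

  boolean done() {
    return current >= frames.length;
  }

  void display() {
    if (!done()) {
      image(frames[current], position.x, position.y);
      current++;
    }
  }
}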

Preview Video
Screenshots

Baby Game Engine

HW2 - Rendering Large Scenes

October 2019
Preview Video

Unity Exploration

Final Project

December 2019
Roll-a-Ball Tutorial
Roll a Ball Tutorial header

Figure 1: Roll-a-Ball Tutorial page on Unity Learn.

To get things rolling (pun intended), I started off by walking through the classic Roll-a-Ball Unity tutorial. In many respects, this tutorial continues to be a great introduction not only to Unity as a whole, but also to the scripting side of Unity. As will be discussed in the following section, Unity is highly supportive of those who do not wish to write a single line of code. With this tutorial, however, you get a good feel for both the non-programming side and the C# side.
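To give a flavor of that scripting side, the movement script you write in the tutorial boils down to a few lines of physics input handling. The version below is a paraphrase using the classic Input.GetAxis calls; the tutorial's exact listing (and input system) varies by Unity version.

using UnityEngine;

// Paraphrase of the Roll-a-Ball movement idea; not the tutorial's exact listing.
public class PlayerController : MonoBehaviour
{
    public float speed = 10.0f;   // force multiplier exposed in the Inspector
    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // read horizontal/vertical input and push the ball with physics forces
        float moveHorizontal = Input.GetAxis("Horizontal");
        float moveVertical = Input.GetAxis("Vertical");
        Vector3 movement = new Vector3(moveHorizontal, 0.0f, moveVertical);
        rb.AddForce(movement * speed);
    }
}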

To start things off, I had to decide which version of Unity I wished to use. I knew going into this project that dealing with versions would be an issue, but I had no idea just how much of an issue it would end up being. A sizeable portion of the hours spent on this project went toward juggling packages and trying to learn how 2019.3 has decided to do things differently. Luckily, Roll-a-Ball is simple enough that there wasn't much difficulty here, since no special packages are needed. Yet another great perk of the tutorial.

My partially completed implementation of the game can be found in this project's GitHub repository here.

Shader Graph

The real driving force behind deciding to use Unity for this last project was getting to play with Shader Graph. For the previous assignment, we were tasked with building and using shaders, and some of my classmates decided to use Shader Graph. I was very intrigued by their descriptions of it as a way to see the results of your shader manipulations as you develop them.

This is the point at which I found the Brackeys YouTube channel. I landed on their "Basics of Shader Graph - Unity Tutorial" video and, after watching it, began following the steps laid out there. However, this is when I ran into my first issues with the Unity render pipelines. As noted in the video, to use Shader Graph you need the Lightweight Render Pipeline installed. This turned out to be a headache, especially when I found out that in order to get bloom, I needed to set up some Post Processing items. This was very frustrating, especially since I am using the latest version of Unity 2019, which is recent enough that there is not much relevant online help.

With the Lightweight Render Pipeline somewhat set up correctly, I was able to follow Brackeys' basic Shader Graph tutorial as well as his dissolve tutorial. I also began following his force field tutorial, but stopped partway because I realized that I needed to set up the High Definition Render Pipeline, since that was the setup used in that tutorial.

Demo gif of my three shaders

Figure 2: From left to right, the results of building my own dissolve, glowing, and force field shaders following Brackeys' dissolve, basic, and force field shader tutorials. The force field shader is incomplete.

Using Shader Graph was quite nice. The visual format, with its nodes and connections, can be very straightforward. The biggest drawback was knowing which node I wanted or needed. Since I was following tutorials, that wasn't a great struggle, but I could imagine it being more than a small hindrance when trying to make your own custom shader. Once you gain a good grasp of the available nodes, however, I feel as though this could be an even more helpful tool.

The visual aspect is very helpful for a higher-level understanding: for seeing the flow of data and how the pieces contribute towards the final effect. However, gaining a lower-level understanding can be much more difficult because of the abstractions the nodes represent. When trying to follow along with the force field tutorial, I quickly got lost with the calculation needed to "detect" where the force field intersected with other objects in the scene. This was done as shown below in Figure 3.

Screenshot of Brackeys' Force Field in Unity - Shader Graph tutorial

Figure 3: Snippet from Brackeys' FORCE FIELD in Unity - SHADER GRAPH tutorial video showing the subtractions necessary to shade the object in relation to its distance from other objects in the scene.

One big area of confusion was what the Screen Position and Scene Depth nodes really represent. The online documentation on these nodes was helpful, but that extra digging is necessary if you wish to understand the underlying math (as I understand it, the subtraction compares the depth already stored in the scene's depth buffer with the force field fragment's own depth, so that the effect appears where the two are nearly equal). In order to create a shader from scratch, you need to understand at least some of the underlying math. What comes to mind is the fog shader we discussed in class. I say "at least some" since there are often a number of hacks or simplifications that can remove the more complicated steps, as we did in class. However, if none of the math were understood at all, it wouldn't be possible to create your own fog shader. This is not quite the case with Shader Graph, which is both a plus and a minus.

Screen recording of Shader Graph for my grass sway shader

Figure 4: Capture of the Shader Graph editor for my complete grass sway shader. This is the result of following Brackeys' grass sway tutorial video.

After working on the simpler shaders, I then tackled the grass sway shader. This tutorial excited me because of its use of vertex displacement and its overall complexity. This was a case where I didn't struggle to comprehend what was happening in Shader Graph once I gave it some thought. Going into it with an understanding of vertex displacement made it easy to see which nodes were necessary, and it displayed a definite strength of Shader Graph.

Procedural Mesh Generation

After playing around with Shader Graph for a while, I saw another Brackeys tutorial about mesh generation and thought it sounded fun. After mucking around with a visual, node-based system for a while, it was refreshing to create some C# scripts and write some code. The tutorial walked through using Perlin noise to generate pseudo-random heights and easily create a terrain with somewhat realistic variation. With Unity's extensive libraries, it was as simple as making a single function call, shown below in Snippet 1; the remaining mesh assembly steps are sketched just after it. Unlike the tutorial, however, I added a new "noisiness" parameter which is exposed to the user. This parameter is what I am changing to get the effects shown below in Figure 5.

// loop through and generate vertex positions
for (int i = 0, z = 0; z <= zSize; z++)
{
    for (int x = 0; x <= xSize; x++)
    {
        // adjust noise so it's more noticeable
        float height = Mathf.PerlinNoise(x * 0.3f, z * 0.3f) * noisiness;
        vertices[i] = new Vector3(x, height, z);
        i++;

        // determine min and max heights
        if (height < minHeight) minHeight = height;
        if (height > maxHeight) maxHeight = height;
    }
}

Snippet 1: Calculating new mesh vertices using Perlin Noise.
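The rest of the generation follows the standard pattern from the tutorial, paraphrased below: build an index buffer with two triangles per grid quad, then hand everything to a Mesh that has already been created and assigned to the MeshFilter. The method names and the triangles/mesh fields are my paraphrase; only the vertices array and xSize/zSize come from Snippet 1.

// Build the index buffer: two triangles per grid quad (paraphrased from the tutorial).
void CreateTriangles()
{
    triangles = new int[xSize * zSize * 6];
    for (int z = 0, vert = 0, tris = 0; z < zSize; z++, vert++)
    {
        for (int x = 0; x < xSize; x++, vert++, tris += 6)
        {
            triangles[tris + 0] = vert;
            triangles[tris + 1] = vert + xSize + 1;
            triangles[tris + 2] = vert + 1;
            triangles[tris + 3] = vert + 1;
            triangles[tris + 4] = vert + xSize + 1;
            triangles[tris + 5] = vert + xSize + 2;
        }
    }
}

// Hand the generated data to the Mesh assigned to this object's MeshFilter.
void UpdateMesh()
{
    mesh.Clear();                 // wipe any previous data
    mesh.vertices = vertices;     // positions computed in Snippet 1
    mesh.triangles = triangles;   // index triples, three per triangle
    mesh.RecalculateNormals();    // so lighting looks correct
}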

Screen recording of adjusting the noisiness parameter of my mesh

Figure 5: Effects of adjusting the "noisiness" parameter to change the height values generated using Perlin noise, as shown in Snippet 1 above.

Putting it all Together and Future Work

After playing around with tutorials, I had a lot of trouble trying to see how I could combine everything into one cohesive project. After completing the grass sway tutorial, I had the thought to procedurally generate a mesh and place swaying grass along it, depending on the height of the mesh. Since the beginning of this project, I had thought I wanted to create a 2D-style game with a camera fixed at a certain offset from the player. I thought maybe I could generate the mesh as the player moves and add swaying grass for them to walk through, all while the player views the scene from a 2D perspective.

I set out to work on this and, since the grass sway needed to be implemented using the High Definition Render Pipeline, thought it only natural to continue using it for this somewhat cohesive "game." However, importing my mesh generation from a project using the Lightweight Render Pipeline did not work immediately. My vertices were being generated where I wanted them to be, but no triangles were displaying. I knew my vertices were calculated correctly because of a very helpful debugging function Brackeys had implemented in their "PROCEDURAL TERRAIN in Unity! - Mesh Generation" video, shown below in Snippet 2.

private void OnDrawGizmos()
{
    // only draw if we have vertices
    if (vertices == null)
    {
        return;
    }

    for (int i = 0; i < vertices.Length; i++)
    {
        Gizmos.DrawSphere(vertices[i], 0.1f);
    }
}

Snippet 2: Using OnDrawGizmos to place spheres at each of the generated vertices of the mesh.

If my vertices were there, and I was setting the triangles of the mesh just as I had previously, why wasn't I seeing any triangles? After going through some more of the 2019.3 documentation on Mesh Filters, it turned out that I also needed a Mesh Renderer in order to see the triangles of my mesh. But a Mesh Renderer requires a material, and therefore didn't let me apply a gradient to the vertices as I had done previously, following Brackeys' "MESH COLOR in Unity - Terrain Generation" tutorial. The simple blue-to-green-to-white gradient can be seen above in Figure 5. That gradient was applied by linearly interpolating each vertex's height between the minimum and maximum heights to determine where along the gradient that specific vertex fell.
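For reference, the vertex-color approach from that earlier tutorial boils down to mapping each vertex's height into a Gradient and storing the result as per-vertex colors. Roughly, paraphrased, and assuming a public Gradient field alongside the vertices/minHeight/maxHeight values from Snippet 1:

// Paraphrase of the height-based vertex-color gradient; it only shows up when
// the material's shader actually reads vertex colors.
void ApplyHeightGradient()
{
    Color[] colors = new Color[vertices.Length];
    for (int i = 0; i < vertices.Length; i++)
    {
        // map this vertex's height into [0, 1] and look up the gradient color
        float t = Mathf.InverseLerp(minHeight, maxHeight, vertices[i].y);
        colors[i] = gradient.Evaluate(t);
    }
    mesh.colors = colors;
}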

After abandoning the hope of coloring my generated mesh, my next steps were to place a player sphere that I could control and to place grass on the mesh. I simply copied my player controller script from the Roll-a-Ball tutorial, and now I had a player. The grass bit seemed more difficult. I started with a somewhat silly idea of dropping the grass onto the terrain randomly to see how that would turn out; to allow physics-based behaviors, each grass GameObject needed a Rigidbody component. Once I got the grass dropping at random locations, I saw that this was not at all the behavior I wanted. The clumps would tip over and roll about, but worst of all, they interacted with the player sphere, which had its own Rigidbody. The player could no longer pass through the grass because both were physical rigid bodies and, as the name suggests, rigid bodies cannot pass through each other. A sketch of the drop experiment follows.
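The drop experiment itself was only a few lines, approximately the following; grassPrefab, dropHeight, and the spawn bounds are placeholder names, and the prefab carries its own Rigidbody.

// Rough sketch of the grass-drop experiment (placeholder names).
void ScatterGrass(int count)
{
    for (int i = 0; i < count; i++)
    {
        // pick a random spot above the terrain; gravity does the rest
        Vector3 spawnPos = new Vector3(Random.Range(0f, xSize), dropHeight, Random.Range(0f, zSize));
        Instantiate(grassPrefab, spawnPos, Quaternion.identity);
    }
}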

Because of the amount of time I had left, and because of the need to complete this writeup, I decided to stop developing at this point, leaving much room for future work. A few items I wish to continue working on are as follows:

  • Allow the player to move through the grass clumps
    • either by just passing through them or by applying its own vertex displacement to push the grass out of the way
  • Generate more mesh when the player reaches the edge of the current mesh they are on
    • I imagined a MeshManager of some kind that would reuse the vertices at the edge of the current mesh when creating a new one, in order to seamlessly join the two
  • Color the mesh
    • as explained above, because the Mesh Renderer needs a material to apply to the triangles, I was not able to color the vertices; there may be ways to do this by texturing the mesh with a custom texture
  • Better player movement
    • currently the player moves by being pushed with forces, which isn't very conducive to a 2D game with the player rolling up and down hills, because it's difficult to control precisely

Verdict on Using Unity

Package juggling and version juggling were a pain. With such a popular, constantly developed engine like Unity, there are so many updates being pushed out that it can be very tempting to stick with an older version. This is a case where I feel I should not have jumped into 2019.3, although I'm sure I would have had a whole new set of issues had I used a 2018 version. These issues are one major difference between using your own custom engine and a large, publicly deployed game engine like Unity. A custom engine is unlikely to have such a high setup cost simply because it isn't as large or as complex. Something I found very surprising about some of the errors I was receiving was missing functions or references to types that don't exist. I understand that Unity is expanding rapidly, but I was surprised at its lack of backward compatibility. However, once I realized that the cause must be out-of-date packages and updated them all, things were fine.

Overall, it was quite a pain to use Unity. It tries to be able to do so much, and it can do so much, but that is also a major downside. I honestly felt as though half of my hours were spent banging my head against versions and wondering why things were black or not showing up even though I had followed the tutorial. I will admit that Shader Graph was very fun and impressive, though somewhat slow on my laptop at times. I haven't looked to see if this is possible, but I would very much like to see the shader code Shader Graph generates. It did not seem readily apparent that it was available, but I would hope that it is, so that those of us who like to dig a little bit deeper can see the outcomes of playing with Shader Graph.

Tutorials