When I started this project, it was just a way to follow along with a tutorial. By the time the tutorial was done, I realized that C++ "The Language" was no longer a blocker to me like it had been in the past. I felt the raw power. I felt the potential.
I felt motivated. I wanted... more; nay, I craved more. And hey, look: I even have this fancy raycasting engine right in front of me as a jumping-off point. Let's do something with that.
Commit 1: Initial Commit
Right from this moment, I felt something special at my fingertips, but I was all kinds of overwhelmed and could already feel my mind wandering to easier-to-digest things: like yet another YouTube binge of commentary on random subjects.
As a result, I turned to an old trick to keep myself on task: I live-streamed my progression. I didn't really know what I was doing, but I was willing to learn, and willing to learn on camera. This wound up being supremely beneficial in ways I couldn't even have expected. The Twitch dev community is very patient, very generous, and uniquely helpful. This community helped get me through some tough early roadblocks in real time. Now that was power.
Commit 50: this isn’t going very well..
Around this time, I felt powerless to continue making improvements given the architecture of the initial creation. It was kind of a mishmash of concepts loosely put together in a way that somehow worked. It was time to take a step back, re-evaluate what I'd learned, and figure out how I could apply it in a way that would improve the DX (developer experience) and smooth out feature creation.
First things first: ditch FreeGLUT and replace it with SFML.
FreeGLUT is a library you use to quickly get an OpenGL window up and running, with bindings for some basic concepts like input via callback handlers. It does the job, but I found myself struggling to do much with it that didn't outright break it in some way, and there were several bugs that required very weird hacks to work around. GLEW was included in this package to handle a few extra things that were out of GLUT's scope.
SFML is a bigger library that consists of several useful components. Of primary interest to me were its event system, window creation as an instantiated object, and input events. I wanted to make this transition now rather than later, for the bigger my project got, the more difficult and time-consuming it would be to accomplish. This took many commits and, I think, just shy of two weeks (time and availability constraints included). It was no small feat, but by the sheer virtue of having included it, all of those hacks were removed and all of those bugs were resolved. As far as I'm concerned, I'm sold on SFML.
Commit 75: OMG, It’s starting to look like a game now
Getting texturing on the walls happening was a long and difficult road.
If you're unfamiliar with how a raycaster works, it's a pseudo-3D technique: you take a flat grid, place a marker representing the "camera" in any grid cell marked with a 0, and give it an angle. You then sweep a series of straight lines out from that angle, left to right, in a fan-like manner. Imagine Metal Gear Solid vision cones.
When each line eventually reaches a grid cell marked with anything other than a "0", you calculate the distance from the player to where the hit took place. You then draw a vertical line anchored to the center of the screen (note: or wherever your horizon is supposed to be on the screen). The further away the hit, the shorter the line you draw. You wind up with a visual on screen that looks like this:
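The "farther away means shorter line" rule above can be sketched as a tiny helper. This is a minimal illustration, not the engine's actual code; the names (`wallLineHeight`, `screenH`, `tileSize`) and the default values are my own assumptions:

```cpp
#include <algorithm>

// Sketch: convert a ray hit distance into a wall-strip height in pixels.
// The wall appears tileSize-tall at a distance of tileSize, and shrinks
// proportionally as the hit gets farther away.
int wallLineHeight(float hitDistance, int screenH = 480, float tileSize = 64.0f) {
    if (hitDistance < 1.0f) hitDistance = 1.0f;          // avoid divide-by-zero up close
    int h = static_cast<int>(tileSize * screenH / hitDistance);
    return std::min(h, screenH);                          // near walls fill the whole screen
}
```

Each of those heights is then drawn as one vertical strip, centered on the horizon, which is what produces the fan of taller and shorter lines.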
As much as I appreciated the retro aliasing look, I desired something sharper and more modern looking. I also needed to get textures on the walls.
I spent an inordinate amount of time researching the best and most lightweight way to load images from disk, for that's what you do to get texturing to work: load an image from a file, store it in memory, and bind the texture to geometry. But first you need to load the image, and while it's possible to just include libpng or whatever format library you want to work with, the truth is there's far less headache in using an existing abstraction for this process. Loading common external assets is one of the few cases where including a library seems universally accepted.
One of the weird benefits I wasn't even counting on when including SFML was that it came with an image loader. When I saw that, my jaw hit the floor. I had spent way too much time researching, and was just moments from implementing stb_image.h, when I stumbled across it. I got to work.
So when it comes to 3D, and specifically OpenGL, you have a model, and a model consists of one or more faces. These faces can be flatly colored, or you can opt into texturing them. A quad (a polygon made of just four corners) is a single face, and it can be as simple as that to render a full-sized image, too. In fact, this is how many video players work: a full-screen single quad with a full-sized texture applied, where the texture is a single frame from a video file and the image is continuously swapped in real time.
But here's the question I pose to you: while my walls may appear to be walls (which would be 6 quad faces, or 12 triangles), how do you apply the "wall texture" image to the same wall block when the wall is actually composed of 1 to 1000+ vertical lines independently rendered to the screen?
That's the second challenge of using a raycaster rendering engine.
The answer, in my case, was to reuse the same texture loaded in memory and simply shift each line's UV coordinates to match the ray angle and distance. This is what I wound up with, though I know it can be cleaned up even further (as this was literally days of experimentation):
```cpp
sf::Texture::bind(&texture);

int factor = 1, texW = 64;
float r = rays.v ? rays.ry : rays.rx;       // hit coordinate along the wall
float u = ((int)(r * factor) % texW);       // texel column within the 64-wide tile
float u2 = u / texW;                        // normalized U for glTexCoord2f

// Draw the lines: one vertical strip per ray, constant U, V running 0 -> 1
glBegin(GL_QUADS);
glTexCoord2f(u2, 0); glVertex2i(rays.r * lineW, lineO);
glTexCoord2f(u2, 0); glVertex2i(rays.r * lineW + lineW, lineO);
glTexCoord2f(u2, 1); glVertex2i(rays.r * lineW + lineW, lineH + lineO);
glTexCoord2f(u2, 1); glVertex2i(rays.r * lineW, lineH + lineO);
glEnd();

sf::Texture::bind(NULL);
```
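The UV math in that snippet can be pulled out and looked at in isolation. This is a hedged sketch: the `factor` and 64-texel tile width come from the snippet above, but the helper name `columnU` and the negative-coordinate handling are my own additions:

```cpp
// Sketch: take the world-space hit coordinate along the wall (x for
// horizontal hits, y for vertical ones), wrap it into a single 64-texel
// tile, and normalize it to the 0..1 range glTexCoord2f expects.
float columnU(float hitCoord, int texW = 64) {
    int texel = static_cast<int>(hitCoord) % texW;   // which texel column was hit
    if (texel < 0) texel += texW;                    // keep it positive for negative coords
    return static_cast<float>(texel) / texW;         // normalized U coordinate
}
```

Since every strip is only one ray wide, U stays constant across the strip while V runs from 0 at the top of the wall to 1 at the bottom, which is why the quad above repeats `u2` on all four corners.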
Which gave way to this:
Commit 100: The Shoulders of Giants
For a milestone commit that I almost missed, it sure is an important one. Commit 100 represents a pretty big shift in how the entire engine operates, and it's all in the service of accomplishing a relatively simple goal: map collision detection. Basically, don't let the player walk through walls.
I had hit a conundrum: I wanted the Player object to be able to ask whether its future bounding box would land inside positive collision geometry. But currently, there was no way for the Player object to do that. All of the input-handling code for player control happens in the Player object and affects the player's position, so it makes sense to handle collision there too, adjusting its translation values correctly to prevent walking through walls. But there was no path in the program for the Player object to reach the MapManager class, which held the map information.
I could have made the content static, or stuck just the map info into gameState, but it didn't feel right. The core of the game's rendering code was still hanging out in MapManager.cpp, which never sat right with me, but the last attempt to move the rendering code had proven disastrous. No, I needed to overhaul this.
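The "ask before you move" query I wanted looks roughly like this. To be clear, this is a minimal sketch under my own assumptions, not the engine's code: the grid layout, tile size, and the names `isBlocked` and `canMoveTo` are all illustrative:

```cpp
#include <vector>

constexpr int TILE = 64;                     // assumed tile size in world units

// Illustrative 4x4 collision map: 1 = wall, 0 = walkable.
const std::vector<std::vector<int>> grid = {
    {1, 1, 1, 1},
    {1, 0, 0, 1},
    {1, 0, 0, 1},
    {1, 1, 1, 1},
};

bool isBlocked(float x, float y) {
    int gx = static_cast<int>(x) / TILE;
    int gy = static_cast<int>(y) / TILE;
    if (gy < 0 || gy >= (int)grid.size() || gx < 0 || gx >= (int)grid[0].size())
        return true;                         // out of bounds counts as solid
    return grid[gy][gx] != 0;                // anything non-zero is a wall
}

// Test the four corners of the player's *future* bounding box before
// committing the translation; reject the move if any corner hits a wall.
bool canMoveTo(float x, float y, float half = 10.0f) {
    return !isBlocked(x - half, y - half) && !isBlocked(x + half, y - half)
        && !isBlocked(x - half, y + half) && !isBlocked(x + half, y + half);
}
```

The hard part wasn't this check; it was giving the Player object a path to whatever owns the collision map, which is what the overhaul below is about.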
I looked for inspiration from Unity. I really enjoyed Unity's "Scene" and "GameObject" concept, which creates a simple, visually obvious approach to a scene graph.
I also looked to Flash, as I really enjoyed working with Flash's "MovieClip" system: scriptable, embedded, independently moving Flash timelines that could be manipulated by their parent timeline.
I knew what I had to do.
As much as I enjoyed working with UE4, I didn’t find much there that I wanted to replicate system wise. At least not yet.
As of this writing, my current operation is refactoring the heck out of MapManager. It's going from an instantiated class to a static class acting as a bit of a facade for reading and parsing map data and supplying a new or modified Scene object. This new Scene object will contain all applicable map data, from the tilemap and collision map, to the desired textures (coming soon), to all entities (also coming soon), including the Player object. Yes, the map will finally be able to dictate where the player spawns, instead of it being hard-coded.
Oh yes, and lovingly donated by the lovely Sifting comes a new, though incomplete, map editor.
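The shape of that refactor can be sketched as follows. This is a guess at the design, not the actual code: the `Scene` fields, the map-string format, and the `'P'` spawn marker are all hypothetical stand-ins for whatever the real parser does:

```cpp
#include <string>
#include <vector>

// Sketch: the Scene owns everything the map dictates, including the spawn.
struct Scene {
    std::vector<int> tileMap;
    int width = 0, height = 0;
    float spawnX = 0, spawnY = 0;            // player spawn comes from map data now
};

// Sketch: MapManager as a static facade, never instantiated; it only parses
// map data and hands back a populated Scene.
class MapManager {
public:
    MapManager() = delete;
    static Scene load(const std::string& data, int width) {
        Scene s;
        s.width = width;
        for (std::size_t i = 0; i < data.size(); ++i) {
            char c = data[i];
            if (c == 'P') {                  // hypothetical spawn marker
                s.spawnX = static_cast<float>(i % width) + 0.5f;
                s.spawnY = static_cast<float>(i / width) + 0.5f;
                s.tileMap.push_back(0);      // spawn tile is walkable
            } else {
                s.tileMap.push_back(c - '0');
            }
        }
        s.height = static_cast<int>(s.tileMap.size()) / width;
        return s;
    }
};
```

The appeal of the facade is that anything holding a Scene, including the Player object, can query the collision map without ever knowing how the map file was parsed.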
So that's where I'm at today. Branch feature/3 has grown beyond its original scope as I carefully dissect and move things around, and I know I ought to create another branch for this work, but right now it's just me and my own contributions. This is the last feature before I release Alpha 4, and then work will commence on Alpha 5, which is where I hope to onboard the rest of the Redacted Games team.
Whoo! This post is getting long. Thanks for reading this far. I hope to be writing more and more about Black Lotus's development as things happen and I have things to share!
Until next time,
A developer his entire adult life, Kyle spends his professional and free time finding new and interesting ways to solve the same boring problems (lest he drive himself insane).
When not slinging code, he can be found being the happiest father and husband to the best little family ever!