I’m currently working on the basic game engine for Kung Fu Legends. I’m the only person working on the game and I’ve never successfully made a whole game engine, so progress is slow. To my delight, I have one of the main subsystems up and ready – the graphics subsystem, powered by the Ogre3d library. I can change the background colour and display text on the window, which is pretty basic but it means it’s working. I’m also doing a lot of work figuring out the software flow for the game engine (separate from the game world simulation), and I thought I’d talk about some of the ideas I’ve had there.
I’m taking the development of Kung Fu Legends pretty seriously, so I’ve organized the programming environment properly. Usually I’ll just create some project and blat down a
main.cpp, maybe a few supporting files, and see how it goes. Here’s a cropped screenshot of my Eclipse project showing how I’m organizing my source. It’s nothing special, but I’m keeping a neat divide between the different subsystems, and each directory corresponds (more or less) to one Manager for that subsystem. The design for the engine starts with the GameManager kicking everything off.
It initializes all the subsystems (logging, options, graphics, input, UI, sound…) in order. Once the engine itself is up and running, control transfers over to the WorldManager to initialize the game world and all the gameplay systems required. This cascading model allows me to drop components out if needed. I don’t have a strong dependency model, but I can enforce some requirements (for example, the UI manager requires the graphics to have been set up). Breaking the engine into subsystems like this lets me maintain a strict division of responsibilities, so we know who does what and who to blame if it all goes pear-shaped. For example, if the UI needs to put a certain image on the screen, it makes calls to the graphics subsystem, which checks whether the file is loaded. If not, it requests it from the File IO system, which asks the options subsystem where files are stored for this install. It’s not the UI manager’s job to do any of that other guff; it just asks the others to set things up while it does the bit it needs: putting an image at a certain spot. A bonus to this approach is that if I develop a nifty way to load files quicker, all subsystems benefit immediately.
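To give a feel for the cascading start-up idea, here’s a minimal sketch. All the class names (Subsystem, UISubsystem, the GameManager members) are hypothetical illustrations for this post, not the actual Kung Fu Legends code; the real managers obviously do far more than flip a flag.

```cpp
#include <stdexcept>
#include <string>
#include <utility>

// A generic subsystem: the GameManager brings these up in order.
class Subsystem {
public:
    explicit Subsystem(std::string name) : name_(std::move(name)) {}
    virtual ~Subsystem() = default;
    virtual void init() { initialized_ = true; }
    bool initialized() const { return initialized_; }
    const std::string& name() const { return name_; }
private:
    std::string name_;
    bool initialized_ = false;
};

// The UI subsystem enforces its one requirement:
// graphics must already be up before the UI starts.
class UISubsystem : public Subsystem {
public:
    explicit UISubsystem(const Subsystem& graphics)
        : Subsystem("ui"), graphics_(graphics) {}
    void init() override {
        if (!graphics_.initialized())
            throw std::runtime_error("UI requires graphics to be set up");
        Subsystem::init();
    }
private:
    const Subsystem& graphics_;
};

class GameManager {
public:
    void run() {
        // Strict initialization order; dropping a component out is as
        // simple as not calling its init() here.
        logging_.init();
        options_.init();
        graphics_.init();
        ui_.init();  // throws if graphics was skipped
        // ...then hand control over to the WorldManager.
    }
    Subsystem logging_{"logging"}, options_{"options"}, graphics_{"graphics"};
    UISubsystem ui_{graphics_};
};
```

The nice property is that each requirement lives inside the subsystem that cares about it, rather than in one giant dependency table.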
This framework also allows me to set up a test environment separate from the game engine (this is the stuff under the “tester” folder). This can set up subsystems for a minimal testing regime and tweak things without polluting the main game manager. Making the change between test code and real code can be as simple as setting a compile variable (i.e. a
#define) and hitting “compile”.
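The switch itself can be as dumb as a preprocessor conditional. A hypothetical sketch (the `TEST_BUILD` name and `entryPoint` function are made up for illustration; in practice you’d pass something like `-DTEST_BUILD` on the compiler command line rather than editing the source):

```cpp
#include <string>

// Compile-time switch between the real game and the tester harness.
// Define TEST_BUILD to get the minimal testing regime instead.
std::string entryPoint() {
#ifdef TEST_BUILD
    // Tester path: spin up only the subsystems the current test needs.
    return "tester";
#else
    // Normal path: the GameManager initializes everything in order.
    return "game";
#endif
}
```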
I also have a separation of duties in terms of intent for flexibility and accessibility. You’ll notice a folder called “output”. This is how the game world interacts with the game engine. It emits certain output events. The OutputManager’s role is to convert these to appropriate UI elements. However, partway through development I built up a good friendship with a person who is deaf. At about the same time, I watched a video of Gabe Newell of Valve Software talking to a local deaf society about gaming and making games accessible. I thought it’d be neat to make my game capable of being accessible to a wide audience, including blind and deaf players. By abstracting the output process, I realized I could (on-the-fly) translate output events to appropriate outputs. My game isn’t intended to be a full 3D action game full of spectacle and flashy graphics, but more along the lines of interactive fiction, a turn-based RPG or a less chaotic version of The Sims. I can take the processing hit to gain this ability to reach a wider audience.
The vanilla setup of the game would display many game events via text, with sounds or graphical effects to accentuate the experience. In the background, the OutputManager would have a variety of smaller Observers attached to it[1. When I say Observer here, I definitely mean the Gang-of-Four Observer Pattern.]. A SoundObserver would observe a SoundEvent and ask the audio subsystem to play a specific sound. For a deaf player’s setup, the SoundObserver can be augmented or replaced with a SubtitleObserver, which would instead output a subtitled explanation of the sound. A blind player’s setup could have an alternative series of Observers that, instead of sending visual events to the fancy (custom) UI, would pipe textual descriptions to files or pipes (for established screen readers), or to a voice synthesis system in the game itself. You can also process the output before you deliver it, so you can convert a line of text to its phonetic counterpart to help the text-to-speech system sound better. Apart from these options being available all the time and changeable on the fly, the best thing about this framework is that the Observers are optional but can work in tandem. A single Event can invoke a sound, some textual output and subtitling. This means a user can customize their experience, and you can support neat setups like a deaf player and a regular player interacting with the same game at the same time, with both getting the output most useful to them.
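The Observer fan-out described above can be sketched in a few lines. Again, every name here (SoundEvent, SoundObserver, SubtitleObserver, OutputManager) is an illustrative stand-in rather than the real engine code, and the real observers would call into the audio and UI subsystems instead of collecting strings:

```cpp
#include <string>
#include <vector>

// One output event emitted by the game world.
struct SoundEvent {
    std::string soundId;      // e.g. "door_creak"
    std::string description;  // e.g. "A door creaks open."
};

// Gang-of-Four Observer interface for output events.
class OutputObserver {
public:
    virtual ~OutputObserver() = default;
    virtual void onSound(const SoundEvent& e) = 0;
};

// Default setup: ask the audio subsystem to play the sound.
class SoundObserver : public OutputObserver {
public:
    std::vector<std::string> played;  // stand-in for audio->play(...)
    void onSound(const SoundEvent& e) override {
        played.push_back(e.soundId);
    }
};

// Accessible setup: emit a subtitled explanation of the sound instead.
class SubtitleObserver : public OutputObserver {
public:
    std::vector<std::string> subtitles;
    void onSound(const SoundEvent& e) override {
        subtitles.push_back("[" + e.description + "]");
    }
};

class OutputManager {
public:
    void attach(OutputObserver* o) { observers_.push_back(o); }
    void emit(const SoundEvent& e) {
        // A single event fans out to every attached observer, so sound
        // and subtitles can run in tandem for different players.
        for (auto* o : observers_) o->onSound(e);
    }
private:
    std::vector<OutputObserver*> observers_;
};
```

Attaching both a SoundObserver and a SubtitleObserver to the same manager is exactly the “deaf player and regular player at the same game” setup: one `emit` call, two tailored outputs.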
There are a few other neat tricks that fall under this framework. You can have an RSS Observer that will take in events and build up a fantasy RSS feed of your character’s travels. You can publish that RSS feed on the web and share your stories in diary form. You can also have a network relay Observer that will send your current events across the Internet to others so they can sit back and watch your story unfold in real-time with you, or perhaps collaboratively play the game with you.
I don’t think this is a particularly new idea, but I think it’s a neat one. Keeping it modular like this helps the development after you pay the small initial cost for the framework. Next time I talk about the Kung Fu Legends game engine, I might explain some of my ideas for the user interface and explain why I’m taking my time getting to the point where I can make pretty pictures on the screen.