Friday, 24 April 2009

ReactOS

I discovered the ReactOS project about 3 years ago, near the start of the 0.3.x series, which is now drawing to a close as the 0.4.0 milestone release approaches. For those of you who have not heard of the project, the following is a quote from the front page of their web-site:

ReactOS® is a free, modern operating system based on the design of Windows® XP/2003. Written completely from scratch, it aims to follow the Windows® architecture designed by Microsoft from the hardware level right through to the application level. This is not a Linux based system, and shares none of the unix architecture.

The main goal of the ReactOS project is to provide an operating system which is binary compatible with Windows. This will allow your Windows applications and drivers to run as they would on your Windows system. Additionally, the look and feel of the Windows operating system is used, such that people accustomed to the familiar user interface of Windows® would find using ReactOS straightforward. The ultimate goal of ReactOS is to allow you to remove Windows® and install ReactOS without the end user noticing the change.

Please bear in mind that ReactOS 0.3.8 is still in alpha stage, meaning it is not feature-complete and is not recommended for everyday use.

ReactOS is to Windows NT what Linux and BSD are to classic UNIX: a free, open-source implementation of a popular and useful operating system. Programmers, FOSS advocates who do not want to abandon Windows, and anyone who is simply tired of Microsoft's policies, Windows licence fees and on-line activation should all visit the project website and give the alpha releases a test-drive in an emulator.

Aside from submitting bug reports, posting on the forum and sometimes editing the project wiki, I don't actively participate in ReactOS development myself. However, with the upcoming release of Windows Internals, 5th Edition, with contributions from ReactOS developer Alex Ionescu, and the free time I will have now that my University course is nearing its conclusion, this may change in the future.

GNU/Linux

Until only a few years ago, I had no real experience of UNIX-based systems save for some minor University-related work. However, over the last few years I have spent some time attempting to improve my knowledge of the POSIX world as part of my ongoing efforts to remain platform-independent.

At University, my UNIX experience consisted of using SuSE as part of a Networking/Computer Architecture course and using PS2 Linux for PlayStation 2 game development. Neither was very interesting - the networking coursework was largely unrelated to my degree and was made needlessly difficult by requiring us to use an unfamiliar UNIX OS in place of the Windows environment we knew. As for the PlayStation 2 development, our coursework was to create a Pong clone using a horrible mess of UNIX and PS2 library code under a KDE desktop that insisted on crashing at least once a day. That course module required me to sign a non-disclosure agreement, possibly to prevent me from telling others how terrible the Game Console Development course was. All in all, these experiences did nothing to improve my opinion of UNIX versus Windows NT.

Outside of University, the only UNIX-related experience I had consisted of about 10 minutes with an Ubuntu LiveCD and whatever time I could spare to experiment with my new Apple Mac Mini and its PowerPC processor. However, during my industry placement year (2007-2008) away from the University, I decided to experiment with UNIX some more and installed Xubuntu onto a spare hard disk drive. Forcing myself to use it for a few weeks wasn't as arduous as it might have been, thanks to the availability of some of my favourite applications from Windows, such as OpenOffice.org, Firefox, Thunderbird, VLC and Code::Blocks.

However, I encountered more bugs and general inconveniences under UNIX than I have ever had to deal with under a standard, out-of-the-box Windows XP installation. The X Window System never seems to work correctly, and installing graphics drivers is an annoying and error-prone experience to say the least. Also, VLC refused to run after Code::Blocks was installed, due to incompatibilities with the newer wxWidgets library. So-called "DLL hell" might have been an issue under the DOS-based Windows 9x design, but it is rarely a problem under the newer (and largely unrelated) NT architecture. It is disappointing to see that it is still an issue under the allegedly superior UNIX design.

During this time, I played around with a few other operating systems, including the OpenSolaris LiveCD. Those of you who have an interest in alternative UNIX systems aside from GNU/Linux and BSD may wish to download this LiveCD for yourselves.

Shortly before I completed my year working as a programmer and general IT technician, the replacement student, David, arrived. David seems to be an excellent fellow, with a much better knowledge of UNIX than mine. During our month working together, he suggested that we completely reinstall our dying Red Hat server with Debian GNU/Linux.

The end result was not one but three Debian-powered machines:
  • A "test run" server, which had previously been retired from service and put into storage.
  • The "real" server, which ended up being used in tandem with the "test run" server.
  • A desktop machine, built by myself from spare components.
The desktop machine became my primary workstation, as David was now using the machine that had previously belonged to me. It was on this that I constructed a GUI front-end for the AMANDA backup program using Code::Blocks and the GTK+ library. I still have this on a CD, licensed under the GPL v3, but as I lack access to any sort of tape backup hardware, I cannot continue to work on it.

GTK+ was interesting to work with but has some major flaws:
  • It's much more limited than the Win32 API.
  • It was annoyingly difficult to configure projects so that they could link and build correctly.
  • GTK+ is much less efficient than pure Win32 code under Windows.
  • It seems to be riddled with bugs, although this might have been caused by Debian 4.0 using an older version of the GTK+ development libraries in its repositories. I'm fairly certain that the code I wrote was correct and bug-free, and yet my application seemed to either crash or output garbage far more often than it should have.
Moving on to the present day, my room currently has five computers in it:
  • My main tower, which is currently running Windows Server 2008.
  • My old laptop, running Windows XP Home SP3.
  • My Mac Mini running Mac OS X 10.3.9 (currently being stored on a shelf).
  • A spare tower, currently in a state of disassembly (previously ran Windows Server 2008).
  • My UNIX tower, currently running gNewSense 2.2.
The gNewSense tower runs quite happily with a 450MHz Pentium III processor and 256MB of RAM using the integrated Intel graphics chip. I have unearthed the copy of Introduction to Unix that my University gave me as part of the Networking course, installed Code::Blocks and managed to rig up a simple OpenGL/SDL program that renders an animated .MD3 mesh at an acceptably smooth frame-rate. The same Code::Blocks project file recompiles on my main Windows tower with no alterations save for using a different build target.

I've not been without problems, however - Debian 5.0.1, which I attempted to use before trying gNewSense, didn't seem to like my graphics chip and X.org refused to run. The gNewSense GNOME GUI runs fine, but there seems to be a fault with the hardware OpenGL acceleration that causes corruption around the mouse cursor. This is not usually a problem for me, since SDL windows do not draw the standard X window system cursor and so corruption does not occur.

Yesterday, the X window system crashed when I attempted to create a simple OpenGL/SDL window. The CTRL-ALT-BACKSPACE combination that I have become so familiar with had no effect and attempting to CTRL-ALT-F# to a terminal display was equally useless. After forcing the machine to power off, I restarted it, logged in and ran the program again. It worked flawlessly. What, then, was the cause of the X server crash?

Lastly, my router loses its connection to the Internet whenever the gNewSense tower is online. This might be a problem with the router, with the network card, or with the network driver or kernel version that gNewSense uses, but for now I have simply disconnected the gNewSense tower from the network.

Saturday, 28 June 2008

Mobile Phone Gaming with Java.

I've been hearing a lot about "Casual Games" recently and have been thinking about looking into making some myself. While developing for the PC or Apple Mac is probably the simplest way to do this, it's not very interesting since I do that all the time anyway. Developing games for mobile phones, however, is much more novel and also more challenging because of the comparatively limited power, so I decided to start investigating how to do this. After downloading and installing the J2SE and J2ME development kits from Sun, I began experimenting with the provided phone emulator and my actual PDA to see what could be done.

As John Carmack wrote, a major obstacle when designing games for phones is the method of input. For example, on my Phone/PDA all I have to work with is a D-pad with a "Fire" button in the middle. First of all, having the fire button in the center of the movement buttons is not a good design for any sort of game where input needs to be quick and precise, and, secondly, my phone detects only one button press at a time, so there can be no diagonal movement and no movement while pressing fire. These are serious limitations for any sort of dynamic game.

One solution that I have come up with is to use the touch-screen instead of the D-pad. This allows the player to control their movements with the stylus and use the fire button without the D-pad interfering. Unfortunately, not all phones have a touch screen, so designing a game that relies on a stylus for input would prevent it from being played on several devices. Also, the phone emulator provided with the J2ME development kit does not provide touch-screen emulation.

Turn-based gameplay seems to be the best idea here. Something like Advance Wars or a Final Fantasy 6-style RPG would work well with a bit of thought. Among dynamic games, the likes of Pong or Pac-Man would also be feasible, because they need no Fire button and only one directional button pressed at a time.

Sunday, 16 March 2008

An overdue update.

It's been a while since I posted something here, so here's an update on my progress.

Firstly, the bad news: the SURGE game engine project has now been cancelled. It's a pity, because it got quite far, as can be seen from this feature list:

  • Milkshape 3D .MS3D support for ragdoll actors. Animation data was ignored. Automatic texture loading was implemented.
  • Quake III .MD3 support for non-ragdoll actors. Automatic texture loading was implemented.
  • Incomplete Quake III .BSP support for the game world. Bezier surfaces were neither drawn nor added to the world collision data (though Polygon and Mesh surfaces were) and the actual BSP visibility culling data was ignored by the renderer. "Flare" (or "Billboard") surfaces were not drawn either. Automatic texture loading was implemented.
  • Simple .TGA support (only uncompressed 24- or 32-bit .TGA files were allowed).
  • A basic actor system that could handle either skeletal or morph actors transparently.
  • Realistic physics via ODE.
  • Cross platform. Uses only APIs that are available on several platforms (OpenGL, SDL, ODE, etc.).
  • Optional multithreading (to speed up physics calculations on multi-core machines).
  • Some basic audio.
There are a few reasons why I stopped working on this project, but one of the main ones is that ODE 0.8 and 0.9 seem to have a bug that crashes the physics system in certain scenarios. The bug is quite rare with any collision type other than trimesh-trimesh, but it happens almost instantly whenever two trimeshes touch each other - for example, when a ragdoll touches another ragdoll, or the world (I represented the world BSP data as an ODE trimesh). As my engine was designed on the assumption that ODE would always be the physics system used, switching to another physics engine would require me to remove about 50% of the engine's code.

Now that I come to look back on the project a few months later, the coding wasn't very good. Graphics, logic and physics code were all intermixed and not separated out very cleanly. The fact that ODE was so integral to the engine working correctly is testament to this.

However, after taking a break for a few months, I'm now working on a completely new project that will hopefully be much better, codenamed the "Tidal" project. The name was chosen because it continues a sort of theme - Edd has been working on the Drift engine and my previous engine was called the Surge engine, words which I think have a sort of oceanic quality to them (to be set adrift, the surge of the tides, etc.), so I decided to continue my naming along these lines.

Tidal is currently almost at the same point that Surge was before I abandoned it:

  • Extremely modular design. This is to avoid the problems I had with Surge being too reliant on ODE. The motto for Tidal is "Don't rely on the API." It also makes the code cleaner.
  • Cross platform. The engine's .h files don't reference any specific APIs (unlike Surge's), meaning that their partner .c files can implement them however they like. For example, graphics.c could use OpenGL, DirectX or even software rendering to implement the core rendering functions specified by graphics.h (see the sketch after this list).
  • Written entirely in strict ISO C90-compliant code. There's no real reason for this other than that I feel it'll make development more interesting.
  • Milkshape 3D .MS3D support (although it's not actually used for anything yet). Most of the code for this was taken from Surge.
  • Quake II .MD2 and Quake III .MD3 support. Automatic texture loading has been implemented in a much better way. MD2 files have a small issue with some texture coordinates being wrong, due to the way I've loaded the MD2 data into structs designed to hold MD3 models, but this is not really a problem right now, especially since MD3 should be the main "morph" mesh format anyway. Most of the MD3 code was taken from Surge.
  • Incomplete Quake III .BSP support for the game world. Most of the code for this was taken from Surge.
  • Simple .TGA support (only uncompressed 24- or 32-bit .TGA files are allowed). Most of the code for this was taken from Surge.
  • Realistic physics.
  • New ActorClass-based actor system. Complete, aside from scripting (which I'm working on now). Currently, ragdoll actors are not supported.
  • Very incomplete features include shaders, audio, networking and a GUI system.
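
To illustrate the modular idea, here is a minimal sketch of what the header/implementation split might look like. The gfx_clear name is purely illustrative - it is not Tidal's actual API.

    /* graphics.h - a sketch of an API-neutral interface. No OpenGL,
       DirectX or other platform-specific types appear here. */
    #ifndef GRAPHICS_H
    #define GRAPHICS_H

    void gfx_clear(float r, float g, float b);

    #endif

    /* graphics.c - one possible implementation, using OpenGL. A DirectX
       or software renderer could replace this file without touching
       graphics.h or any code that includes it. */
    #include "graphics.h"
    #include <GL/gl.h>

    void gfx_clear(float r, float g, float b)
    {
        glClearColor(r, g, b, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }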
Once the scripting system is finished, it should be possible to make a simple game. I'll be adding support for more file formats as the project goes on.

Sunday, 7 October 2007

Ragdolls with Lip-Sync.

An idea I've been thinking over for having a skeletally animated mesh with the ability to lip-sync to speech.
  • Imagine a human ragdoll mesh. The head can be represented by a single body. If we remove the head (but nothing else, not even the neck), we still have a fully functioning ragdoll. It is important to note that the neck joint to which the head was attached should not be removed. Let us call the neck joint "NTAG".
  • Now imagine that the head mesh that we have removed is stored as a separate file in MD3 format. This MD3 file contains a single "tag", which is also called "NTAG".
  • When the ragdoll is loaded, the head mesh is loaded too and is rejoined with the body by linking the MS3D joint "NTAG" and the MD3 tag "NTAG".
  • Well, what have we achieved? Not much - we have the same model as before, only now the head is stored as a frame-by-frame model while the body is stored as a ragdoll.
  • Next step: For the MD3 head model, create a separate animation frame for every sound the human mouth can make. Each frame should be named appropriately (e.g. the frame of the head making an "eeh" sound should be named "eeh", and so on).
  • Using SDL_mixer, load some speech sounds into the project.
  • Hook up an SDL_mixer sound channel to an output preprocessor function. This allows the user to access any raw sound data for that channel just before it is played.
  • Analyse the speech WAV as it is played using this callback function. Whenever an "ahh" sound is played, set the MD3 head mesh to the frame named "ahh", and so on (see the sketch after this list).
  • To improve the effect, use linear interpolation for the MD3 frame transitioning to smooth the mouth movement and make it look more natural.
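
A minimal sketch of how the callback hookup might look, assuming 16-bit signed audio (AUDIO_S16SYS) and using average amplitude as a crude stand-in for real phoneme detection. set_head_frame is a hypothetical hook into the renderer, not an SDL_mixer call.

    #include "SDL_mixer.h"

    extern void set_head_frame(const char *frame_name); /* hypothetical */

    /* Called by SDL_mixer just before the channel's data is mixed. */
    static void speech_analyser(int chan, void *stream, int len, void *udata)
    {
        Sint16 *samples = (Sint16 *)stream;
        int count = len / 2;
        long total = 0;
        int i;

        (void)chan;
        (void)udata;

        for (i = 0; i < count; i++)
            total += (samples[i] < 0) ? -samples[i] : samples[i];

        /* Loud buffer: open the mouth; quiet buffer: close it. A real
           implementation would match sounds to phoneme-named frames. */
        if (count > 0 && total / count > 4000)
            set_head_frame("ahh");
        else
            set_head_frame("rest");
    }

    void play_speech(Mix_Chunk *speech)
    {
        int channel = Mix_PlayChannel(-1, speech, 0);
        if (channel != -1)
            Mix_RegisterEffect(channel, speech_analyser, NULL, NULL);
    }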

Progress on ragdolls.

Introduction.

I've made some good progress with my MilkShape-ODE Ragdoll project since I last posted here. Currently, the project can load a MilkShape3D model, find and load any associated textures and create a set of linked joints and bodies to form a skeleton. When the model is rendered, the positions of its vertices are transformed based on the skeletal data.

Terminology.

ODE Bodies are simply points in space with a "mass" value. They can be joined together with other bodies using "ODE Joints". To illustrate how this works, imagine that your upper arm is one "body" and your lower arm is another, with your elbow being a "joint".
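
As a minimal sketch of that arm in ODE (the positions are arbitrary and the function is my own, not part of ODE):

    #include <ode/ode.h>

    void create_arm(dWorldID world)
    {
        dBodyID upper_arm, lower_arm;
        dJointID elbow;

        upper_arm = dBodyCreate(world);
        lower_arm = dBodyCreate(world);
        dBodySetPosition(upper_arm, 0.0, 1.5, 0.0);
        dBodySetPosition(lower_arm, 0.0, 1.0, 0.0);

        /* The hinge joins the two bodies at the "elbow" position and
           rotates about the X axis. */
        elbow = dJointCreateHinge(world, 0);
        dJointAttach(elbow, upper_arm, lower_arm);
        dJointSetHingeAnchor(elbow, 0.0, 1.25, 0.0);
        dJointSetHingeAxis(elbow, 1.0, 0.0, 0.0);
    }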

Bodies cannot collide with each other, as they are used to simulate dynamics, not collision. However, they can be paired with an "ODE geom". Geoms have no dynamics data (such as mass or inertia) but do have collision data. So if you were to create a bowling ball in ODE, the ball's body would be what gravity pulled down on - but the ball's geom would be what prevented gravity from pulling it through the floor.
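
A sketch of the bowling ball, assuming an existing world and collision space (the density and radius figures are placeholders):

    #include <ode/ode.h>

    void create_bowling_ball(dWorldID world, dSpaceID space)
    {
        dBodyID ball;
        dGeomID shape;
        dMass m;

        /* The body holds the mass that gravity pulls down on... */
        ball = dBodyCreate(world);
        dMassSetSphere(&m, 7.0, 0.11); /* density, radius */
        dBodySetMass(ball, &m);

        /* ...while the paired geom supplies the collision shape that
           keeps the ball from falling through the floor. */
        shape = dCreateSphere(space, 0.11);
        dGeomSetBody(shape, ball);
    }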

Please note that ODE joints are not the same thing as MS3D joints. In ODE, vertices are associated with ODE bodies. Two ODE bodies can be linked together with an ODE joint. In MilkShape3D, there is no such thing as a body - every vertex in a model is linked to an MS3D joint. An MS3D joint may have a parent joint or may be independent. When an MS3D joint is moved or rotated, all vertices associated with that MS3D joint move with it (see my last post for more details on how MilkShape3D joints work). So, MS3D joints are roughly a combination of both ODE joints and ODE bodies.

How the skeletal data is generated.

The first step is to generate a list of ODE bodies. This is done fairly simply: the number of ODE bodies created for a model is equal to the number of MS3D joints specified in the mesh file. ODE bodies are positioned by adding the position vector of every associated vertex together and then dividing the result by the total number of vertices used, so they are generally positioned around the center of a "vertex cloud". Any ODE bodies which do not have any associated vertices are simply positioned at [0, 0, 0], to avoid a divide-by-zero error.
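
A sketch of this positioning step, with the vertex positions and their MS3D joint assignments passed in as plain arrays (the function name and parameters are illustrative):

    #include <ode/ode.h>

    void position_body(dBodyID body, const float (*verts)[3],
                       const int *bone_ids, int num_verts, int joint_index)
    {
        float sum[3] = {0.0f, 0.0f, 0.0f};
        int count = 0;
        int i;

        /* Sum the positions of every vertex assigned to this joint. */
        for (i = 0; i < num_verts; i++)
        {
            if (bone_ids[i] == joint_index)
            {
                sum[0] += verts[i][0];
                sum[1] += verts[i][1];
                sum[2] += verts[i][2];
                count++;
            }
        }

        if (count > 0)
            dBodySetPosition(body, sum[0] / count, sum[1] / count,
                             sum[2] / count);
        else
            dBodySetPosition(body, 0.0, 0.0, 0.0); /* no vertices */
    }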

The next step is to join these bodies together using ODE joints. This is not quite as simple as generating ODE bodies. There are two reasons for this:
  • MS3D joints are joined to other MS3D joints, while ODE joints are connected to ODE bodies.
  • MS3D joints do not need to have a parent, whereas if an ODE joint is not attached to anything then it will join itself to the environment. This has the same effect as nailing something to a wall, because the environment is the world - meaning it doesn't move.
The answer here is to create an ODE joint for every MS3D joint that has a parent, attaching it to the ODE body with the same index value as that MS3D joint (so if you were working with MS3D joint #3, you would attach the current ODE joint to ODE body #3) and to the ODE body belonging to that MS3D joint's parent. This is not as complex as it sounds - it only becomes fiddly because there are fewer ODE joints than there are MS3D joints.
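
A sketch of this step, with each MS3D joint's parent index and position passed in as arrays (a parent index of -1 marks a parentless joint; I use ball joints here for illustration, though ODE offers several other joint types):

    #include <ode/ode.h>

    void create_joints(dWorldID world, dBodyID *bodies, const int *parents,
                       const float (*joint_pos)[3], int num_joints)
    {
        int i;

        for (i = 0; i < num_joints; i++)
        {
            /* Parentless MS3D joints get no ODE joint at all, so
               their bodies are not nailed to the environment. */
            if (parents[i] >= 0)
            {
                dJointID j = dJointCreateBall(world, 0);
                dJointAttach(j, bodies[i], bodies[parents[i]]);
                dJointSetBallAnchor(j, joint_pos[i][0], joint_pos[i][1],
                                    joint_pos[i][2]);
            }
        }
    }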

Current work.

There are still a few things missing from the simulation. Firstly, the ODE bodies do not have any mass values set. I think that the best solution would be to assume a uniform density value for every body in the mesh (for example, 0.4) and then approximate the volume covered by the vertices of an ODE body using a cuboid or capped-cylinder shape.
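
For the capped-cylinder idea, the relevant ODE call would look something like this sketch (the radius and length would be estimated from each body's vertex cloud; the figures here are placeholders):

    #include <ode/ode.h>

    void set_body_mass(dBodyID body)
    {
        dMass m;

        /* density 0.4, aligned along the Y axis (direction 2),
           radius 0.1, length 0.4 - all placeholder values */
        dMassSetCappedCylinder(&m, 0.4, 2, 0.1, 0.4);
        dBodySetMass(body, &m);
    }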

Secondly, no ODE geoms are generated yet and so there is no collision. Originally, I was planning on approximating ODE collision geoms in the same way as I would approximate mass but, while I can get away with comparatively slight inaccuracies in mass approximation, collision errors are more "visible" to the end user. My current thoughts are to generate a separate trimesh for each ODE body in the mesh. This would be very accurate, but it could be computationally expensive.

Possible collision problems.

Neither of the above approaches really solves the problem of triangles whose vertices are associated with different bodies. While most triangles in a mesh belong to a single ODE body, triangles which join the vertices of two ODE bodies together (such as "elbow triangles" which stretch from the upper-arm body to the lower-arm body) will not have any collision data.

Possible solutions are to create an ODE sphere geom for each ODE joint or to generate a trimesh for the entire model every time it changes. The first approach could lead to redundant ODE geoms being created for joints in locations such as at the top of a string, several feet away from a "puppet" model below and would also require me to find a way to guess what the radius of the sphere should be. The second approach could have issues if it interferes with the "temporal coherence" data that ODE uses for trimesh collision.

Sunday, 16 September 2007

Skeletal Animation and Ragdoll Physics using ODE and MilkShape3D

Introduction:

I haven't worked with skeletal animation or ragdolls before, though I am aware of how the theory works and I am familiar with how it could be implemented using the Open Dynamics Engine.

In addition, when working with 3D models, I have always used Quake 1 .MDL, Quake 2 .MD2 and Quake 3 Arena .MD3 files. These formats use "mesh deformation" or "frame-by-frame" animation techniques which make them unsuitable for ragdoll work.

For those of you who are not familiar with the terms "mesh deformation" or "skeletal animation" I offer this explanation:

Mesh Deformation

In "Mesh Deformation", a model is animated in the same way as a flip-book. A flip-book animation of a running stick figure would consist of, for example, 5 different drawings of the stick figure, each one with it's arms and legs in slightly different positions, so that when the drawings are cycled through quickly it appears that the figure is moving. This is why "mesh deformation" is also known as "frame by frame" animation - a 3D model which uses this technique would actually consist of several slightly different models (in the same way that the stick-figure consists of several slightly different drawings) but only one of these models would be shown at a time (in the same way that only one stick-figure drawing would be visible at any precise moment). The model would cycle through these separate frames of animation much like a 3D film, creating the illusion of movement.

This basic technique has a slight problem, though - if the frames of animation change too slowly then the model will appear to be moving jerkily, as the user is able to make out each frame individually, rather than see the overall "effect" of the frames as they change. A perfect example of this is in the original Quake 1 game, where monsters move in a rather jerky fashion no matter how high-end your computer is in terms of graphics power.

There are two solutions to this:

The most obvious approach is to simply use more frames of animation, more rapidly. This means that, for example, the running stick-figure now has 10 frames of animation rather than 5. This allows the artist to create "in between" frames so that the movement of their stick-figure looks more fluid. This technique can be used for both flip-book animations and 3D meshes. However, this approach not only requires the artist to do a lot more work, but a (drawing/model) with 10 frames of animation will require twice as much (paper/computer memory and disc-drive space) as one with only 5.

The alternative, used in Quakes 2 and 3, is to use "linear interpolation" (also known as "lerping"). This essentially means that, if a game engine should show animation frame #1 at time=30 seconds and animation frame #2 at time=34 seconds and the current time is 32 seconds, the game will work out a new position for every vertex in the model, exactly halfway between the positions specified by animation frame #1 and animation frame #2. The artist is only required to create a few "key frames" for the model's animation and the grunt work of creating "in-between" animation frames is handled automatically by the game engine. Even better, as this work is done "on the fly", the model will not only transition smoothly between frames 1 and 2, but also between frames 4 and 9, 7 and 3 or anything else - useful in games, where you may want to go from a running animation to a death animation, a running animation to a shooting animation or a running animation to a crouching animation. This is the same technique used by Adobe Flash for animating shapes, although Flash refers to it as "tweening" (as it is used to do the in-betweening of key-frames).
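
A minimal sketch of the per-vertex calculation, where t runs from 0.0 (frame A) to 1.0 (frame B) and would be computed from the current time as (now - t1) / (t2 - t1):

    /* Blend every vertex component of two key frames into out.
       num_floats is 3 times the number of vertices (XYZ each). */
    void lerp_vertices(const float *frame_a, const float *frame_b,
                       float *out, int num_floats, float t)
    {
        int i;

        for (i = 0; i < num_floats; i++)
            out[i] = frame_a[i] + (frame_b[i] - frame_a[i]) * t;
    }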

Skeletal animation.

"Skeletal" (also "jointed" or "boned") animation is a truly distinct concept from that of mesh deformation.

First, a series of "joints" is created. A joint consists of a unique joint ID (such as a number or a name string), a parent ID (similar to the above, but instead of referring to itself it refers to the unique ID of another joint), an XYZ position in 3D space and an angle (probably stored as a quaternion or a matrix).

If a joint does not have a parent - for example, joints may be numbered starting from 1 and the parent ID of the joint is set to zero - then it is completely independent of external influences and it can be positioned or rotated however it wants to be. If the joint does have a parent joint then its position and angle will be influenced by the position and angle of that parent; it can be referred to as a "child" joint. An imaginary straight line connecting a parent joint to a child joint is called a "bone", and a collection of bones forms a "skeleton".

To illustrate, imagine that you have one "parent joint" positioned at your shoulder and another "child joint" positioned at your elbow. When your shoulder joint is rotated, your elbow joint is moved - but when you rotate your elbow joint, your shoulder joint does not change position. Now imagine that your entire body is "jointed" in this way. Every joint in your body can be represented as either a parent joint, influencing others, or a child joint, being influenced. Some joints may even be both - your elbow may be the child of your shoulder but it is also the parent of your wrist. Additionally, parent joints may have multiple children - for example, one wrist influences five fingers.

Now that we understand how joints can influence each other, we can go on to explain how models can use jointed techniques in animation. Once an artist has created a skeleton for a character, they can create polygons around the bones as "flesh". Every vertex in the model is then mapped to a joint, so that, when the joint moves or rotates, the vertices will move along with it (in much the same way that a child joint will move along with a parent joint).
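
A sketch of that transformation for a single vertex, using its joint's current 4x4 matrix in column-major (OpenGL-style) order:

    /* Transform a model-space vertex by the 4x4 matrix of the joint
       it is mapped to, producing its final animated position. */
    void skin_vertex(const float m[16], const float in[3], float out[3])
    {
        out[0] = m[0] * in[0] + m[4] * in[1] + m[8]  * in[2] + m[12];
        out[1] = m[1] * in[0] + m[5] * in[1] + m[9]  * in[2] + m[13];
        out[2] = m[2] * in[0] + m[6] * in[1] + m[10] * in[2] + m[14];
    }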

The result is a model with a skeleton that acts in a similar manner to that of a human. The model can be loaded into a computer game and the physics engine will be able to simulate forces acting upon the model's skeleton in the same way that forces such as gravity influence our own skeletons in reality. Combine this with collision detection and it becomes a "ragdoll" of the kind seen in many recent video games.

More recently, the idea of using a system of "weights" to provide more realistic skeletal animation has become popular. However, this is not an area into which I have done much research as of yet.

"Tagged" Animation

To be completely fair, Quake 3 .MD3 meshes do allow a basic form of "jointed" animation (in addition to the legacy frame-by-frame technique which they have inherited from the earlier MDL/MD2 formats).

Each player model in Quake 3 consists of 4 parts: a "head" model, an "upper body" model, a "legs" model and a "weapon" model. These are stored as separate .MD3 files, each of them containing a list of "tags". A "tag" is simply an XYZ position, an angle stored as a 3x3 matrix and a name string (which is used as a unique identifier). This makes them almost, but not quite, the same thing as joints, as they do not have "parent tags" - each tag is influenced by nothing but itself. To connect the head model to the upper body, both the head and the upper body files contain a "neck" tag. When the models are loaded, the head mesh is repositioned so that its "neck" tag has the same position and orientation as that of the torso "neck" tag.
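
A sketch of a tag and the attachment calculation (the struct layout follows my description above rather than the exact on-disc MD3 format):

    typedef struct md3_tag_s
    {
        char  name[64];     /* unique identifier, e.g. "neck" */
        float origin[3];    /* XYZ position */
        float axis[3][3];   /* orientation as a 3x3 matrix */
    } md3_tag_t;

    /* Reposition one head-mesh point so that the head lines up with
       the torso's "neck" tag: rotate by the tag's axis vectors, then
       translate to its origin. */
    void attach_point(const md3_tag_t *neck, const float in[3], float out[3])
    {
        int i;

        for (i = 0; i < 3; i++)
            out[i] = neck->origin[i]
                   + neck->axis[0][i] * in[0]
                   + neck->axis[1][i] * in[1]
                   + neck->axis[2][i] * in[2];
    }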

There are disadvantages of this approach:
  • As I have already mentioned, tags cannot have parents and so skeletons and bones cannot be formed.
  • Models must be scattered across several files rather than grouped into a single, self-contained one.
  • Triangles cannot use any vertices which are stored in separate files. This means that each mesh file will appear to be physically unconnected from its fellows.
It is worth noting that id Software (the creators of the Quake series) were originally going to use skeletal animation for .MD3 meshes, storing animation sequences in separate .MD4 files. However, this was never finished and both the .MD4 file format and Quake 3 skeletal code remain incomplete (this is why Quake 4 meshes are called "MD5" - to prevent confusion with the half-finished MD4 skeleton specification).

The Project.

My current project aims to allow any skeletal mesh formation to be loaded into my physics engine and be treated as a ragdoll. Unfortunately, models stored in Quake .MDL, Quake 2 .MD2 and Quake 3 .MD3 format do not contain any skeleton data and are thus unsuitable for ragdoll models, meaning that I cannot re-use my old mesh loading code. Instead, I have opted to use the MilkShape3D binary ".ms3d" format. It is a very simple format and yet it contains all the information needed to generate a ragdoll and render it.

At the time of writing, I have finished creating a simple .ms3d loading API written in C (which loads almost everything aside from the joint animation data, which I don't need, as I only require the joints' positions and parent data) and I am writing a simple test application using ODE, OpenGL and SDL.
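
For the curious, the loader's public interface looks roughly like the sketch below - the names are illustrative rather than the actual API:

    typedef struct ms3d_joint_s
    {
        char  name[32];
        int   parent_index;  /* -1 if the joint has no parent */
        float position[3];
    } ms3d_joint_t;

    typedef struct ms3d_model_s
    {
        int           num_joints;
        ms3d_joint_t *joints;
        /* ... vertices, triangles, groups and materials ... */
    } ms3d_model_t;

    ms3d_model_t *ms3d_load(const char *filename);
    void          ms3d_free(ms3d_model_t *model);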

As I am not a skilled modeller, I have decided to borrow the "Angelyss" model from OpenArena and create a skeleton for it rather than create a whole new character from scratch. So far, I have imported the .MD3 file and created a skeleton but have not yet finished assigning vertices to joints. Click here to see the image.