Entropy's picture

LinkSphere

LinkSphere is my current project.

LinkSphere is a shmup/RTS/music game.

LinkSphere is something like a cross between Rez, Darwinia and Gate 88.

Ideally, it will be playable on-line, and customisable for music and other aspects of the game.

A friend of mine is handling the actual music - hopefully, while the rest of his coursemates are handing in their soundtrack pieces on CDs this coming May, he'll be handing in a (probably single-player) LinkSphere Alpha release and saying "Play this!"

I'm not releasing source code at the moment for a number of reasons -- not least that it's embarrassingly messy and probably follows poor C# conventions (should I really be using so many public static classes?) -- but see the screens below.

At the moment, a separate thread plays a steady "thump, thump, thump, thump" at 120 BPM, with off-beat hi-hats playing when the Avatar moves. Eventually, I'd like every entity in the game to play part of a tune when they are close enough to the Avatar, mixing together along with the sound effects to make the music. Most entity movements, and some of the Avatar's more intricate moves, will happen on the beat, so the player should feel like they are playing "to the beat".

For on-the-fly music generation, I obviously need a pretty accurate timing thread, so I'll have to be careful to optimise the code. If anyone's feeling helpful, I'd welcome advice on how to keep accurate timing. Obviously, Thread.Sleep(...) isn't the most accurate, and in my experiments it's liable to throw everything out of sync if for some reason the thread executes a moment too late. I keep reading that DateTime.Now.Ticks is both inaccurate and slow to call. I'm working on a possible solution right now, which I'll post tonight if I can, as I'd like some feedback on how well people think it'll work.
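For illustration, here's the sort of thing I have in mind - a rough, untested sketch that schedules every beat against an absolute Stopwatch timeline, so a late wake-up doesn't accumulate into drift. OnBeat() is just a placeholder for whatever actually triggers the samples:

// Rough sketch: schedule beats against an absolute Stopwatch timeline,
// so one late wake-up doesn't push every subsequent beat out of sync.
using System;
using System.Diagnostics;
using System.Threading;

class BeatClock
{
    const double BPM = 120.0;
    static readonly double BeatPeriod = 60.0 / BPM; // seconds per beat

    static void Main()
    {
        Stopwatch clock = Stopwatch.StartNew(); // high-resolution, unlike DateTime.Now
        long beatIndex = 0;

        while (true)
        {
            // The next beat is computed from the start time, never from "now",
            // so timing errors can't accumulate.
            double nextBeat = beatIndex * BeatPeriod;
            double remaining = nextBeat - clock.Elapsed.TotalSeconds;

            if (remaining > 0.002)
                Thread.Sleep(1);      // coarse wait; Sleep is only accurate to ~1-15 ms
            else if (remaining > 0.0)
                Thread.SpinWait(100); // busy-wait through the last couple of milliseconds
            else
                OnBeat(beatIndex++);
        }
    }

    static void OnBeat(long beat)
    {
        // Placeholder: trigger the kick, hi-hats, etc. here.
        Console.WriteLine("thump @ beat " + beat);
    }
}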

Edit: Screens (apologies for the glitching - looks like "Print Screen" is capturing the image in mid-refresh)


Attachments:
LinkSphere1.jpg (32.47 KB)
LinkSphere2.jpg (27.61 KB)
LinkSphere4.jpg (52.56 KB)

Comments

Entropy's picture

Quick post to say that work still progresses on LinkSphere, albeit slowly.

I recently installed Windows XP again, which I now dual-boot with Ubuntu. I was running a second thread to keep time for the music, but I discovered that although the beat played steadily in Ubuntu on Mono 2.x, the timing stuttered pretty badly on XP using .NET 3.5. I don't know if this is down to the OS or the efficiency of the runtime environment, but I basically had to rethink the core functionality of the game.

I tried Kamujin's Interleaved Scheduler to see if the yielded CPU time would improve the timing of the music thread, but somehow it was even worse on XP - even the updates per second (which should be constant at 30) varied wildly (I assume it's my code rather than Kamujin's algorithm, but when I get more time I'll be happy to test that further if anyone's interested). Eventually, I did what I should have done in the first place - modified the OpenTK GameWindow class and added a RunBPM method, which is a copy of Run(...) with a few extra lines of code:

#region void RunBPM()
public void RunBPM(double updates_per_second, double frames_per_second, double BPM)
{
    if (disposed) throw new ObjectDisposedException("GameWindow");
    try
    {
        ...
        if (BPM <= 0.0 || BPM > 200.0)
            throw new ArgumentOutOfRangeException("BPM", BPM, "Parameter should be inside the range (0.0, 200.0]");
        ...
        double Target16thBeatPeriod = 60.0 / (BPM * 16.0); // seconds per 1/16 of a beat
 
        Stopwatch update_watch = new Stopwatch(), render_watch = new Stopwatch();
        Stopwatch beat_watch = new Stopwatch();
        double time, time_left, next_render = 0.0, next_update = 0.0, update_time_counter = 0.0;
        double next_16thBeat = 0.0;
        int num_updates = 0;
        ...
        while (!isExiting)
        {
            ProcessEvents();
 
            #region UpdateFrame Checking
            // Raise UpdateFrame events
            ...
            #endregion
 
            #region RenderFrame Checking
            // Raise RenderFrame event
            ...
            #endregion
 
            #region 16thBeat Checking
            time = beat_watch.Elapsed.TotalSeconds;
            if (time > 1.0)
                time = 1.0;
            time_left = next_16thBeat - time;
            if (time_left <= 0.0)
            {
                // time_left is negative here; carrying the overshoot into the
                // next period keeps the average tempo on target.
                next_16thBeat = time_left + Target16thBeatPeriod;
                if (next_16thBeat < -1.0)
                    next_16thBeat = -1.0;
 
                beat_watch.Reset();
                beat_watch.Start();
 
                On16thBeat();
            }
            #endregion
            ...
        }
    }
    ...
}
#endregion
 
#region On16thBeat
public virtual void On16thBeat()
{
    //Override this in child class
}
#endregion

With careful listening, it still sounds a little off every now and again. At 30 ups, 60 fps and 120 BPM, the slight variations between the measured lengths of 16th beats iron themselves out, for the most part. 32nds or 64ths would in theory be even more accurate, but when I go that short, the update and rendering methods start messing up the timing and throw everything out. Setting the thread priority higher doesn't help (quite the contrary, actually).

I'm actually starting to wonder if C# is powerful enough for a music game. Still, for now I've a May deadline for a basic proof-of-concept alpha release for my friend's music tech project, and given that I'll most likely be playing a bunch of one-or-two-bar clips, the resolution should hopefully be further improved (right now I'm playing one-note drum samples on each beat, with hi-hats on the off-beats whenever the Avatar is in motion). Beyond May, if I haven't got the timing perfect, I might consider switching languages (especially if I decide to start looking at synthesising music on-the-fly, which is a concept I've had running around in my head for ages).

If there's one thing I've learnt so far from this, beside the programming skills, it's that our ears keep time much better than our eyes. At 30 ups, the Avatar appears to pulse right on time to the beat.

objarni's picture

You don't have to be concerned at all with the clock of the computer, or FPS, or anything else that has to do with timing. Relying on those will result in exactly the kind of stuttering/off-beat playback you are describing.

What you do is have a looping "buffer"/"sample" that is filled up/updated either in the main loop or by a second thread. That way you don't have to look at the PC clock at all. Stutter-free. The update of the buffer is called "mixing" in sound-programming land.
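Mixing is nothing magic, by the way - for 16-bit PCM it's basically just summing the active clips sample-by-sample and clamping the result. A rough sketch (assuming every clip shares the same sample rate and channel count):

// Rough sketch of software mixing for 16-bit PCM: sum the active clips
// into one output block and clamp to avoid wrap-around distortion.
// Assumes every clip uses the same sample rate and channel count.
static short[] Mix(short[][] clips, int blockLength)
{
    short[] output = new short[blockLength];
    for (int i = 0; i < blockLength; i++)
    {
        int sum = 0;
        foreach (short[] clip in clips)
            if (i < clip.Length)
                sum += clip[i];

        // Clamp to the 16-bit range instead of letting it overflow.
        if (sum > short.MaxValue) sum = short.MaxValue;
        if (sum < short.MinValue) sum = short.MinValue;
        output[i] = (short)sum;
    }
    return output;
}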

Question is: does OpenAL support this kind of buffer approach? I don't know enough OpenAL to answer you -- maybe Inertia or someone else does?

Inertia's picture

The stuttering in playback is a problem related to the default driver; help can be found here: http://www.opentk.com/doc/chapter/1/troubleshooting (this is why it works fine on Ubuntu, but not so well on Windows if you have no Creative Labs card).

There's the command AL.SourceQueueBuffers(), which might be worth taking a look at, but in general you will want to merge as many individual samples into a single one as possible and play around with the pitch/frequency parameters of the source to increase/decrease BPM.
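Roughly along these lines - untested, just to show the idea, with barBuffer standing in for a buffer handle you've already filled with a full bar of merged PCM data:

// One pre-merged bar looping on a single source; the source's pitch
// scales playback speed, so it doubles as a tempo control.
int source = AL.GenSource();
AL.Source(source, ALSourcei.Buffer, barBuffer);
AL.Source(source, ALSourceb.Looping, true);
AL.SourcePlay(source);

// Pitch scales playback speed: 1.0 keeps the original tempo,
// 1.25 turns a 120 BPM bar into 150 BPM.
AL.Source(source, ALSourcef.Pitch, 1.25f);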

An OpenAL context is not like GL: it will keep mixing the sources you specified until their buffers have been played back completely. Games usually only update the sources 20 times per second; with source velocity vectors set correctly, you cannot tell the difference from 100 updates/sec.

objarni's picture

If there's one thing I've learnt so far from this, beside the programming skills, it's that our ears keep time much better than our eyes. At 30 ups, the Avatar appears to pulse right on time to the beat.

Yes that is a key observation!

The information passing through our eyes is like a wide river - it runs slowly, but a lot of water passes per second (graphics).

The information passing through our ears is like a water hose - it runs much faster, but the net effect is a lot less information per second (audio).

That is why you should not be concerned with millisecond timing - it is not high enough precision for our ears! I would hazard we perceive "an incorrect rhythmic beat" even when it differs from the exact beat by less than a millisecond. (BTW, I played drums for a good part of my youth...) It would be interesting to investigate just how much precision our ears have in this regard - for example, playing 16 beats per second and offsetting one of them by some small delta: at how large a delta does the listener start to hear that the beat is "not synched"?

Don't think about clocks - think about bytes instead. How many bytes do you want to send to the sound card per second? How often do you then need to update the buffer? How will you update the buffer - in the main loop/idle event, or in a second thread? And how will you handle synchronisation if you choose the latter?
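To make the numbers concrete (assuming CD-quality stereo audio, which is just my assumption for illustration):

// Back-of-the-envelope figures for CD-quality stereo audio.
int sampleRate     = 44100;                      // sample frames per second
int bytesPerFrame  = 2 * 2;                      // 16-bit samples * 2 channels
int bytesPerSecond = sampleRate * bytesPerFrame; // 176,400 bytes/sec to the card

// At 120 BPM, a 1/16-beat segment lasts 60/(120*16) = 31.25 ms,
// i.e. about 1378 sample frames, or roughly 5.5 KB per segment.
double segmentSeconds = 60.0 / (120.0 * 16.0);
int segmentFrames = (int)(sampleRate * segmentSeconds);
int segmentBytes  = segmentFrames * bytesPerFrame;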

Entropy's picture

So if I have a pre-rendered sample or samples of exactly 2000 ms (one 4/4 bar at 120 BPM) in length, and have the sound card play that as a loop, uploading any necessary modifications (I could even have multiple buffers and alternate their playback/upload), then I don't have to worry about accurately scheduling my AL.SourcePlay() calls?

Thanks for your great advice, Inertia and objarni. Sorry for being a bit dense with this. OpenAL is a completely new and alien land to me.

the Fiddler's picture

If you haven't done so already, check out OpenAL Soft. Its Windows implementation is much more reliable than the one found on Creative's website.

objarni's picture

Entropy - since I'm (almost) a complete noob at OpenAL, take my advice regarding that API with extreme care, but here we go:

1. The SourcePlay() API starts playing a "source" somewhere in space, which is associated with a "buffer" (often known as a sample in sound-programming land), right?
2. So you should call SourcePlay() once and only once per application boot (or level).
3. You should then concentrate on updating/modifying the actual bytes of the "buffer" assigned to that source.

Whether this is possible in OpenAL is a question I cannot answer. Why not just try it?

Entropy's picture

Please disregard my post above. I've been wracking my brains about this, and I think I've finally got it. Thanks again for your advice.

When loading my samples from sound files, I can store the buffers in system memory, and "chop them up" into short segments - perhaps 1/16 beats in length. Then, to play them back, I simply queue them to the sources in sequence. The only thing is that when a source doesn't have anything to play back, I'll need to queue "blank" buffers of the same length to keep them playing (and therefore keep everything in sync). That way, I only call AL.SourcePlay() once for each source when starting up.

I should be able to queue several buffers in advance, as everything's happening more or less to the beat - player actions and their accompanying "sounds" included - so any delay simply adds to the effect of keeping the rhythm. I should be able to squeeze that into the main thread, but if it becomes an issue, I'll handle the OpenAL context in a separate thread.

Stop me if any of the above sounds wrong.
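In code, I'm picturing something roughly like this for each source - an untested sketch, where pending holds the handles of upcoming 1/16-beat segment buffers and silenceBuffer is a pre-filled buffer of silence (both placeholder names):

// Untested sketch: keep a few segments queued ahead on each source,
// topping up with silence when there's nothing to play, so every
// source stays in lock-step. Queue<int> is System.Collections.Generic.
void TopUpSource(int source, Queue<int> pending, int silenceBuffer)
{
    // Reclaim buffers OpenAL has finished playing (the real thing
    // would recycle and refill these).
    int processed;
    AL.GetSource(source, ALGetSourcei.BuffersProcessed, out processed);
    while (processed-- > 0)
        AL.SourceUnqueueBuffer(source);

    // Keep roughly four segments queued ahead of the play cursor.
    int queued;
    AL.GetSource(source, ALGetSourcei.BuffersQueued, out queued);
    while (queued++ < 4)
        AL.SourceQueueBuffer(source, pending.Count > 0 ? pending.Dequeue() : silenceBuffer);

    // If the source ever starved and stopped, kick it off again.
    if (AL.GetSourceState(source) != ALSourceState.Playing)
        AL.SourcePlay(source);
}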

Entropy's picture

@objarni Thanks again :-) just saw your last post after posting the above. The tricky part of using just one source is mixing the data of the buffered samples into a single buffer. OpenAL should be able to handle that in theory, as that's effectively what it's doing when it plays output from multiple sources, but unless I can figure out how to do that, I'll need to use separate sources. I don't know exactly how much overhead is involved in calling unmanaged functions from managed code, but if it's audibly noticeable for multiple consecutive AL.SourcePlay() calls, I could write a one-function C "library" to call alSourcePlay() for all the sources at once from unmanaged code.

the Fiddler's picture

I don't know exactly how much overhead is involved in calling unmanaged functions from managed code, but if it's audibly noticeable for multiple consecutive AL.SourcePlay() calls, I could write a one-function C "library" to call alSourcePlay() for all the sources at once from unmanaged code.
Don't bother, the cost is a few ns per function call (typically 8-20 ns on a modern machine, with Mono being a little faster than .NET).