Norris

OpenAL: generate tones at specific frequencies, and white noise

Hi!
Is it possible, using OpenAL, to generate tones (sine waves... sorry, I don't know how to say it well in English) at specific frequencies?
Also, is it possible to send a different sound to each side (stereo)?
I really want to generate them, not load .wav or .mp3 files.
Thanks.


Comments

smiley80

You can generate a sine wave like this:

using System;
using OpenTK.Audio.OpenAL;
 
public sealed class SineWave : IDisposable
{
	private const int sampleRate = 44100;
 
	private readonly short[] audioData;
	private readonly int buffer;
	private readonly int source;
 
	public SineWave(double frequency, int seconds)
	{
		this.buffer = AL.GenBuffer();
		this.source = AL.GenSource();
 
		int frames = seconds * sampleRate;
		this.audioData = new short[frames];
		for (int i = 0; i < frames; i++)
		{
			this.audioData[i] = (short)(short.MaxValue * Math.Sin((2 * Math.PI * frequency) / sampleRate * i));
		}
	}
 
	public void Dispose()
	{
		// Delete the source before the buffer; a buffer that is still
		// attached to a source cannot be deleted.
		AL.SourceStop(this.source);
		AL.DeleteSource(this.source);
		AL.DeleteBuffer(this.buffer);
	}
 
	public void Play()
	{
		AL.BufferData(this.buffer, ALFormat.Mono16, this.audioData, this.audioData.Length * 2, sampleRate);
		AL.Source(this.source, ALSourcei.Buffer, this.buffer);
		AL.SourcePlay(this.source);
	}
}
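
To actually hear anything, an OpenAL device and context must be open before 'Play' is called. A minimal usage sketch, assuming OpenTK 3.x and its 'AudioContext' class (the 440 Hz / 2 second values are just example parameters, not part of the code above):

using System.Threading;
using OpenTK.Audio;
 
class Program
{
	static void Main()
	{
		// Opens the default playback device and makes its context current.
		using (AudioContext context = new AudioContext())
		using (SineWave wave = new SineWave(440.0, 2))
		{
			wave.Play();
			// Keep the process alive while the two-second tone plays.
			Thread.Sleep(2000);
		}
	}
}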

For stereo, the audio data has to be interleaved, i.e. first a left-channel sample, then a right-channel sample. And you have to use one of the 'Stereo*' formats in 'BufferData'. E.g. generating a different tone for each channel would look like this:

public sealed class StereoSineWave : IDisposable
{
	private const int sampleRate = 44100;
 
	private readonly short[] audioData;
	private readonly int buffer;
	private readonly int source;
 
	public StereoSineWave(double leftFrequency, double rightFrequency, int seconds)
	{
		this.buffer = AL.GenBuffer();
		this.source = AL.GenSource();
 
		// Two samples per frame: left channel first, then right, interleaved.
		int samples = seconds * sampleRate * 2;
		this.audioData = new short[samples];
		for (int i = 0, x = 0; i < samples; x++)
		{
			this.audioData[i++] = (short)(short.MaxValue * Math.Sin((2 * Math.PI * leftFrequency) / sampleRate * x));
			this.audioData[i++] = (short)(short.MaxValue * Math.Sin((2 * Math.PI * rightFrequency) / sampleRate * x));
		}
	}
 
	public void Dispose()
	{
		// Delete the source before the buffer; a buffer that is still
		// attached to a source cannot be deleted.
		AL.SourceStop(this.source);
		AL.DeleteSource(this.source);
		AL.DeleteBuffer(this.buffer);
	}
 
	public void Play()
	{
		AL.BufferData(this.buffer, ALFormat.Stereo16, audioData, audioData.Length * 2, sampleRate);
		AL.Source(this.source, ALSourcei.Buffer, this.buffer);
		AL.SourcePlay(this.source);
	}
}
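
The original question also mentions white noise; that only changes how the buffer is filled. A rough sketch ('GenerateWhiteNoise' is just a hypothetical helper name, and uniform random samples are one simple way to get white noise):

	// Fills a Mono16-style buffer with uniform random samples instead of a sine.
	private static short[] GenerateWhiteNoise(int seconds)
	{
		const int sampleRate = 44100;
		Random rng = new Random();
		short[] data = new short[seconds * sampleRate];
		for (int i = 0; i < data.Length; i++)
		{
			// Uniform value over the full 16-bit range; Next's upper bound is exclusive.
			data[i] = (short)rng.Next(short.MinValue, short.MaxValue + 1);
		}
		return data;
	}

The resulting array can then be handed to 'AL.BufferData' with 'ALFormat.Mono16' exactly like 'audioData' above.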
Norris

Hi smiley80, and many thanks for your code!

I wasn't expecting complete code, but that's very kind of you! Now I have everything I need to explore and understand.

I know nothing about OpenAL and have never read the specification (I will for sure; I'm not here to copy/paste without understanding). But I have read some DirectSound equivalents, and the author uses two buffers: one for the left channel and one for the right. I think it's to prevent some shift in the buffer. Do you think that's the reason?
It seems like a good idea, but I really don't know why.

Many thanks. I will try this :)

Heeeuuuu... sorry for my bad English!