Audio Streaming
Learn how to stream and buffer audio data
Overview
flutter_soloud supports streaming audio data as it is received in real time. The supported audio data formats are raw PCM and Opus-compressed audio in an Ogg container, handled by the Opus and Ogg libraries from Xiph.org. This is particularly useful for:
- Streaming audio from network sources
- Generating audio data on-the-fly
- Processing audio in chunks
- Pausing automatically when buffering is needed and resuming playback when enough data is available
- Working with OpenAI or other streaming APIs
The Opus and Ogg libraries are embedded by default in flutter_soloud. However, if you don't need streaming capabilities, please read the Without Opus/Ogg section for how to exclude these libraries from your app.
Buffer Stream Setup
Initialize an audio stream:
final stream = SoLoud.instance.setBufferStream(
  maxBufferSizeBytes: 1024 * 1024 * 10, // 10MB of max buffer (not allocated)
  bufferingType: BufferingType.preserved,
  bufferingTimeNeeds: 2.0, // 2 seconds to buffer before unpausing
  sampleRate: 44100,
  channels: Channels.stereo,
  format: BufferType.s16le,
);
Parameters:
Parameter | Description |
---|---|
maxBufferSizeBytes | Maximum buffer size in bytes. When this limit is reached while adding audio data, the stream is considered ended (as if setDataIsEnded had been called). Playback will stop at this point unless looping is enabled. Internally, all data is stored as floats, regardless of input format. This does not allocate memory upfront; it only limits the total data that can be added. |
maxBufferSizeDuration | Alternative to maxBufferSizeBytes; specifies the maximum buffer size as a duration (in seconds), calculated using sampleRate and channels. No memory is allocated upfront. |
bufferingType | Determines how buffering works during playback (see the modes below). BufferingType.preserved: keeps all audio data in memory, allows multiple playback instances, and supports seeking and looping. BufferingType.released: frees memory of already played data, allows only a single playback instance, and must be manually disposed. |
bufferingTimeNeeds | Buffering time required (in seconds). If playback reaches the end of the current buffer, it will pause and wait until enough data is buffered to cover this time. Note: with BufferingType.released, the stream position is always 0; use getStreamTimeConsumed to get the elapsed time. |
sampleRate | Sample rate for playback (e.g., 22050 or 44100 Hz). For the opus format, valid values are 48000, 24000, 16000, 12000, or 8000 Hz. Incoming data is resampled to this rate. |
channels | Number of audio channels. The opus format supports only mono and stereo. |
format | Audio data format. Options: f32le, s8, s16le, s32le, or opus (Opus codec with Ogg container). Opus supports 48, 24, 16, 12, and 8 kHz sample rates and mono/stereo. |
onBuffering | Callback triggered when buffering starts (isBuffering = true) and ends (isBuffering = false). Receives the playback handle and the current buffered time (in seconds). |
Preserved Mode
final stream = await SoLoud.instance.setBufferStream(
  bufferingType: BufferingType.preserved,
  // ...other parameters
);
- Keeps all audio data in memory
- Allows multiple playback instances
- Supports seeking and looping
- Higher memory usage
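For instance, a preserved stream like the one above can be played more than once and sought freely. A minimal sketch (the volume and seek position are just illustrative):
final voiceA = await SoLoud.instance.play(stream);
final voiceB = await SoLoud.instance.play(stream, looping: true);

// Each handle can be controlled independently.
SoLoud.instance.setVolume(voiceB, 0.5);

// Seeking is possible because all buffered data is kept in memory.
SoLoud.instance.seek(voiceA, const Duration(seconds: 5));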
Released Mode
final stream = await SoLoud.instance.setBufferStream(
  bufferingType: BufferingType.released,
  // ...other parameters
);
- Frees played audio data
- The position of the stream is always 0
- The seek method is not supported
- Single playback instance only
- Lower memory usage
- Must be manually disposed
WARNING: as you can see, the position of the stream in released mode is always at the start. This means that getPosition always returns 0. To get the time already played, use the getStreamTimeConsumed method instead (it effectively represents the current playback position). Also, seek is not supported in released mode.
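To read how much audio has already been played in released mode, query the stream itself. A minimal sketch, assuming getStreamTimeConsumed takes the stream and returns a Duration:
final handle = await SoLoud.instance.play(stream);

// getPosition(handle) would report 0 here, so ask the stream how much
// audio has been consumed so far instead.
final elapsed = SoLoud.instance.getStreamTimeConsumed(stream);
print('already played: ${elapsed.inSeconds} s');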
Please look at the example/lib/buffer_stream/simple_noise_stream.dart example for a simple implementation showing how audio streaming works.
Supported Formats
- s8 - Signed 8-bit PCM
- s16le - Signed 16-bit PCM (little endian)
- s32le - Signed 32-bit PCM (little endian)
- f32le - 32-bit float PCM (little endian)
- opus - Opus codec with Ogg container
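Whichever PCM format you choose, the data is always handed to the stream as raw bytes (Uint8List). As a sketch, 32-bit float samples for an f32le stream can be passed by viewing the typed list's backing bytes (all Flutter targets are little endian), reusing a stream configured as above:
final samples = Float32List.fromList([0.0, 0.25, -0.25, 0.5]);
SoLoud.instance.addAudioDataStream(stream, samples.buffer.asUint8List());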
Adding Audio Data
// Add audio data to the stream
SoLoud.instance.addAudioDataStream(
  stream,
  audioChunk, // Uint8List of audio data
);

// Mark the stream as complete
SoLoud.instance.setDataIsEnded(stream);
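As a sketch of feeding data in chunks, here is one way to push a raw PCM file into a stream while it is being read from disk. The file path is hypothetical, and the stream is assumed to be configured for the file's sample rate, channels, and format:
import 'dart:io';
import 'dart:typed_data';

import 'package:flutter_soloud/flutter_soloud.dart';

Future<void> feedFromFile(AudioSource stream) async {
  // Forward each chunk to the stream as soon as it is read.
  await for (final chunk in File('voice_s16le.pcm').openRead()) {
    SoLoud.instance.addAudioDataStream(stream, Uint8List.fromList(chunk));
  }
  // No more data will arrive for this stream.
  SoLoud.instance.setDataIsEnded(stream);
}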
Buffer Management
// Get current buffer size
final size = SoLoud.instance.getBufferSize(stream);
// Reset the buffer
SoLoud.instance.resetBufferStream(stream);
Example: Network Streaming
// Create a WebSocket connection
final socket = await WebSocket.connect('wss://audio-stream.example.com');

// Set up the audio stream
final stream = SoLoud.instance.setBufferStream(
  bufferingType: BufferingType.released,
  format: BufferType.opus,
  onBuffering: (isBuffering, handle, time) {
    // When isBuffering==true, the stream is paused automatically until it has
    // buffered bufferingTimeNeeds seconds of audio data, or until setDataIsEnded
    // is called, or maxBufferSizeBytes is reached. When isBuffering==false,
    // playback resumes.
    print('Buffering: $isBuffering, Time: $time');
  },
);

// Start playback
final handle = await SoLoud.instance.play(stream);

// Listen for audio data
socket.listen((data) {
  if (data is List<int>) {
    SoLoud.instance.addAudioDataStream(
      stream,
      Uint8List.fromList(data),
    );
  }
});
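When the connection closes, it is a good idea to mark the stream as finished so playback is not left waiting in a buffering pause. A minimal sketch, reusing the socket and stream above:
// Once the WebSocket is done, tell the stream that no more data will arrive.
await socket.done;
SoLoud.instance.setDataIsEnded(stream);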
Example: PCM Generation
import 'dart:math';
import 'dart:typed_data';

import 'package:flutter_soloud/flutter_soloud.dart';

@pragma('vm:entry-point')
Future<AudioSource> generatePCM() async {
  final pcmStream = SoLoud.instance.setBufferStream(
    maxBufferSizeBytes: 1024 * 1024,
    format: BufferType.s16le,
    channels: Channels.mono,
    sampleRate: 44100,
  );

  // Generate one second of a 440 Hz sine wave as signed 16-bit PCM
  final buffer = Int16List(44100);
  for (var i = 0; i < buffer.length; i++) {
    buffer[i] = (sin(2 * pi * 440 * i / 44100) * 32767).toInt();
  }

  // Add to stream
  SoLoud.instance.addAudioDataStream(
    pcmStream,
    buffer.buffer.asUint8List(),
  );
  SoLoud.instance.setDataIsEnded(pcmStream);

  return pcmStream;
}
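A minimal usage sketch for the function above:
final source = await generatePCM();
final handle = await SoLoud.instance.play(source);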