
Is Qt capable of small-buffer low-latency audio applications (e.g. soft synth)?

  • I have been playing with / hacking the Spectrum example to create my own soft-synth. I have hit an issue that the audio glitches if I put the buffer size below about 5000 samples (125ms, at 44.1kHz sample rate). This is about 20x the length of buffer I want to achieve.

    I posted a question on StackOverflow and 2 respondents basically said that Qt is the wrong tool for low latency audio, even if I were to make the app multithreaded. I'd be grateful for some genuine insight from Qt developers and the Qt team.

    (NB: I'm not interested in 'my platform is better than yours' responses - I want to genuinely know if Qt is the right tool for the job here, from people who know Qt and understand non-blocking, high priority, low latency programming. I already love Qt for its QML + JavaScript GUI building side.)

  • Lifetime Qt Champion


    If you want to reach Qt developers you should rather post this question on the interest mailing list; you'll find Qt's developers/maintainers there (this forum is more user oriented).

    As for your original question, low latency is not currently available with Qt, but that doesn't mean you can't write your application with Qt. You can use a framework like PortAudio to handle the audio part of your application and write the rest with Qt (they play well together).

    Hope it helps

  • @SGaist Thanks for the heads up about the mailing list.

    Your comments about wrapping PortAudio echo the response on StackOverflow (suggesting the same thing but with RTAudio). It's useful to hear this confirmed.

  • Hi
    I can also confirm that Qt and RtAudio cooperate well.
    I gave up on QtMultimedia; with RtAudio I was able to get a 64-frame buffer and a callback in the RtAudio thread, from which a Qt signal was emitted.
    All together it works flawlessly.
    Good luck!

  • Hi. Thank you both for your responses. I know that the audio will need to be kept in a separate high-priority thread, but I can't see how to integrate RtAudio/PortAudio with Qt and with this threading.

    Could you give some advice? Or is there sample code I could look at?

    Many thanks,

  • I will try, feel free to ask if I'm not clear enough.

    I suggest using RtAudio - it works over PulseAudio/JACK/ALSA as well as the native Windows/Mac audio APIs

    First of all, try some simple RtAudio example from their site:

    You will receive the input audio data by implementing a callback function.
    The callback is a static method, so to be able to do something in your Qt class from there, create a static instance member in your class:

    class MyAudioInput : public QObject
    {
      Q_OBJECT
    public:
      MyAudioInput(QObject *parent = nullptr) : QObject(parent)
      {
        m_instance = this;
        m_buffer1 = new float[64];
        m_buffer2 = new float[64];
        m_currentBuffer = m_buffer1;
        m_readyBuffer = nullptr;
        m_posInBuffer = 0;
        m_thread = new QThread();
        connect(m_thread, &QThread::started, this, &MyAudioInput::process);
      }

      ~MyAudioInput()
      {
        delete[] m_buffer1;
        delete[] m_buffer2;
        m_instance = nullptr;
        delete m_thread;
      }

      static MyAudioInput* instance() { return m_instance; }

      // RtAudio callback: runs in RtAudio's own (real-time) thread
      static int inputCallBack(void* /*outBuffer*/, void *inBuffer,
                               unsigned int nBufferFrames, double /*streamTime*/,
                               RtAudioStreamStatus /*status*/, void* /*userData*/)
      {
        float *in = static_cast<float*>(inBuffer);
        for (unsigned int i = 0; i < nBufferFrames; i++) {
          // copy audio data to the current buffer
          instance()->m_currentBuffer[instance()->m_posInBuffer++] = in[i];
          if (instance()->m_posInBuffer == 64) { // switch buffers when full
            instance()->m_posInBuffer = 0;
            if (instance()->m_currentBuffer == instance()->m_buffer1) {
              instance()->m_currentBuffer = instance()->m_buffer2;
              instance()->m_readyBuffer = instance()->m_buffer1;
            } else {
              instance()->m_currentBuffer = instance()->m_buffer1;
              instance()->m_readyBuffer = instance()->m_buffer2;
            }
            // start processing in the separate thread
            instance()->m_thread->start();
          }
        }
        return 0;
      }

    public slots:
      void process()
      {
        // do something with m_readyBuffer here
        // and/or emit dataReady() to inform the rest of your app
      }

    signals:
      void dataReady();

    private:
      static MyAudioInput* m_instance;
      float *m_buffer1, *m_buffer2;
      float *m_currentBuffer, *m_readyBuffer;
      int m_posInBuffer;
      QThread *m_thread; // processing thread
    };

    Then you may open the RtAudio stream like this:

      RtAudio adc;
      RtAudio::StreamParameters parameters;
      parameters.deviceId = adc.getDefaultInputDevice();
      parameters.nChannels = 2;
      parameters.firstChannel = 0;
      unsigned int sampleRate = 44100;
      unsigned int bufferFrames = 64; // 64 sample frames
      try {
        adc.openStream( NULL, &parameters, RTAUDIO_FLOAT32,
                        sampleRate, &bufferFrames, &MyAudioInput::inputCallBack );
        adc.startStream();
      } catch ( RtAudioError &e ) {
        e.printMessage();
      }

    When the dataReady() signal is handled, m_readyBuffer points to the data to be processed, and with a 64-frame buffer that processing has to finish within about 2 ms (before the other buffer fills up again), so it is probably better to use a bigger buffer size or more buffers; I'm using 512 samples.
    But to get this working you will have to add some "real" things to this example.
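    The buffer-swapping trick above can be sketched independently of RtAudio and Qt. This is a simplified illustration (the class name is mine, not from the thread), and a real version would need atomics or a lock-free queue to hand the full buffer safely between the audio thread and the processing thread:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal double-buffer: the audio callback pushes samples into the
// "current" buffer; when it fills, the buffers swap and the full one
// is returned as "ready" for the processing thread.
class DoubleBuffer {
public:
    explicit DoubleBuffer(std::size_t frames)
        : m_a(frames), m_b(frames), m_current(&m_a), m_ready(nullptr), m_pos(0) {}

    // Returns the buffer that just filled up, or nullptr if not full yet.
    const std::vector<float>* push(float sample) {
        (*m_current)[m_pos++] = sample;
        if (m_pos < m_current->size())
            return nullptr;
        m_pos = 0;
        m_ready = m_current;                           // full buffer is ready
        m_current = (m_current == &m_a) ? &m_b : &m_a; // switch buffers
        return m_ready;
    }

private:
    std::vector<float> m_a, m_b;
    std::vector<float> *m_current;
    const std::vector<float> *m_ready;
    std::size_t m_pos;
};
```

    While one buffer is handed to the processing side, the callback keeps writing into the other, which is exactly why the processing must finish within one buffer duration.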

  • Thank you! That's an awesome help. I shall take a look at the RTAudio examples and play with the sample code you've posted.

  • Lifetime Qt Champion

    Depending on your application design, you might want to consider implementing a custom QIODevice for easier integration with Qt's API (see QAudioInput/QAudioOutput)

  • In my experience, QtMultimedia is a bit unpredictable.
    If you set the audio buffer with QAudioInput::setBufferSize(someValue), it is never set to the desired value, and the value forced by QtMultimedia is quite big, as mentioned in the first post.

    Reading the value back after opening the device with QAudioInput::bufferSize() is good for nothing either, if you expect to receive a buffer's worth of data with each readyRead() signal: usually every emission of readyRead() delivers a different amount of audio data.

    Still, it is possible to manage all of the above, but with RtAudio every callback delivers exactly the declared buffer size of data. Rt = Real Time, and it is.
    Simply put, it is much easier to work with.

    Anyway, I don't mean to dismiss QtMultimedia. I came back to it under Android and it is bearable...
    I wrote a blog post on how to manage a QIODevice with audio output:
    I hope it may be useful for someone.

  • Lifetime Qt Champion

    I just meant QAudioInput as an example of the API, not that you necessarily have to use it. I.e. you get (or provide) a QIODevice when you want the audio data, and then you can use the usual Qt APIs to be notified and to read the data.
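    For illustration, here is a Qt-free sketch of that pull pattern (the class name is hypothetical; a real implementation would subclass QIODevice, override readData()/writeData(), and emit readyRead() after each append):

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <deque>

// The audio callback appends raw bytes; the GUI side pulls whatever
// is available, QIODevice-style. A real version also needs thread
// safety, since the two sides run on different threads.
class AudioDeviceSketch {
public:
    void append(const char *data, long long len) {    // audio-thread side
        m_bytes.insert(m_bytes.end(), data, data + len);
    }
    long long readData(char *dst, long long maxlen) { // consumer side
        long long n = std::min<long long>(maxlen, m_bytes.size());
        std::copy(m_bytes.begin(), m_bytes.begin() + n, dst);
        m_bytes.erase(m_bytes.begin(), m_bytes.begin() + n);
        return n;
    }
    long long bytesAvailable() const { return (long long)m_bytes.size(); }
private:
    std::deque<char> m_bytes;
};
```

    The payoff of wrapping the audio source this way is that the rest of the application only sees a generic device it can read from, as with any other Qt I/O class.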

  • @SGaist After I posted, I thought the same:
    It would be a cool thing to have an RtAudio wrapper as a QIODevice
    ...but I'm too small yet to do such a thing :-)

  • Lifetime Qt Champion

    Why too small ?

  • @SGaist Well...
    I wrote about my solution for combining Qt and RtAudio, and it just works, but it is a bit head-breaking for me to work out how to manage it with a QIODevice.
    Maybe after some time.

  • Hi @paulmasri, sorry if this is not relevant any more, but I'm curious: why did you want to decrease the buffer size?
    As I understand it, if the buffer size is too small, the backend plays it back faster than it is refilled with new data, which produces glitches/pops/noise.

    Also, I read this in the post from SO: "In practice I find I get glitches if the buffer is less than around 100ms. That's way too long for good responsiveness."

    Could you please explain why it is too long? Thanks

  • @VaL-Doroshchuk
    A "big buffer" is quite relative; how big depends on your needs.
    In my case I use the audio data for pitch recognition, and depending on the pitch range to be recognized I use a 512-, 1024- or 2048-sample buffer, i.e. roughly 11, 23 or 46 ms. Those power-of-two buffer sizes are required by the FFT routines.
    But my app is flexible enough that when the underlying OS cannot keep that buffer size, it will portion out the incoming audio in whatever size the pitch detection algorithm needs.
    (On Android, Qt audio is used, and with a low-end device the delay between data-ready calls can sometimes be around 100 ms.)
    However, a smaller buffer gives a faster app response (displaying the pitch in the score), and no glitches were noticed even with the buffer set to 64 frames (about 2 ms).

    And to take a simpler example, passing microphone input straight to an output device: the quicker we send the incoming data to the output, the less delay there is at the speakers. Smaller buffer, faster response.
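    The latency figures in this thread all come from the same arithmetic, duration = frames / sampleRate; a quick sanity check (the helper name is mine):

```cpp
#include <cassert>
#include <cmath>

// Duration of one audio buffer in milliseconds.
double bufferMs(unsigned int frames, unsigned int sampleRate) {
    return 1000.0 * frames / sampleRate;
}
```

    At 44.1 kHz this gives about 11.6 ms for 512 frames, 23.2 ms for 1024, 46.4 ms for 2048, and roughly 1.45 ms for 64 frames (the "2 ms" above is a round-up).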
