
Cannot display hw accelerated h264 frames in a custom VideoWidgetSurface



  • Hi everybody,
    I'm trying to display hardware-decoded h264 frames on my custom video widget surface using QMediaPlayer. I'm using GStreamer 1.0 as the backend, with all GStreamer plugins installed (bad, ugly, good, omx 1.16.1).

    Hardware: raspberry pi 3 b+
    Qt: Qt version 5.13.2
    Platform Mode: eglfs

    When I play the media file on the command line it plays smoothly, using the GPU to decode, and the GStreamer pipeline dot file looks correct: omxh264dec outputs video/x-raw(memory:GLMemory) in RGBA format, as expected.

    gst-play-1.0  ~/simpsons.mp4
    

    gst-play_omxh264dec.png

    But when I play the same media file from my application, nothing is shown and the GStreamer pipeline dot file looks different: omxh264dec outputs plain video/x-raw in another format (I420).
    myapplication_omxh264dec.png

    I'm stuck on this. I suspect an OpenGL sink is missing. Is something like qtvideosink the right element to fill the gap and deliver the frames to my custom video widget surface?

    Thanks in advance.
    Stay Safe.
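
    For context, the surface is wired up to the player roughly like this (a minimal sketch, not the actual project code; the class name and chosen pixel formats are illustrative):

    ```cpp
    #include <QMediaPlayer>
    #include <QAbstractVideoSurface>
    #include <QVideoSurfaceFormat>

    // Minimal custom surface; a real implementation would paint the frames.
    class VideoWidgetSurface : public QAbstractVideoSurface
    {
    public:
        QList<QVideoFrame::PixelFormat> supportedPixelFormats(
                QAbstractVideoBuffer::HandleType type) const override
        {
            // Illustrative formats for plain system-memory frames.
            if (type == QAbstractVideoBuffer::NoHandle)
                return { QVideoFrame::Format_RGB32, QVideoFrame::Format_YUYV };
            return {};
        }

        bool present(const QVideoFrame &frame) override
        {
            // Render the frame here (paint, upload to a texture, ...).
            Q_UNUSED(frame);
            return true;
        }
    };

    // Usage:
    //   QMediaPlayer player;
    //   VideoWidgetSurface surface;
    //   player.setVideoOutput(&surface);
    //   player.setMedia(QUrl::fromLocalFile("/home/pi/simpsons.mp4"));
    //   player.play();
    ```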



  • AFAIK, you can specify your own custom pipeline in QtMM now (see the documentation). Also, as an option, try the same in QML instead of Qt Widgets. You could also try upgrading Qt to 5.14.x.

    PS: IMHO, QtMM is a dangerous thing! :)



  • @kuzulis, thanks for the tip. I've upgraded to Qt 5.14.2, hoping that the support for the memory:GLMemory sink capability coded in qgstvideorenderersink (part of the Qt Multimedia gsttools, shown below) would work, but it didn't.

    GstCaps *QGstDefaultVideoRenderer::getCaps(QAbstractVideoSurface *surface)
    {
    #if QT_CONFIG(gstreamer_gl)
        if (QGstUtils::useOpenGL()) {
            m_handleType = QAbstractVideoBuffer::GLTextureHandle;
            auto formats = surface->supportedPixelFormats(m_handleType);
            // Even if the surface does not support gl textures,
            // glupload will be added to the pipeline and GLMemory will be requested.
            // This will lead to upload data to gl textures
            // and download it when the buffer will be used within rendering.
            if (formats.isEmpty()) {
                m_handleType = QAbstractVideoBuffer::NoHandle;
                formats = surface->supportedPixelFormats(m_handleType);
            }
    
            GstCaps *caps = QGstUtils::capsForFormats(formats);
            for (guint i = 0; i < gst_caps_get_size(caps); ++i)
                gst_caps_set_features(caps, i, gst_caps_features_from_string("memory:GLMemory"));
    
            return caps;
        }
    #endif
        return QGstUtils::capsForFormats(surface->supportedPixelFormats(QAbstractVideoBuffer::NoHandle));
    }
    

    If the above code executes as expected (gstreamer_gl enabled), the qtvideosink element should provide a sink pad with video/x-raw(memory:GLMemory) in every format returned by QAbstractVideoSurface::supportedPixelFormats(). Am I right? I've also checked my Qt Multimedia config header (qtmultimedia-config_p.h) in the /usr/include/QtMultimedia/5.14.2/QtMultimedia/private folder.

    #define QT_FEATURE_alsa -1
    #define QT_FEATURE_directshow -1
    #define QT_FEATURE_evr -1
    #define QT_FEATURE_gpu_vivante -1
    #define QT_FEATURE_gstreamer_1_0 1
    #define QT_FEATURE_gstreamer 1
    #define QT_FEATURE_gstreamer_0_10 -1
    #define QT_FEATURE_gstreamer_app 1
    #define QT_FEATURE_gstreamer_encodingprofiles 1
    #define QT_FEATURE_gstreamer_gl 1
    #define QT_FEATURE_gstreamer_imxcommon -1
    #define QT_FEATURE_gstreamer_photography 1
    #define QT_FEATURE_linux_v4l 1
    #define QT_FEATURE_openal -1
    #define QT_FEATURE_pulseaudio -1
    #define QT_FEATURE_resourcepolicy -1
    #define QT_FEATURE_wasapi -1
    #define QT_FEATURE_wmf -1
    #define QT_FEATURE_wmsdk -1
    #define QT_FEATURE_wshellitem -1
    

    According to the QMediaPlayer documentation: "If QAbstractVideoSurface is used as the video output, qtvideosink can be used as a video sink element directly in the pipeline. After that the surface will receive the video frames in QAbstractVideoSurface::present()." The only requirement is that QAbstractVideoSurface::supportedPixelFormats() returns QVideoFrame::Format_YUYV.
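
    Taken together with the getCaps() source above, the surface would also have to advertise formats for the GLTextureHandle handle type so the gstreamer_gl branch is taken. A hedged sketch of the override (the exact format choices are illustrative assumptions, not from the documentation):

    ```cpp
    #include <QAbstractVideoSurface>

    // Sketch: advertise GL-texture formats so the gstreamer_gl branch of
    // QGstDefaultVideoRenderer::getCaps() requests memory:GLMemory caps,
    // with YUYV/RGB32 as the plain system-memory fallback.
    QList<QVideoFrame::PixelFormat> VideoWidgetSurface::supportedPixelFormats(
            QAbstractVideoBuffer::HandleType type) const
    {
        if (type == QAbstractVideoBuffer::GLTextureHandle)
            return { QVideoFrame::Format_RGB32, QVideoFrame::Format_BGR32 };
        return { QVideoFrame::Format_YUYV, QVideoFrame::Format_RGB32 };
    }
    ```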

    If I call QMediaPlayer::setMedia() without passing a custom pipeline, I get the result described in the first post. However, if I call the same method forcing the omxh264dec element to use its video/x-raw(memory:GLMemory) src pad, as in the code below, I get an error saying it can't link the two elements.

    m_mediaPlayer.setMedia(QUrl("gst-pipeline:  qtdemux ! h264parse ! queue ! omxh264dec ! video/x-raw(memory:GLMemory) ! qtvideosink"), buffer);
    

    The error is:

    Error: " qtdemux ! h264parse ! queue ! omxh264dec ! video/x-raw(memory:GLMemory) ! qtvideosink" : "could not link omxh264dec-omxh264dec0 to qgstvideorenderersink1, qgstvideorenderersink1 can't handle caps video/x-raw(memory:GLMemory)"
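
    The failure is consistent with the sink only negotiating system-memory caps. One thing that might be worth trying (an untested sketch, not a confirmed fix) is to let glupload/glcolorconvert produce the GLMemory buffers in front of the caps filter, and to spell out the format in the caps string:

    ```cpp
    // Hypothetical variant: glupload/glcolorconvert sit before the caps
    // filter so something in the pipeline can actually hand the sink
    // memory:GLMemory buffers in RGBA.
    m_mediaPlayer.setMedia(QUrl(
        "gst-pipeline: qtdemux ! h264parse ! queue ! omxh264dec "
        "! glupload ! glcolorconvert "
        "! video/x-raw(memory:GLMemory),format=RGBA ! qtvideosink"), buffer);
    ```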

    Unfortunately, QML is not an option for now; I rely on Qt Widgets. I don't know if anybody else has already faced the same issue. On desktop the code executes perfectly without setting a custom pipeline.

    Thanks for your attention.



  • Hi all,
    in case someone else faces the same issue: I managed to get the decoded frames through the GStreamer pipeline by disabling sync in Qt's default pipeline (hard-coded in gsttools inside the qtmultimedia module) and by enabling GStreamer OpenGL support with the environment variable QT_GSTREAMER_USE_OPENGL_PLUGIN=1.
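
    The environment variable has to be set before the GStreamer backend is loaded; one way (a sketch, assuming it is set from inside the application rather than the shell) is at the top of main():

    ```cpp
    #include <QApplication>

    int main(int argc, char *argv[])
    {
        // Must be set before QMediaPlayer loads the GStreamer backend.
        qputenv("QT_GSTREAMER_USE_OPENGL_PLUGIN", "1");

        QApplication app(argc, argv);
        // ... create the player, surface and widgets here ...
        return app.exec();
    }
    ```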

    Now I can play the decoded frames. But another problem emerged from that: small-resolution videos play smoothly, while anything >= 720p takes a lot of CPU time because the frame is copied from GPU to CPU memory when QVideoFrame::map() is called.

    So, is there a faster way to grab the frames, one that doesn't take so much time?
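
    Since the frames arrive with a GLTextureHandle, one way to avoid map() entirely (a sketch, assuming rendering happens in a GL context that shares the frame's texture) is to draw the texture directly instead of reading the pixels back:

    ```cpp
    // Sketch: in QAbstractVideoSurface::present(), use the GL texture id
    // directly instead of mapping the frame into CPU memory.
    bool VideoWidgetSurface::present(const QVideoFrame &frame)
    {
        if (frame.handleType() == QAbstractVideoBuffer::GLTextureHandle) {
            GLuint texId = frame.handle().toUInt();
            // Bind texId and draw a textured quad with the widget's own
            // shader program; no glReadPixels / map() involved.
            drawTexturedQuad(texId);   // hypothetical helper
            return true;
        }
        return false;
    }
    ```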

    I came across Video Core Shared Memory (VCSM), but I simply can't get it to work; my VCSM buffer stays empty during the whole rendering time. Is the texture id that gets passed in valid, or is it empty?

    In my QAbstractVideoSurface::start() method I initialize VCSM and create an EGLImageKHR.

    ...
       int w = Util::nextPOT(size.width());
       int h = Util::nextPOT(size.height());
    
        vcsm_info.width = w;
        vcsm_info.height = h;
        vcsm_init();
    
    
        eglFbImage = eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT, EGL_IMAGE_BRCM_VCSM, &vcsm_info, NULL);
    
    
        if (eglFbImage == EGL_NO_IMAGE_KHR || vcsm_info.vcsm_handle == 0) {
            qCDebug(LOG_RESERV) << QString("%1: Failed to create EGL VCSM image\n").arg(VCOS_FUNCTION);
        }else{
            qCDebug(LOG_RESERV) << QString(" *********** VCSM Image created with %1 %2 ").arg(w).arg(h);
        }
    ...
    

    Then, later, in my QAbstractVideoSurface::present() method I try to grab the content of the passed texture, but it doesn't work at all :(.

    QOpenGLFunctions* f = ctx->functions();
    GLuint    framebuffer;
    GLuint    depthRenderbuffer;
    GLint prevFbo;
    GLenum status = GL_FRAMEBUFFER_COMPLETE;
    GLuint texture = static_cast<GLuint>( currentFrame.handle().toInt() );
    
    int texWidth = Util::nextPOT(currentFrame.width());
    int texHeight = Util::nextPOT(currentFrame.height());
    
    
    
    GLCHK(f->glGetIntegerv( GL_FRAMEBUFFER_BINDING, &prevFbo ));
    GLCHK(f->glGenFramebuffers(1, &framebuffer));
    GLCHK(f->glBindFramebuffer(GL_FRAMEBUFFER, framebuffer));
    GLCHK(glActiveTexture(GL_TEXTURE0));
    GLCHK(glBindTexture(GL_TEXTURE_2D, texture));
    GLCHK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST));
    GLCHK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST));
    GLCHK(glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglFbImage));
    
    GLCHK(f->glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0));
    
    GLCHK(glBindTexture(GL_TEXTURE_2D, 0));
    status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if(status != GL_FRAMEBUFFER_COMPLETE) {
      qCDebug(LOG_RESERV) <<"Problem with OpenGL framebuffer after specifying color render buffer: " << status;
    } else {
     qCDebug(LOG_RESERV) << "FBO creation succedded";
    }
    
    GLCHK(f->glFinish());
    
    uint8_t *vcsmBuffer;
    VCSM_CACHE_TYPE_T cacheType;
    
    vcsmBuffer = (uint8_t*)vcsm_lock_cache(vcsm_info.vcsm_handle, VCSM_CACHE_TYPE_HOST, &cacheType);
    
    if (!vcsmBuffer){
          qCDebug(LOG_RESERV) << "Failed to lock VCSM buffer!";
    }else{
          unsigned char *line_start = (unsigned char *) vcsmBuffer;
          for (int i = 0; i < texHeight; i++) {
                 QString s;
                 QString result = "";
                  for(int j=0; j < texWidth; j++){
                         s = QString("%1").arg(line_start[j],0,16);
                         result.append(s);
                  }
                 qCDebug(LOG_RESERV) << result;
                 line_start += texWidth;
           }
     }
    vcsm_unlock_ptr(vcsmBuffer);
    ....
    
    

    All the OpenGL calls succeed, but vcsmBuffer stays empty.
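
    A possible explanation (an assumption, not verified): calling glEGLImageTargetTexture2DOES on the frame's texture replaces that texture's storage with the VCSM-backed EGL image, throwing away the decoded frame before anything is read. The usual pattern is to back a *separate* texture with the EGL image, attach that one to the FBO, and then draw the frame's texture into it, so the pixels actually land in the VCSM buffer. A rough sketch, reusing f, eglFbImage and texture from the snippet above (drawFullScreenQuad is a hypothetical helper):

    ```cpp
    // Sketch: render the frame texture INTO an FBO whose color attachment
    // is a separate texture backed by the VCSM EGL image.
    GLuint fbTexture;                 // new texture backed by the VCSM image
    glGenTextures(1, &fbTexture);
    glBindTexture(GL_TEXTURE_2D, fbTexture);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglFbImage);

    GLuint framebuffer;
    f->glGenFramebuffers(1, &framebuffer);
    f->glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    f->glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_TEXTURE_2D, fbTexture, 0);

    // Draw the frame's texture (from currentFrame.handle()) as a
    // full-screen quad into this FBO; after glFinish() the result
    // should be readable through vcsm_lock_cache().
    glBindTexture(GL_TEXTURE_2D, texture);
    drawFullScreenQuad();             // hypothetical helper
    f->glFinish();
    ```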

    If someone could give me any tip, I would be grateful.
    Thanks in advance.

