
Memory overflow with Qt 5.1x and Intel GPU graphics



  • Hello everyone,

    We currently use Qt 5.1x on an Intel NUC with:

    • Intel(R) Core(TM) i7-8559U CPU @ 2.70GHz

    • Intel(R) HD Graphics (Coffeelake 3x8 GT3)

    • OpenGL ES 3.2 Mesa 17.2.3

    The app displays pictures using the QML Image element and a QQuickImageProvider on 2 windows.
    We are currently experiencing a serious memory consumption problem (all memory ends up consumed).

    Has anyone ever encountered this problem with the Intel HD Graphics?

    Your help is really welcome!
    Thanks!


  • Lifetime Qt Champion

    Hi,

    Qt 5.1x is a bit vague; what exact version are you using?
    Can you show the code you are using?
    Did you try updating or downgrading your graphics card driver?
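
    If you are unsure which driver and renderer Qt actually picks up at runtime,
    you can make the scene graph print that information at startup (a minimal
    sketch; you can also just export QSG_INFO=1 in the shell before launching):

        // Print Qt Quick scene graph / OpenGL driver information at startup.
        #include <QGuiApplication>
        #include <QtGlobal>

        int main(int argc, char *argv[])
        {
            qputenv("QSG_INFO", "1"); // must be set before QGuiApplication is constructed
            QGuiApplication app(argc, argv);
            // ... load the QML scene as usual ...
            return app.exec();
        }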



  • Hello, the versions:

    Version: Qt 5.10.1-0-201802092256
    CentOS Linux release 7.5.1804
    OpenGL version string: 3.0 Mesa 17.2.3
    00:02.0 VGA compatible controller: Intel Corporation Device 3ea5 (rev 01)

    The OpenGL stack and the Intel GPU driver seem to be strongly linked.
    The Intel driver is the one shipped with this CentOS release.
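
    Here is a simplified sketch of the pattern we use (not our exact code; the
    provider id "frames", the image size and the refresh interval are illustrative):

        // main.cpp -- simplified reproduction sketch of our setup
        #include <QGuiApplication>
        #include <QQmlApplicationEngine>
        #include <QQuickImageProvider>
        #include <QImage>
        #include <QColor>

        // The real provider returns freshly generated frames; here we fill a
        // dummy image so the sketch stays self-contained.
        class FrameProvider : public QQuickImageProvider
        {
        public:
            FrameProvider() : QQuickImageProvider(QQuickImageProvider::Image) {}

            QImage requestImage(const QString &id, QSize *size, const QSize &) override
            {
                QImage img(1280, 720, QImage::Format_RGB32);
                img.fill(QColor::fromHsv(id.toInt() % 360, 255, 255));
                if (size)
                    *size = img.size();
                return img;
            }
        };

        int main(int argc, char *argv[])
        {
            QGuiApplication app(argc, argv);

            // The Image re-reads its source whenever the URL changes, so a
            // counter in the URL fetches a new frame from the provider.
            const QByteArray qml =
                "import QtQuick 2.10\n"
                "import QtQuick.Window 2.10\n"
                "Window {\n"
                "    visible: true; width: 1280; height: 720\n"
                "    Image {\n"
                "        id: img\n"
                "        property int frame: 0\n"
                "        anchors.fill: parent\n"
                "        cache: false\n"
                "        source: \"image://frames/\" + frame\n"
                "        Timer { interval: 40; running: true; repeat: true\n"
                "                onTriggered: img.frame++ }\n"
                "    }\n"
                "}\n";

            // Two windows, as in our 2-screen setup (in the real app each
            // window is moved to its own screen).
            QQmlApplicationEngine engineA, engineB;
            for (QQmlApplicationEngine *engine : { &engineA, &engineB }) {
                engine->addImageProvider(QLatin1String("frames"), new FrameProvider);
                engine->loadData(qml);
            }
            return app.exec();
        }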


  • Lifetime Qt Champion

    Can you check whether you have the same phenomenon with a more recent version of Qt? And if not, with an older one?



  • Hello,

    We have the same problem with Qt 5.8, Qt 5.11.0, and Qt 5.11.1 with 2 screens and 2 windows.

    With 1 screen and 1 window there is no problem.


  • Lifetime Qt Champion

    Is it not a problem, or is the leak just taking way longer to eat all the memory?



  • It is a big problem.
    With 2 screens, each time an image is updated the memory is consumed and not freed.
    After 1 minute, 32 GB of memory are consumed; then the machine swaps and can no longer run.
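
    To quantify the growth we watch the process resident set size. A minimal
    version of that kind of logging (Linux-specific sketch; the helper name and
    the 1-second interval are illustrative):

        // memwatch.cpp -- log the resident set size once per second
        #include <QCoreApplication>
        #include <QByteArray>
        #include <QDebug>
        #include <QFile>
        #include <QTimer>

        // Parse VmRSS (resident set size, in kB) out of /proc/self/status.
        static qlonglong residentKiB()
        {
            QFile f(QStringLiteral("/proc/self/status"));
            if (!f.open(QIODevice::ReadOnly))
                return -1;
            while (!f.atEnd()) {
                const QByteArray line = f.readLine();
                if (line.startsWith("VmRSS:"))
                    return line.mid(6).trimmed().split(' ').first().toLongLong();
            }
            return -1;
        }

        int main(int argc, char *argv[])
        {
            QCoreApplication app(argc, argv);

            // On the 2-screen setup this number climbs continuously until
            // RAM and swap are exhausted.
            QTimer timer;
            QObject::connect(&timer, &QTimer::timeout,
                             [] { qDebug() << "RSS:" << residentKiB() << "kB"; });
            timer.start(1000);
            return app.exec();
        }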


  • Lifetime Qt Champion

    What I meant was: is it really "not a problem" with only 1 screen and 1 window, or is the memory leak just way slower in that setup?



  • With only 1 screen and 1 window we don't see the memory leak.

