
Advice for development plan of a new Qt application.



  • Hello, this is my first post here and I have to say how excited I am to build a new piece of software in Qt. I plan to keep it simple to start with, but I have some pretty big long-term goals, and I'm hoping to get a bit of feedback on a development plan that will scale well going forward.

    I plan on building a sort of sandbox for real-time/interactive image and data processing: a system that can load custom plugins that ingest data (image, video, sound, motion sensors, depth sensors, web API sources, etc.), process that data, and pass the results on to other plugins for further processing and formatting, which in turn pass their results on to device output plugins (modules that output to displays, APIs, robotics, etc.).

    A few of the basic requirements are:

    • UI must not block data processors

    • various data processors must not block each other if they run in parallel or if they are pulling from a feed that is blocking

    • must be able to monitor data in real time (at all stages: input, processing, output), i.e. switch monitoring sources

    • need to be able to adjust various parameters of data processors in real time

    • need to transfer graphics data from one context to another

    • initially will only need to run on Windows but ideally it can be run on other systems such as mobile (in a limited capacity) and Linux in the future.
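For the real-time parameter adjustment requirement above, one common pattern is to hold each tunable value in an atomic, so the UI thread can write it while a processing thread reads it every iteration without locking. A minimal sketch in plain C++ (the struct and field names here are hypothetical, not part of any Qt API; in Qt you would more likely expose these as properties):

```cpp
#include <atomic>

// Hypothetical parameter block shared between the UI thread (writer)
// and a data-processing thread (reader). Atomics make each individual
// read/write safe without a mutex; the processor simply loads the
// current values at the top of every iteration.
struct ProcessorParams {
    std::atomic<float> gain{1.0f};  // adjusted live from a UI slider
    std::atomic<int>   mode{0};     // adjusted live from a UI combo box
};
```

This only covers independently-updated scalars; if several parameters must change together atomically, you would swap a whole immutable parameter object instead.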

    I've done some experimentation with both QtWidgets and QML and am now leaning towards a QML-based UI implemented as part of an MVC program structure. I think QtQuick is the better solution (especially with any mobile plans), but I'm just looking to hear the thoughts of some Qt veterans out there before going too far down any path.

    Will QtQuick controls run fast? Can I display multiple contexts at the same time without a huge performance impact? Can I output streams of floating-point data and text to a display efficiently? My biggest concern is to build a system where the UI does not block the real-time data processors; my second biggest concern is that the UI is fast enough that it isn't sluggish and can display a relatively accurate representation of what is going on at a decent framerate.

    Any thoughts would be greatly appreciated!

    thanks
    Keith


  • Moderators

    Hi @keithlostracco, and welcome!

    First, could you tell us more about your software development experience? What languages and frameworks are you most familiar with (if any)?

    Your requirements about "non-blocking" are generally covered by using a multithreaded architecture for your program.
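For example, the classic producer/consumer handoff behind that advice can be sketched in plain C++ (no Qt types; `ResultQueue` is a hypothetical name): a processing thread pushes results into a small thread-safe queue, and the UI thread drains it with a non-blocking `try_pop()`, so neither side waits on the other.

```cpp
#include <mutex>
#include <queue>
#include <utility>

// Minimal thread-safe result queue (hypothetical helper, not a Qt class).
// A processing thread calls push(); the UI thread polls with try_pop(),
// which returns immediately, so a slow producer can never stall the UI.
template <typename T>
class ResultQueue {
public:
    void push(T value) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(value));
    }

    // Non-blocking: returns false right away if no result is ready.
    bool try_pop(T& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return false;
        out = std::move(queue_.front());
        queue_.pop();
        return true;
    }

private:
    std::mutex mutex_;
    std::queue<T> queue_;
};
```

The UI side would drain this once per frame (e.g. from a timer) and update its models; in a real Qt program, a queued signal/slot connection between threads achieves the same decoupling.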

    Qt Quick is best used for the UI. However, you might still want to implement resource-intensive and performance-critical parts of your program in C++.

    Things like speed/performance/framerate cannot be meaningfully addressed until you provide more specific details like:

    • What is the maximum data rate that your system must handle?
    • What do you want to display? Is it just text, graphs, and small images, or 4K 60 fps video streams too?
    • What kind of hardware and operating systems (including their versions) will you support? A program that runs fast on a Google Pixel 3 might be sluggish on a Google Nexus One (or it might not even run at all).


  • Hi @JKSH thanks for the reply!

    To answer a few of your questions:

    First, could you tell us more about your software development experience? What languages and frameworks are you most familiar with (if any)?

    • My biggest skill set is developing software components in a realtime data and graphics processing environment called TouchDesigner. TD is a graphic node-based environment (OpenGL based) that leverages Python, GLSL and custom C++ operators to extend its basic functionality. I've built some pretty large tools with it, such as a realtime performance and interactive media server called Luminosity, which can run individual instances of itself, each bound to a single GPU (either on the same machine or across multiple machines), playing very large canvases of video in sync across many display devices. E.g. a system running 20+ projectors, connected to 4 machines each with 3 GPUs, playing back and generating multiple layers of 16k x 8k+ content would not be an unreasonable task for it. I've also built a number of GLSL-based raytracing, fluid and particle dynamics solvers.

    • I would say I'm quite fluent in Python, GLSL, and mathematics/algorithms in relation to graphics; I have a decent handle on C++, although I'm not quite as fluent as I am in Python.

    Qt Quick is best used for the UI. However, you might still want to implement resource-intensive and performance-critical parts of your program in C++.

    • Yes, the plan was to use Qt Quick in its own thread as UI only, then leverage C++ for all internal processing.
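One way to keep a processing thread from ever waiting on the UI (and vice versa) is a "latest value" mailbox: the producer publishes its newest frame, and anything the UI didn't get to in time is simply dropped. A rough sketch in plain C++ (`LatestValue` is a hypothetical helper, not a Qt class; in Qt you might instead emit a queued signal carrying the frame):

```cpp
#include <atomic>
#include <memory>

// "Latest value" mailbox (hypothetical helper, not a Qt type).
// The processing thread publishes its newest result; older, unread
// results are discarded. The UI thread takes whatever is current at
// its own framerate, so neither side ever blocks on the other.
template <typename T>
class LatestValue {
public:
    void publish(T value) {
        auto* fresh = new T(std::move(value));
        // Swap in the new value and free whatever the UI never consumed.
        delete latest_.exchange(fresh, std::memory_order_acq_rel);
    }

    // Returns the newest published value, or nullptr if nothing new
    // has arrived since the last take().
    std::unique_ptr<T> take() {
        return std::unique_ptr<T>(
            latest_.exchange(nullptr, std::memory_order_acq_rel));
    }

    ~LatestValue() { delete latest_.load(); }

private:
    std::atomic<T*> latest_{nullptr};
};
```

Dropping stale frames like this matches a monitoring UI well: the display only ever needs the most recent state, not every intermediate one.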

    What is the maximum data rate that your system must handle?

    • Typically the hardware will be the ultimate bottleneck; it is not unreasonable to max out the fastest GPU/CPU currently available for a given project. Usually we scale resolution per node in order to maintain 60 fps. This could be 8k x 4k video in a single process if we're simply playing back a single file, or as low as 1280x720 if it is some sort of generative process.

    What do you want to display? Is it just text, graphs, and small images, or 4K 60 fps video streams too?

    • The UI itself would need to display:
      • Control panels (text fields, sliders, knobs, buttons etc...)
      • data streams - i.e. streams of floating-point values, either as arrays of floats displayed as text or as line graphs (the values will update every frame, or at least as fast as the thread that owns them produces them)
      • Video monitoring of processes rendering in the background. There will be a few different video processing type plugins
        • An OpenCV type that we will have full control over and that will output a cv::Mat (CPU), which can be scaled before being copied to the GPU in the Quick OpenGL context.
        • A TouchDesigner plugin that will be loaded using provided plugin SDK. The TouchDesigner component will support texture sharing in both OpenGL and DirectX hosts.
        • CUDA/OpenCL base image plugins
        • Python/Keras/TensorFlow/OpenCV (Python) type plugin for processing various data sources
        • final video output will likely be handled by TouchEngine (the TouchDesigner plugin SDK), which will not have a user interface but will simply output full-screen images bound to the particular GPUs/displays the TouchEngine plugin is running on.
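For the per-frame float streams feeding a line graph, one simple structure is a fixed-capacity ring buffer that keeps only the most recent N samples; each repaint the UI copies out a snapshot rather than sharing live storage with the producer. A sketch in plain C++ (`SampleRing` is a hypothetical name, not a Qt type):

```cpp
#include <cstddef>
#include <vector>

// Fixed-capacity ring buffer holding the most recent N float samples,
// e.g. one channel of a per-frame data stream driving a line graph.
// Old samples are overwritten, so memory use stays constant no matter
// how long the stream runs.
class SampleRing {
public:
    explicit SampleRing(std::size_t capacity) : data_(capacity, 0.0f) {}

    void push(float sample) {
        data_[head_] = sample;
        head_ = (head_ + 1) % data_.size();
        if (count_ < data_.size())
            ++count_;
    }

    // Oldest-to-newest snapshot, ready to hand to a plotting widget.
    std::vector<float> snapshot() const {
        std::vector<float> out;
        out.reserve(count_);
        std::size_t start = (head_ + data_.size() - count_) % data_.size();
        for (std::size_t i = 0; i < count_; ++i)
            out.push_back(data_[(start + i) % data_.size()]);
        return out;
    }

private:
    std::vector<float> data_;
    std::size_t head_ = 0;   // next write position
    std::size_t count_ = 0;  // number of valid samples (<= capacity)
};
```

In a threaded setup the ring would still need a mutex or a handoff like the mailbox above; the point here is just that the graph only ever needs a bounded window of recent history.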

    What kind of hardware and operating systems (including their versions) will you support?

    • Initially this will be strictly a Windows application, but eventually (after the prototype is built and more developers are working on the project) I would like to create a build that runs on Linux (no TE functionality, but OpenCV) and mobile builds with much more limited capabilities, likely connected to a server doing all the heavy lifting.
