
Game Development

What can we say - we like games. And you can use Qt to write some. Questions? Ask here.
837 Topics 4.0k Posts
  • Pause and Resume a QTimer

    8
    1 Votes
    8 Posts
    2k Views
    B

    @Chris-Kawa You know what I just realized... the QTimer::singleShot method doesn't even work because you won't know the remaining time if you pause again during the single shot.
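
    This is exactly the gap a pausable wrapper has to fill: remember how much of the interval is left when you pause. A minimal sketch of that idea, assuming a single-shot QTimer and tracking QTimer::remainingTime() by hand (PausableTimer is a hypothetical helper, not code from the thread):

    #include <QTimer>

    // Hypothetical helper: wraps a single-shot QTimer and remembers
    // QTimer::remainingTime() so the timer can be paused and resumed.
    class PausableTimer
    {
    public:
        QTimer &timer() { return m_timer; }          // connect to QTimer::timeout() as usual

        void start(int msec)
        {
            m_timer.setSingleShot(true);
            m_timer.start(msec);
        }

        void pause()
        {
            if (!m_timer.isActive())
                return;
            m_remainingMs = m_timer.remainingTime(); // capture what is left of the interval
            m_timer.stop();
        }

        void resume()
        {
            if (m_remainingMs > 0)
                m_timer.start(m_remainingMs);        // restart with only the leftover time
        }

    private:
        QTimer m_timer;
        int m_remainingMs = 0;
    };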

  • 0 Votes
    2 Posts
    262 Views
    uani

    Appears to be a single-precision floating-point issue, since that is what Qt3D uses internally.

  • 0 Votes
    4 Posts
    433 Views
    uani

    Appears to have been due to the single-precision floating point used by Qt3D internally.

  • QCameraLens fieldOfView is ... geometrically in Qt :

    Solved
    1
    0 Votes
    1 Posts
    236 Views
    No one has replied
  • Bad performance cube example

    Unsolved
    4
    0 Votes
    4 Posts
    513 Views
    P

    Ok, so I've added v-sync by using QOpenGLWindow and its frameSwapped signal. Some good news and some bad.

    V-sync solved one issue. At first, even with v-sync, I was still measuring many delays, but I found that some of those were actually measurement errors. QDateTime::currentMSecsSinceEpoch() is not accurate enough, so I'm using QElapsedTimer now. Also, the code I used to test somehow interfered with the measurement. When I just printed qDebug() << elapsedTimer.elapsed() - lastUpdateRotation; without any branching, and did the analysis in Excel, I found that almost all frames took 16-17 ms, i.e. exactly 60 fps :)

    It didn't solve the frame skips on the RTX 2060. I'm getting none on the Surface's iGPU (ok, I got a few above 20 ms, but all of these were still below 30 ms), but for some reason the desktop with the beefy GPU still skips 0.7% of frames. And those are above 40 ms, so really noticeable; the max is 100 ms.

    I'm measuring on the frameSwapped signal now, so I can't blame the Qt event loop for doing something between frameSwapped and paint. It might be something between the driver and the GPU, but I'm still somewhat lost on this part.
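
    For reference, a minimal sketch of the measurement setup described above (the class and member names are illustrative): timing frame-to-frame intervals on QOpenGLWindow::frameSwapped with QElapsedTimer rather than QDateTime::currentMSecsSinceEpoch().

    #include <QOpenGLWindow>
    #include <QElapsedTimer>
    #include <QDebug>

    class FrameTimedWindow : public QOpenGLWindow
    {
    public:
        FrameTimedWindow()
        {
            m_frameClock.start();
            connect(this, &QOpenGLWindow::frameSwapped, this, [this] {
                // restart() returns the milliseconds since the previous swap and resets the clock
                qDebug() << m_frameClock.restart();
                update();   // request the next frame so the swap chain keeps running
            });
        }

    private:
        QElapsedTimer m_frameClock;
    };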

  • QScreenRayCaster->addLayer(...) has no effect

    Unsolved
    1
    0 Votes
    1 Posts
    279 Views
    No one has replied
  • QtWheelEvent* in signal from handler: should I delete it?

    Solved
    2
    0 Votes
    2 Posts
    260 Views
    jsulm

    @uani No, you do not have to delete the events - those are handled by Qt.

  • 0 Votes
    3 Posts
    303 Views
    uani

    @e2002e yes, this is how the mouse pointer y coordinate is delivered, but when I use it for a screen ray cast it hits on the opposite vertical side of the entity.

    I now subtract the mouse pointer y from the viewport height to get the QScreenRayCaster hit where expected.
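
    A tiny sketch of that fix, assuming Qt3DRender::QScreenRayCaster::trigger(const QPoint &) is the entry point (function and variable names are illustrative):

    #include <Qt3DRender/QScreenRayCaster>
    #include <QPoint>

    // Qt delivers mouse coordinates with y growing downwards, so flip y against
    // the viewport height before triggering the screen ray cast.
    void castAtMouse(Qt3DRender::QScreenRayCaster *rayCaster, const QPoint &mousePos, int viewportHeight)
    {
        const QPoint flipped(mousePos.x(), viewportHeight - mousePos.y());
        rayCaster->trigger(flipped);   // starts the pick at the corrected position
    }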

  • 0 Votes
    5 Posts
    449 Views
    uani

    Oh, I'm sorry, I was referring to https://forum.qt.io/topic/140839/qtexturematerial-error-unable-to-find-suitable-texture-unit-for-diffusetexture-ogl-sampler-diffusetexture-wasn-t-set-on-material-rendering-might-not-work-as-expected-dx by "texture issue", where I posted about an issue almost too embarrassing to mention again. I would put more research into this one, but I have already used more time than I envisioned for my project and wanted to make progress on it.

  • 0 Votes
    2 Posts
    652 Views
    uani

    my bad:

    ((Qt3DExtras::QTextureMaterial)material).setTexture(t);

    should have been

    ((Qt3DExtras::QTextureMaterial*)material)->setTexture(t);

    :facepalm:

    I'm sorry for bothering you.

    However, I'm still not receiving the "status changed" signal.
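
    A hedged sketch of the corrected call with qobject_cast instead of a C-style cast, plus a connection to the texture's statusChanged signal (assuming material really is a Qt3DExtras::QTextureMaterial and t is a Qt3DRender::QAbstractTexture*, as in the post):

    #include <Qt3DExtras/QTextureMaterial>
    #include <Qt3DRender/QAbstractTexture>
    #include <Qt3DRender/QMaterial>
    #include <QDebug>

    void applyTexture(Qt3DRender::QMaterial *material, Qt3DRender::QAbstractTexture *t)
    {
        if (auto *texMaterial = qobject_cast<Qt3DExtras::QTextureMaterial *>(material)) {
            texMaterial->setTexture(t);
            // Report loading progress; statusChanged fires as the texture moves through its states.
            QObject::connect(t, &Qt3DRender::QAbstractTexture::statusChanged,
                             t, [](Qt3DRender::QAbstractTexture::Status status) {
                qDebug() << "texture status changed:" << int(status);
            });
        }
    }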

  • 0 Votes
    4 Posts
    411 Views
    Chris Kawa

    I mean MSVC as the compiler

    Ok, so you asked why you can't use it.
    Well, you can.

  • Troubles with Arduino to Qt.

    Unsolved
    2
    0 Votes
    2 Posts
    252 Views
    SGaist

    Hi and welcome to devnet,

    One way could be to use the serial port of your Arduino to communicate with the computer.
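
    A minimal sketch of that approach (the port name and baud rate are assumptions, adjust them to your board; needs QT += serialport): reading whatever the Arduino prints over its serial connection with QSerialPort.

    #include <QCoreApplication>
    #include <QSerialPort>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QSerialPort port;
        port.setPortName("/dev/ttyACM0");           // e.g. "COM3" on Windows
        port.setBaudRate(QSerialPort::Baud9600);    // must match Serial.begin() in the Arduino sketch

        if (!port.open(QIODevice::ReadOnly)) {
            qWarning() << "failed to open port:" << port.errorString();
            return 1;
        }

        QObject::connect(&port, &QSerialPort::readyRead, [&port] {
            qDebug().noquote() << port.readAll();   // raw bytes sent by the Arduino
        });

        return app.exec();
    }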

  • 0 Votes
    5 Posts
    854 Views
    M

    @JoeCFD thank you Joe,
    I found the solution on GitHub:
    https://github.com/jonaias/DynamicFontSizeWidgets

  • 0 Votes
    2 Posts
    251 Views
    SGaist

    Hi,

    Did you try to use qWaitForWindowExposed?
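
    A small sketch of where that call would go, inside a hypothetical Qt Test class (requires QT += testlib):

    #include <QtTest>
    #include <QWidget>

    class WindowTest : public QObject
    {
        Q_OBJECT
    private slots:
        void windowIsExposed()
        {
            QWidget widget;
            widget.show();
            // Blocks until the window system has actually exposed the window
            // (or the default timeout expires), so geometry checks are safe afterwards.
            QVERIFY(QTest::qWaitForWindowExposed(&widget));
        }
    };

    QTEST_MAIN(WindowTest)
    // If this class lives in a .cpp file, also add the corresponding #include "*.moc" line.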

  • Do we still need to call XFlush in Qt 6?

    Unsolved
    4
    0 Votes
    4 Posts
    416 Views
    J.Hilk

    @mhn2 AFAIK Q_WS_X11 was a Qt 4 define; it was replaced by the much better Qt Platform Abstraction (QPA) system.
    https://doc.qt.io/qt-6/qpa.html

    X11 support still exists; you can now find it documented here:
    https://doc.qt.io/qt-6/linux.html
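
    A hedged sketch, assuming Qt 6.2+ running on the xcb platform plugin: if some code still needs the raw Display* (e.g. to call XFlush itself), it is reachable through the native interface rather than the old Q_WS_X11 route.

    #include <QGuiApplication>
    #include <X11/Xlib.h>

    void flushX11()
    {
        // Returns nullptr when the application is not running on the xcb platform.
        auto *x11 = qGuiApp->nativeInterface<QNativeInterface::QX11Application>();
        if (x11)
            XFlush(x11->display());
    }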

  • 0 Votes
    2 Posts
    298 Views
    8Observer8

    I should use DrawPolygon to draw segments of colliders when I use boxes to draw borders around game objects. DrawSegment() will be called when an instance of b2EdgeShape is created:

    b2EdgeShape edgeShape;
    edgeShape.SetOneSided(b2Vec2(0.f, 0.f), b2Vec2(1.f, 0.f), b2Vec2(2.f, 0.f), b2Vec2(3.f, 0.f));
    m_pEdgeBody = m_pWorld->CreateBody(&bdef);
    m_pEdgeBody->CreateFixture(&edgeShape, 2.f);
  • 1 Votes
    4 Posts
    341 Views
    8Observer8

    Usually, this implies some sort of pointer problem.

    @newQOpenGLWidget thank you! You pointed me in the right direction. I forgot to get the uMvpMatrix location in the ColliderEdge class. Next time I ask a question I will include a simple example.

    Note: the quoted answer above is from Stack Overflow.
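
    A sketch of the check that would have caught this, using the QOpenGLShaderProgram API (the function is illustrative; uMvpMatrix is the uniform name mentioned above):

    #include <QOpenGLShaderProgram>
    #include <QMatrix4x4>
    #include <QDebug>

    void setMvp(QOpenGLShaderProgram &program, const QMatrix4x4 &mvp)
    {
        const int mvpLocation = program.uniformLocation("uMvpMatrix");
        if (mvpLocation == -1) {                    // -1 means the uniform was not found
            qWarning() << "uMvpMatrix not found - wrong name or optimized out of the shader";
            return;
        }
        program.bind();
        program.setUniformValue(mvpLocation, mvp);  // upload the model-view-projection matrix
    }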

  • 0 Votes
    4 Posts
    357 Views
    Chris Kawa

    That last one - bloom - is a postprocess effect and has nothing to do with shadows. Also, there's no raytracing involved in the shadow technique described in the previous link.

    In the shadow-mapping technique described there you render the scene multiple times.
    For each light you set up a model-view-projection matrix to match the light source's point of view, i.e. you render the scene from the position and direction of your spotlight, so that it's a circle that fills the framebuffer. Each light gets its own depth buffer like that. You don't need a color buffer in that pass; only the depth information is of interest here. This basically "encodes" the distance of each object to each light in the depth buffer. That depth buffer is called a shadowmap.

    Then, in the second pass, you bind these shadowmaps as input, and when you draw your scene to the actual framebuffer you reproject the 3D position of the pixel you're currently drawing into light-space coordinates and compare the resulting depth value with the one in the shadowmap. If it's smaller, the pixel is in direct light. If it's larger, it's in shadow (see the sketch at the end of this post).

    For the shadowmap you use a square depth texture, and as for its size, games usually have a shadow quality preset for that, e.g. low/medium/high. The bigger the texture the less jaggy the shadows will be, but it's also going to be slower because there's more depth information to write. Games usually use shadowmaps somewhere in the range 512x512 to 2048x2048 (remember that power-of-two sized textures are much faster on the GPU). Some types of games can get away with smaller ones, but you have to experiment to see what looks good for you.

    As for when the shadowmaps are rendered - it depends. If your scene is static and your lights don't move you can render them once at the start and reuse them. If you want the lights to move or have moving objects in the scene you'll need to re-render the shadowmaps more often.

    Games usually do multiple different optimizations to make shadowmap rendering less costly. First of all, limit the maximum number of dynamic light sources, i.e. render shadows only for a couple of the closest lights. Detect whether anything in the light cone changed (i.e. something moved into or out of it) and only re-render that shadowmap then. Limit the amount of moving lights. Render only the most important objects into the shadowmaps and don't include those that contribute little to the whole image.
    But these are optimizations for later. For starters I suggest trying a single shadowmap for the closest light and rendering the whole scene into it, so you can learn how to do it.
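
    A minimal sketch of the second-pass comparison described above, written as a GLSL fragment shader kept in a C++ raw string; the uniform and varying names (uShadowMap, uLightMvp, vWorldPos) and the bias value are illustrative, not from the post.

    // Depth-compare step of shadow mapping: reproject the fragment into light space
    // and test its depth against the shadowmap rendered from the light's point of view.
    static const char *shadowCompareFrag = R"(
    #version 330 core
    uniform sampler2D uShadowMap;   // depth texture rendered from the light
    uniform mat4 uLightMvp;         // the light's model-view-projection matrix
    in vec3 vWorldPos;              // world-space position of this fragment
    out vec4 fragColor;

    void main()
    {
        vec4 lightClip = uLightMvp * vec4(vWorldPos, 1.0);
        vec3 ndc = lightClip.xyz / lightClip.w;     // [-1, 1] range
        vec3 shadowCoord = ndc * 0.5 + 0.5;         // [0, 1] for the texture lookup

        float closestDepth = texture(uShadowMap, shadowCoord.xy).r;
        float bias = 0.002;                         // avoids self-shadowing ("acne")
        float lit = (shadowCoord.z - bias <= closestDepth) ? 1.0 : 0.3;

        fragColor = vec4(vec3(lit), 1.0);           // darken fragments that are in shadow
    }
    )";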

  • how to transfer data from one interface to another

    Unsolved
    5
    0 Votes
    5 Posts
    443 Views
    D

    Set a layout on the widget that is showing,
    then add the QLineEdit to that layout using addWidget().
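
    A minimal sketch of that advice (widget and variable names are illustrative): install a layout on the page that is showing, then add the QLineEdit through addWidget().

    #include <QWidget>
    #include <QVBoxLayout>
    #include <QLineEdit>

    void addLineEdit(QWidget *page)
    {
        auto *layout = new QVBoxLayout(page);   // passing the parent installs the layout on 'page'
        auto *lineEdit = new QLineEdit(page);
        layout->addWidget(lineEdit);            // the layout now manages the line edit's geometry
    }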

  • How is Lighting determined?

    Solved
    6
    0 Votes
    6 Posts
    500 Views
    Chris Kawa

    Like if I used a Phong GLSL type of shading, does that automatically use the BRDF rendering?

    The method of rendering and the BRDF selection are two separate topics. All of the rendering methods can use the same BRDF. To make it super simple, the BRDF is the equation you have in your vertex/pixel shader for calculating a color. The rendering method is about how you get to invoke those shaders.

    The first method is slow for certain scenarios, like tons of lights, but is superb for others. Modern game engines usually mix two or all three of these methods for different types of objects. For example, you could have deferred lighting as the base method for most solid objects, a forward pass for translucent stuff like glass, and raytracing for global illumination (indirect lighting). Up to about a decade ago pretty much all popular engines used mostly forward renderers, but nowadays everything is a hybrid of multiple techniques.

    But all that can be a bit overwhelming, so don't try to jump on all of these at once. Pick one and implement that for a start.

    how do I select which rendering style?

    It's like choosing between bubble sort and quick sort. They are just different algorithms suitable for different tasks, so you pick the one that is best for your situation. You can use both if you want.

    The forward renderer is the easier one. For each light you set its properties as shader constants and just draw each object with a shader that implements e.g. the Phong equation using those constants (a bare-bones sketch follows at the end of this post). You draw directly to the framebuffer (assuming you don't want to do any postprocessing later like blur, bloom, chromatic aberration etc.).

    The deferred approach, on the other hand, is a two-pass algorithm. In the first pass you set up a framebuffer with multiple attachments, one texture for each element of a g-buffer. You render each object and the shader writes information about the objects' materials - normals, color, roughness etc. - into these buffers. In the second pass you take these textures and set them up as multitexture input, bind another buffer containing light information like positions, color, falloff etc., and then do a draw call with that. In that pass you use the same BRDF as in forward rendering, but the difference is that the object information is encoded in the g-buffer textures rather than in the vertex attributes of individual draws. The second pass goes to the single output framebuffer (again assuming no postprocess passes).
    Here's a tutorial on that technique: Deferred Shading.

    Shadows are yet another topic and I wouldn't try to do them until you have a grasp of basic lighting. In forward and deferred rendering they are an effect separate from lighting. Yes, I know, it sounds weird, but shadows are not part of lighting in those techniques. They are calculated in an entirely separate pass using techniques like shadow mapping, cascading maps, PCF, VSM, PCSS or others. For example, a good explanation of the VSM technique is given in one of the GPU Gems books I mentioned: Summed-Area Variance Shadow Maps. But again - I wouldn't worry about shadows just yet. Do basic direct lighting first and get comfortable with it before you move on.

    In raytracing shadows show up kinda naturally and automatically, because this technique is the closest to what real light does. Shadows are just places where the rays can't reach. Raytracing is a bit more advanced technique and I wouldn't start with it though, especially since it is now hardware accelerated on many GPUs, but OpenGL has no direct API support for it, so Vulcan or DirectX12 are better suited for that task..