Problem with Drawing a Line using QSGNode
I am trying to draw a plot curve using a QSGNode. I have 50,000 points which I add to the geometry of the QSGNode, and the drawing mode is
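The code and drawing mode were not shown, but a typical `updatePaintNode()` for a curve like this looks roughly as follows. This is a sketch, assuming the drawing mode was `QSGGeometry::DrawLineStrip` (the natural choice for a continuous curve); the class name `CurveItem` and member `m_points` are placeholders, not from the question:

```cpp
// Sketch of a typical scene-graph line-strip setup; CurveItem and
// m_points (a QVector<QPointF> of samples) are assumed names.
QSGNode *CurveItem::updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *)
{
    auto *node = static_cast<QSGGeometryNode *>(oldNode);
    if (!node) {
        node = new QSGGeometryNode;
        auto *geometry = new QSGGeometry(
            QSGGeometry::defaultAttributes_Point2D(), 0);
        geometry->setDrawingMode(QSGGeometry::DrawLineStrip);
        geometry->setLineWidth(1);
        node->setGeometry(geometry);
        node->setFlag(QSGNode::OwnsGeometry);
        auto *material = new QSGFlatColorMaterial;
        material->setColor(QColor(Qt::blue));
        node->setMaterial(material);
        node->setFlag(QSGNode::OwnsMaterial);
    }
    QSGGeometry *g = node->geometry();
    g->allocate(m_points.size());                 // one vertex per sample
    QSGGeometry::Point2D *v = g->vertexDataAsPoint2D();
    for (int i = 0; i < m_points.size(); ++i)
        v[i].set(m_points[i].x(), m_points[i].y());
    node->markDirty(QSGNode::DirtyGeometry);
    return node;
}
```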
The problem is that the result looks like this:
Here is a zoomed-in view:
The result is no better if I enable anti-aliasing by setting the sample count on the window:
As you can see, there are white spaces in the line. However, the line should be continuous. In the above picture there should be around 10000 data points. If I zoom in a lot (probably until a segment spans several pixels) the problem disappears. What can I do to get a nice representation of my line? I would prefer not to down-sample my line, as I need to be able to zoom in.
Unfortunately, line drawing isn't one of OpenGL's strong points. The amount of overdraw you're doing, with what must be tens to hundreds of line segments crammed into each single pixel, is going to be a problem too, breaking any reasonable assumptions hardware or software antialiasing might make. Projects I've worked on which thought they could use OpenGL's native line drawing all eventually migrated to an explicit triangle-mesh-based approach as a way of gaining control over visual quality (nice write-up here by someone also dealing with the issues). However, I doubt that approach will help much with the extreme overdraw... thinking about how to do some sort of data reduction to a simpler geometry might be your best hope. This is generally called "level-of-detail reduction", and it's a common approach to reducing rendering artefacts across all sorts of primitives and textures... I've not seen much on doing it with line segments, though. This might give some ideas.
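For reference, the core of the triangle-mesh approach is just expanding each segment along its normal into a quad of the desired width. A minimal sketch in plain C++ (the `Vec2` type and `segmentToQuad` helper are illustrative, not Qt or OpenGL API):

```cpp
#include <array>
#include <cmath>

struct Vec2 { float x, y; };

// Expand one line segment (a -> b) into a quad of the requested width,
// returned as four corners in triangle-strip order: a+n, a-n, b+n, b-n.
std::array<Vec2, 4> segmentToQuad(Vec2 a, Vec2 b, float width)
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    // Unit normal to the segment; degenerate segments get a zero normal.
    float nx = len > 0.0f ? -dy / len : 0.0f;
    float ny = len > 0.0f ?  dx / len : 0.0f;
    float h = width * 0.5f; // offset each side by half the line width
    return {{ {a.x + nx * h, a.y + ny * h},
              {a.x - nx * h, a.y - ny * h},
              {b.x + nx * h, b.y + ny * h},
              {b.x - nx * h, b.y - ny * h} }};
}
```

Feed the resulting corners into a `DrawTriangleStrip` geometry (with degenerate joins between segments), and the GPU rasterises solid triangles instead of hairlines.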
Thank you for your answer. I still don't understand why there are gaps, though. Without anti-aliasing, if every pixel which contains a line segment is rendered in colour, then there should not be any gaps. If a very short line segment, e.g. 1/100th of a pixel does not give any colour to the pixel, then I should not see any line, as the data is evenly spaced.
If a very short line segment, e.g. 1/100th of a pixel does not give any colour to the pixel, then I should not see any line, as the data is evenly spaced.
But I think that's basically what's causing the gaps. If OpenGL is effectively rendering your teeny line segments by testing whether the pixel center intersects a 0.01x1 pixel quad, when the polyline changes direction you'll occasionally get a miss. If you look at the line joins in this image in that previously linked article and then consider that your lines are actually much much shorter than they are wide, then you can see you're going to have problems.
Regarding using a triangle mesh: I suppose one problem with this approach is that if one zooms into the line, the lines made up of triangles get thicker. That is obviously not acceptable for a plot. To keep the lines the same width, I suppose I would need to update the geometry on every zoom action, which is probably a bad idea performance-wise. Is there a different way to do this with a triangle mesh?
@maxwell31 With most of the level-of-detail reduction techniques I've seen, you prepare different static models for various power-of-two zoom levels (this is the computationally expensive bit), then have some computationally cheap way of blending between them as the view zooms (so there isn't an obvious jump when switching from one model to another). Use of MIP-maps for textures is the classic example, but it's also in widespread use for polygon models (you can render something in the distance with far fewer polygons and no one will notice), and there's a lot of visualisation literature on data reduction for 2D/3D point clouds too. However, I've not seen much on techniques for line segments and polylines myself. I'm guessing it's something GIS/mapping people must have had to deal with, though.
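For a 1-D plot curve specifically, one cheap and widely used form of level-of-detail reduction is min/max decimation: for each bucket of consecutive samples, keep only the bucket's minimum and maximum, which preserves the curve's visual envelope while drastically cutting the vertex count. A minimal sketch in plain C++ (the function name and the evenly-spaced-samples assumption are mine, not from Qt):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Reduce an evenly spaced sample vector to two points per output bucket
// (the bucket's minimum and maximum), preserving the curve's envelope.
std::vector<float> minMaxDecimate(const std::vector<float> &samples,
                                  std::size_t buckets)
{
    std::vector<float> out;
    if (samples.empty() || buckets == 0)
        return out;
    std::size_t per = std::max<std::size_t>(1, samples.size() / buckets);
    for (std::size_t i = 0; i < samples.size(); i += per) {
        std::size_t end = std::min(samples.size(), i + per);
        auto [mn, mx] = std::minmax_element(samples.begin() + i,
                                            samples.begin() + end);
        out.push_back(*mn); // emit min then max so spikes survive
        out.push_back(*mx);
    }
    return out;
}
```

You would regenerate the decimated model per power-of-two zoom level (roughly one bucket per screen pixel column), which keeps each redraw cheap while still letting you zoom into the raw data.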
BTW here's another nice example of someone running into problems with OpenGL line drawing. You're drawing narrower, shorter lines than them... but the gaps will still be there at sub-pixel level, and there's a chance the pixel is considered "missed" as a result.
Also, if restricting yourself to Nvidia HW is an option (and using native OpenGL calls), it might be worth taking a look at the NV path rendering extension. This arguably drags OpenGL's line-drawing/vector graphics support into the 21st century. Unfortunately it's so huge and complex it seems unlikely any other vendor will ever implement it. There was another vector graphics initiative called OpenVG (which seemed to have multivendor support) but it seems to have fizzled out unfortunately.
I was just pondering how I'd do decent line drawing myself using "modern" OpenGL. It occurred to me that expanding/padding out line segments to pixel scale in a zoom-aware geometry-shader stage could be a viable technique (and an efficient one, with no preprocessing needed). A bit of googling led to people who'd had the same idea here and here.
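The geometry-shader idea sketched in GLSL, ignoring perspective and depth for a 2D plot: widen each incoming segment into a quad whose width is fixed in pixels, so it stays constant as the view zooms. The `u_viewportSize` and `u_lineWidthPx` uniforms are assumed names supplied by the application, not standard GLSL:

```glsl
#version 330 core
// Sketch: widen each line segment to a fixed on-screen pixel width.
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;

uniform vec2  u_viewportSize; // framebuffer size in pixels (assumed uniform)
uniform float u_lineWidthPx;  // desired line width in pixels (assumed uniform)

void main()
{
    vec2 p0 = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
    vec2 p1 = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;
    // Direction measured in pixel space, so the width is zoom-independent.
    vec2 dir    = normalize((p1 - p0) * u_viewportSize);
    vec2 normal = vec2(-dir.y, dir.x) * (u_lineWidthPx / u_viewportSize);
    gl_Position = vec4(p0 + normal, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p0 - normal, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p1 + normal, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p1 - normal, 0.0, 1.0); EmitVertex();
    EndPrimitive();
}
```

Joins between segments still need care (miters or round caps), which is what the linked write-ups spend most of their effort on.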