Problem moving a Qt OpenGL program from Windows 7 to a Surface Pro running Windows 10
-
I have been writing a Qt application that uses OpenGL (I have been using Visual Studio). Things have been going okay, and all the development has taken place on my laptop, running Windows 7. However, the real target is a Microsoft Surface Pro 4, running Windows 10.
So, I have a program that runs fine on my laptop but totally crashes on the Surface Pro. I am using Qt 5.9 and running OpenGL 4.4.0 (build 20.19.15.4326) on the laptop and OpenGL 4.4.0 (build 20.19.15.4463) on the Surface Pro. (I don't have a development environment on the Surface, so I can't set breakpoints. I have been able to zero in on my problem, but I don't know how to solve it.)
At times during the program, I need to display a different collection of 'points' via OpenGL.
The function that accepts the new 'point' data looks like this:
(mPointCloudVbo is my QOpenGLBuffer)
mPointCloudVbo.release();
if (mPointCloudVbo.bind()) {
    // ....print a message (this path seems to always be taken)
} else {
    // ....print a message
}

mPointCloudMappedMemory = mPointCloudVbo.mapRange(0,
        (pointsAdded * sizeof(GLfloat)), QOpenGLBuffer::RangeWrite);
// ....print the address returned to me

if (NULL == mPointCloudMappedMemory) {
    yikes();
} else {
    //
    // the pointer is not NULL...so use it
    // get the points, one at a time, and write each to the address
    // mapRange returned to me
    //
    unsigned int index;
    for (index = 0; index < pointsAdded; index++) {
        GLfloat tempValue;
        tempValue = mQtPointCloud.data()[index];
        *(static_cast<GLfloat *>(mPointCloudMappedMemory)) = tempValue;
        mPointCloudMappedMemory = (void *)((unsigned int)mPointCloudMappedMemory + sizeof(GLfloat));
    }
}

if (mPointCloudVbo.unmap()) {
    // ....print a message (this path seems to always be taken)
} else {
    // ....print a message
}
This program works fine on my laptop (running Windows 7) and crashes on the Surface Pro (running Windows 10) when I try to access the memory mapRange returned to me.
The two questions I have are: 1) am I doing something wrong, and 2) is there some other way to update the contents of the Vertex Buffer Object than what I am doing?
Thanks for reading this.
-
@Bradzo
you can also edit a post of yours ;) -
@Bradzo said in Problem moving a Qt OpenGL program from Windows 7 to Surface Pro running Windows 10:
*(static_cast<GLfloat *> (mPointCloudMappedMemory)) = tempValue;
mPointCloudMappedMemory = (void *) ((unsigned int)mPointCloudMappedMemory + sizeof(GLfloat));
Hi, I pretty much guarantee you are crashing in these lines. Working with memory like this is incredibly dangerous, as you are finding out. Differences in platform, bit depth, etc. will cause access violations here.
My guess here (without a debugger) is that you are going from 64-bit to 32-bit or vice versa. That will make unsigned int a different size than you are expecting, and potentially GLfloat as well. If either of these are off then you have your crash.
Is there a reason you are using memory like this? It is a very old C style way of doing things from the 90s. Not something you see a lot of any more due to the fact that it is very unsafe. It's not like you can't do this, and in some cases I still use this method, but you have to be super careful and fully understand the implications of the casts and how sizeof() works. Right off I see you casting a pointer to unsigned int, which will be 32-bit, so if you are on a 64-bit platform you are 4 bytes too short on your memory address.
-
@ambershark
Thanks for taking the time to answer. Yeah, exactly, that is where it is crashing. That much I know. However, the development computer, the Surface Pro, and the compiler are all 64-bit. The reason I am doing this is that this is an OpenGL program. I have asked OpenGL to create a Vertex Buffer Object for me. It isn't created in the program's address space, but rather in the address space of the graphics pipeline (the GPU). In order to get access to it, you ask OpenGL to map it into the CPU address space via the call:
mPointCloudVbo.mapRange(0, (pointsAdded * sizeof(GLfloat)), QOpenGLBuffer::RangeWrite);
This function returns a pointer to void (void *), so that is what I get to work with.
I have seen this done in several examples of OpenGL code, both on-line and in some OpenGL books I have. It works fine on several desktop/laptop machines around the office, but not on the Surface Pro. I have been researching other ways to structure the program to get the same results.
Thanks again for your time.
-
@Bradzo Ah that makes sense. I haven't done any direct opengl programming before.
It's good to know both platforms are 64-bit, but I still feel like there is an issue with the math on the memory. It's possible it could be something else, but the fact that it works on some platforms and not others screams sizing issue to me.
Can you throw it in a debugger on the Surface Pro? Remote debugging? Something that lets you see that memory? I'm not familiar with the Surface Pro (not much of a Windows guy); is it potentially a big-endian processor while your desktops are little-endian? That could cause this crash as well.
-
Hello!
First of all: I experienced buffer mapping to be a complete pain performance-wise for many graphics cards, as it involves a lot of synchronisation. So may I introduce you to the classic way of updating a vertex buffer?
Initialization:
m_vertexBuffer = new QOpenGLBuffer(QOpenGLBuffer::VertexBuffer);
if (!m_vertexBuffer->create() || !m_vertexBuffer->bind()) {
    // error handling
}
m_vertexBuffer->setUsagePattern(QOpenGLBuffer::DynamicDraw);
m_vertexBuffer->allocate(sizeof(float) * vertexStride * vertexCount);
m_vertexBuffer->release();
Update:
if (m_needsUpdate) {
    m_vertexBuffer->write(0, pointerToVertexData, sizeof(float) * vertexStride * vertexCount);
    m_needsUpdate = false;
}
Render:
m_vertexArray->bind();
m_vertexBuffer->bind();
// bind other stuff / modify shader program
// update vertex data if needed, see above

// now we have to enable the vertex attrib arrays and bind them to shader IN variables
// we now assume your vertex data looks like this:
// { x, y, z }, { x, y, z } ...
// and in the shader program: "layout(location = 0) in vec3 pos;"
gl->glEnableVertexAttribArray(0); // location of vec3 pos
gl->glVertexAttribPointer(0 /* location */,
                          3 /* components per vertex */,
                          GL_FLOAT, GL_FALSE,
                          3 * sizeof(GLfloat) /* stride is in bytes */,
                          nullptr /* offset of first element (nullptr since pos starts at the beginning of the buffer) */);
gl->glDrawArrays(GL_POINTS, 0 /* first vertex */, vertexCount);
If you have any questions, feel free to ask me!
-
It might also be useful if you showed us your shader program. Many NVIDIA graphics cards use EGL (GLES) as a backend and won't compile shader programs that declare a version attribute other than "#version 100" or "#version 300 es" :). -