in qt3d if my file is larger than 2G ,upto 4G,how to handle this?
-
QByteArray m_NORBuffer;
Qt3DRender::QBuffer *m_VertexBuffer; // QAttribute::setBuffer() takes a QBuffer*, not a QByteArray
.....
m_VertexBuffer->setData(m_NORBuffer); // the QBuffer wraps the QByteArray
m_positionAttr->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());
m_positionAttr->setVertexBaseType(Qt3DRender::QAttribute::Float);
m_positionAttr->setDataSize(3);
m_positionAttr->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
m_positionAttr->setBuffer(m_VertexBuffer);
m_positionAttr->setCount(vsize);
m_positionAttr->setByteStride(stride);
Above is my snippet. Because QByteArray stores its size as a plain int, how do I deal with vertex data larger than 2 GB? -
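For context, the 2 GB ceiling in the question comes from QByteArray (in Qt 5) reporting its size as a signed 32-bit int. A minimal plain-C++ sketch of why a 4 GB payload overflows that type (the helper name `exceedsIntRange` is just illustrative):

```cpp
#include <climits>
#include <cstdint>

// True when a byte count cannot be represented by a signed 32-bit int
// (the size type QByteArray uses in Qt 5).
constexpr bool exceedsIntRange(std::int64_t bytes)
{
    return bytes > INT_MAX;
}

// A 4 GB file fails the check, since INT_MAX is 2147483647 (~2 GB)
// on common platforms:
static_assert(exceedsIntRange(4000000000LL), "4 GB does not fit in int");
```

(Worth noting: in Qt 6 the container size types became qsizetype, which is 64-bit on 64-bit platforms, so this particular limit was lifted there.)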
- Use a custom object that can hold more than a signed int in size.
- Use multiple QByteArrays to hold your data.
- Go raw with a simple unsigned char buffer.
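A minimal sketch of the "multiple buffers" idea in plain C++ (std::vector&lt;unsigned char&gt; stands in for QByteArray here; the function name and the chunk size used in the example are illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Split a large payload into chunks that each stay well under the
// signed-int size ceiling. Each chunk could then back its own buffer.
std::vector<std::vector<unsigned char>>
splitIntoChunks(const unsigned char *data, std::uint64_t totalSize,
                std::size_t chunkSize)
{
    std::vector<std::vector<unsigned char>> chunks;
    std::uint64_t offset = 0;
    while (offset < totalSize) {
        const std::size_t n = static_cast<std::size_t>(
            std::min<std::uint64_t>(chunkSize, totalSize - offset));
        chunks.emplace_back(data + offset, data + offset + n); // copy one slice
        offset += n;
    }
    return chunks;
}
```

In a Qt3D scene each chunk would then feed its own QBuffer/QAttribute pair, with setCount() and byteOffset() chosen so no single buffer crosses the 2 GB line; pick a chunk size that is a multiple of the vertex stride so vertices are never split across chunks.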
-
@jimfar Show me the code or point me at what you are talking about with "the official recommendation".
Custom object would just be a simple class that worked like QByteArray but could store more than a signed int.
// requires <cstring> for std::memcpy
class MyByteArray : public QObject
{
    Q_OBJECT
public:
    MyByteArray(const unsigned char *data, unsigned int szData)
        : data_(new unsigned char[szData]), dataSize_(szData)
    {
        // Allocate the buffer before copying into it; memcpy into an
        // uninitialized pointer is undefined behaviour.
        std::memcpy(data_, data, szData);
    }
    ~MyByteArray() override { delete[] data_; }

    const unsigned char *data() const { return data_; }
    unsigned int size() const { return dataSize_; }

private:
    unsigned char *data_ = nullptr;
    unsigned int dataSize_ = 0;
};
Something like that. I mean it's super simple and should be expanded on quite a bit for production use, but that right there gets you to an unsigned int, which covers 4 GB. Honestly, though, declaring more than 2 GB of memory in a single variable is almost guaranteed to give you trouble on most operating systems; getting a contiguous block of memory 2 GB in size will be hard.
unsigned char *x = new unsigned char[4000000000];
is fairly unlikely to succeed. Even on 64-bit OSes, which are the only ones that can allocate more than 2 GB per process anyway, it would be unlikely you could find a single block that size available for use. This depends on the RAM available on the system and the fragmentation of the heap, but basically it would be very hard to get a single chunk of memory like that. You'd be better off breaking up your data into multiple smaller sets of vertices.
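One way to probe whether such a block is actually obtainable is a std::nothrow allocation, which returns nullptr on failure instead of throwing std::bad_alloc (a sketch; the function name is illustrative):

```cpp
#include <cstddef>
#include <new>

// Probe for a single contiguous block of `bytes` bytes.
// Returns nullptr if the OS/heap cannot satisfy the request.
// Note: new unsigned char(4000000000) would allocate ONE byte;
// the array form new[...] is what a multi-GB request needs.
unsigned char *tryAllocate(std::size_t bytes)
{
    return new (std::nothrow) unsigned char[bytes];
}
```

The caller checks for nullptr and frees the block with delete[]; for a 4 GB request this is exactly where you would see the failure described above.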
-
Are you expecting to be able to render more than 2 GB of data at useful frame rates in real time? Once you add textures, framebuffers, and other resources, this will be significantly larger than the available memory on most GPUs, so you probably need to look at breaking things up regardless of the API details.
(Also, on some platforms integer types would just be 64-bit anyway, so your question is potentially moot without some more details about what exactly you are trying to do, what platform you are on, and hopefully some additional code that explains the context of your snippet.)
-
@wrosecrans Typically if you want 64 bits explicitly you need long long, which the standard guarantees is at least 64 bits. Most 64-bit OSes will still have a 4-byte int, which is 32 bits. It's why it's dangerous to use int * to store generic pointers in C++. -
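The type widths under discussion can be checked directly; the guarantee on long long holds on every conforming platform, not just 64-bit OSes, and uintptr_t (where provided) is the integer type that actually round-trips a pointer. A standalone sketch:

```cpp
#include <climits>
#include <cstdint>

// Guaranteed by the C++ standard everywhere, not just on 64-bit OSes:
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long >= 64 bits");

// If an integer must hold a pointer value, uintptr_t is wide enough to
// round-trip it; plain int generally is not on a 64-bit platform.
std::uintptr_t pointerToInteger(const void *p)
{
    return reinterpret_cast<std::uintptr_t>(p);
}
```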
@ambershark said in in qt3d if my file is larger than 2G ,upto 4G,how to handle this?:
Most 64-bit OSes will still have a 4 byte integer which is 32-bits.
Correct. int stays 32-bit and pointers change to 64-bit.
It's why it's dangerous to use int * to store generic pointers in c++.
I think you mean it's dangerous to use int to store pointers, because int * is a pointer to int. -
@aha_1980 said in in qt3d if my file is larger than 2G ,upto 4G,how to handle this?:
I think you mean, it's dangerous to use int to store pointers, because int * is a pointer to int.
Oops, lol. Yeah, that's exactly what I meant. I.e. I sometimes like to output pointer addresses in streams, and when porting my old 32-bit code to 64-bit I would do things like cast a pointer to (int) to output it, and of course I'd get a bad value on a 64-bit OS.
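For the record, the truncation-safe ways to print an address are streaming the pointer as void* (operator&lt;&lt; prints the full address) or going through uintptr_t rather than int. A sketch (the helper name is illustrative):

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Format a pointer address as hex text without truncating it to 32
// bits: go through uintptr_t, never int.
std::string pointerToHex(const void *p)
{
    std::ostringstream os;
    os << std::hex << reinterpret_cast<std::uintptr_t>(p);
    return os.str();
}
```

Equivalently, `std::cout << static_cast<const void *>(ptr)` prints the complete address directly.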