Cache locality and QString
-
I have a data table, which is effectively a QList<QList<QString>>, where each row of data is a QList<QString>. I am looking into ways to speed up writing and reading rows of data. What I ideally need is for the data of consecutive QStrings in a list to have cache locality, so that after reading the first of two adjacent QStrings, the second QString's data stands a good chance of already being in cache.
I guess I could use something like QString::fromRawData() and manage all the memory myself. But I lose the benefits of reference counting and it looks quite ugly and error prone.
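To illustrate what I mean by managing the memory myself, this is roughly the shape of the fromRawData() approach (just a sketch; RawRow and makeRow are made-up names, and the buffer must outlive the cells and never reallocate while they exist, which is exactly the fragility I'm worried about):

```cpp
#include <QList>
#include <QString>
#include <QVector>

// Hypothetical "row buffer": all characters for one row live in a single
// contiguous allocation, and each cell is a QString created with
// QString::fromRawData(), so it references that buffer instead of owning
// its own heap block.
struct RawRow {
    QVector<QChar> storage;   // one contiguous block of characters for the whole row
    QList<QString> cells;     // non-owning QStrings pointing into `storage`
};

RawRow makeRow(const QList<QString> &values)
{
    RawRow row;
    qsizetype total = 0;
    for (const QString &v : values)
        total += v.size();
    row.storage.reserve(total);                     // single allocation up front

    for (const QString &v : values) {
        const qsizetype offset = row.storage.size();
        for (QChar c : v)
            row.storage.append(c);                  // stays within the reserved capacity
        row.cells.append(QString::fromRawData(row.storage.constData() + offset,
                                              v.size()));
    }
    return row;
}
```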
Better might be to have some sort of custom memory allocator. But I don't see any way to do that.
Has anyone tried to do something like this?
-
I'm not aware of any Qt containers providing for custom allocators in the way that the STL containers do. Maybe someone else knows better... However, if you're really to the point of considering a custom allocator solution then I'd strongly suggest taking a step back and reconsidering your overall design. Custom heap management is complex and would probably just end up being a band-aid on a larger algorithmic problem. I think to get any really useful assistance you'd need to explain your data table in detail: expected size, how it is populated, how it is accessed, etc.
-
QString is an implicitly shared class, which is achieved through a private object held as a pointer in the QString. Even if the QString objects were consecutive in memory, in general the string data they contain is heap-allocated elsewhere.
If you have a large number of QStrings drawn from a small number of shared variants, you may get a degree of caching efficiency where the shared data blocks are cached.
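To illustrate the sharing (a minimal sketch; constData() is used here only to show which data block each string points at):

```cpp
#include <QString>

void sharingDemo()
{
    QString a = QStringLiteral("repeated value");
    QString b = a;                               // shallow copy: just a reference count bump
    Q_ASSERT(a.constData() == b.constData());    // both refer to the same data block

    b[0] = QLatin1Char('R');                     // first write detaches b into its own block
    Q_ASSERT(a.constData() != b.constData());
}
```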
-
@ChrisW67 said in Cache locality and QString:
QString is an implicitly shared class, which is achieved through a private object held as a pointer in the QString. Even if the QString objects were consecutive in memory, in general the string data they contain is heap-allocated elsewhere.
Yes, I understand that.
@ChrisW67 said in Cache locality and QString:
If you have a large number of QStrings drawn from a small number of shared variants, you may get a degree of caching efficiency where the shared data blocks are cached.
I try to do that.
I am just investigating whether there is any way to be more efficient about allocating the memory for QStrings (and the data they reference) that are likely to be read at the same time, for fewer memory allocations and better cache locality.
I guess you could allocate a row as a single QString (rather than a QList<QString>) and then make each cell a QStringView into that QString. But that doesn't work well if you want to change one of the QStrings.
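Something like this is what I have in mind (just a sketch; ViewRow and makeViewRow are hypothetical names, and the views dangle as soon as the backing string is modified):

```cpp
#include <QList>
#include <QString>
#include <QStringView>

// One backing QString per row holds every cell's characters back to back,
// so reading the cells in order walks a single contiguous buffer.
struct ViewRow {
    QString backing;            // all characters of the row, one allocation
    QList<QStringView> cells;   // non-owning views into `backing`
};

ViewRow makeViewRow(const QList<QString> &values)
{
    ViewRow row;
    for (const QString &v : values)
        row.backing += v;                              // concatenate all cells

    const QChar *base = row.backing.constData();       // stable while `backing` is untouched
    qsizetype offset = 0;
    for (const QString &v : values) {
        row.cells.append(QStringView(base + offset, v.size()));
        offset += v.size();
    }
    return row;                                        // views dangle if `backing` is later modified
}
```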
-
Hi,
Another point: how big are your strings ?
I remember work from a couple of years ago regarding SSO (Small String Optimization) that would avoid allocating extra data on the heap for short strings, but I am currently failing to find the references.
-
Isn't cache locality while still being able to write to your strings a bit contradictory?
Unless you won't resize the strings (or have a larger than needed capacity buffer for each).
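For example, something along these lines if you go the over-allocation route (a sketch; the 64 is an arbitrary guess at a typical maximum cell length):

```cpp
#include <QString>

// If cells must stay individually editable, each QString can be given more
// capacity than it currently needs, so small in-place edits do not force a
// reallocation (they can still detach a shared copy).
void prepareCell(QString &cell, qsizetype expectedMaxLength = 64)
{
    cell.reserve(expectedMaxLength);
}
```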
-
@GrecKo said in Cache locality and QString:
Isn't cache locality while still being able to write to your strings a bit contradictory?
Yes, a bit. ;0)
Sometimes you only want to read values from a data table, in which case cache locality is important. Other times you need to modify arbitrary values in the table, in which case cache locality is not really an issue.
-
Out of curiosity, what kind of data are you handling to have that many rows/cols ?
-
It is data wrangling software for re-shaping, re-formatting, merging, cleaning etc. of Excel, CSV, JSON, XML and other files:
https://www.easydatatransform.com/
It is pretty fast already. But a bit of extra performance never hurts. ;0)
-
So essentially a two-dimensional dynamically sized array of strings?
Were I in your shoes, and knowing that the strings have a relatively short maximum allowable length, I would opt for a vector of fixed sized char[] entries. That way you ARE allowing for cache hits on row-major adjacent columns of data. The second you go for a dynamically allocated string resource you throw away any guarantees of cache availability of adjacent cells.
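Something along these lines (just a sketch; MaxCellLen, FixedCell, etc. are made-up names, and it silently truncates anything longer than the limit):

```cpp
#include <algorithm>
#include <array>
#include <vector>
#include <QString>
#include <QStringView>

// Hypothetical fixed upper bound on cell length, in UTF-16 code units.
constexpr int MaxCellLen = 32;

struct FixedCell {
    std::array<char16_t, MaxCellLen> text{};   // lives inline in the row, cache friendly
    int length = 0;                            // code units actually used
};

using FixedRow = std::vector<FixedCell>;       // cells of one row laid out back to back

// Store a QString into a fixed cell (silently truncating -- illustration only).
inline void setCell(FixedCell &cell, const QString &value)
{
    cell.length = static_cast<int>(qMin<qsizetype>(value.size(), MaxCellLen));
    std::copy_n(reinterpret_cast<const char16_t *>(value.utf16()),
                cell.length, cell.text.begin());
}

// Read a cell without allocating: the view points straight into the row storage.
inline QStringView cellView(const FixedCell &cell)
{
    return QStringView(cell.text.data(), cell.length);
}
```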
-
@Kent-Dorfman said in Cache locality and QString:
So essentially a two-dimensional dynamically sized array of strings?
Yes.
@Kent-Dorfman said in Cache locality and QString:
I would opt for a vector of fixed sized char[] entries.
I am reading in CSV files, Excel files, etc., so the strings can be any length at all.
I can scan the entire file to look for the longest string, but that comes with its own issues.
@Kent-Dorfman said in Cache locality and QString:
The second you go for a dynamically allocated string resource you throw away any guarantees of cache availability of adjacent cells.
Agreed. But even getting SOME cache hits would improve performance.
Also, if you are creating a million QStrings in one go, it seems a bit inefficient to do a million separate memory allocations (assuming that is what QString does).
-
@AndyBrice
Then you really need to do your own "memory allocation". Of course separate memory allocations for many QStrings are not guaranteed to lead to one huge contiguous memory layout. Nor do I know of any other memory allocator which would guarantee to lay out many separate allocations of variable lengths consecutively.
I really wonder just how much real-time improvement you would see even if the memory was contiguous? You would need to try your own memory allocation to compare how much difference it really makes in practice, with everything else going on in your code.
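For example, a rough timing harness along these lines (not a rigorous benchmark; compareLayouts and the pre-joined rowsAsSingleStrings layout are just made up for illustration) would let you measure it on your real data:

```cpp
#include <functional>
#include <QDebug>
#include <QElapsedTimer>
#include <QList>
#include <QString>

// Time a single scan of the data (rough, single-run measurement).
static qint64 timeNs(const std::function<void()> &work)
{
    QElapsedTimer timer;
    timer.start();
    work();
    return timer.nsecsElapsed();
}

// `rowsAsSingleStrings` is a hypothetical pre-joined layout: all the cells of
// a row concatenated into one QString.
void compareLayouts(const QList<QList<QString>> &table,
                    const QList<QString> &rowsAsSingleStrings)
{
    qint64 sum1 = 0, sum2 = 0;

    const qint64 scattered = timeNs([&] {
        for (const QList<QString> &row : table)
            for (const QString &cell : row)
                for (QChar c : cell)
                    sum1 += c.unicode();        // touches one heap block per cell
    });

    const qint64 contiguous = timeNs([&] {
        for (const QString &row : rowsAsSingleStrings)
            for (QChar c : row)
                sum2 += c.unicode();            // walks one contiguous block per row
    });

    qDebug() << "scattered:" << scattered << "ns  contiguous:" << contiguous
             << "ns  (checksums" << sum1 << sum2 << ")";
}
```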
-
Ok, thanks for the feedback. It looks like there is no straightforward way to improve performance while keeping the flexibility I need.
@AndyBrice said in Cache locality and QString:
Ok, thanks for the feedback. It looks like there is no straightforward way to improve performance while keeping the flexibility I need.
Correct. The optimization will come at the cost of working only on a predictable subset of real-world data. Because you've stated that you need a generalized solution, the optimization tricks won't work reliably.
If you can assign hard limitations to your dataset... THEN you can consider what kinds of optimizations make sense.