Cache locality and QString
-
Isn't cache locality while still being able to write to your strings a bit contradictory?
Unless you won't resize the strings (or have a larger than needed capacity buffer for each).
-
Hi,
Another point: how big are your strings?
I remember work from a couple of years ago regarding SSO (Small String Optimization) that would avoid allocating extra data on the heap, but I am currently failing to find the references.
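For reference, the rough idea behind SSO is this (a minimal sketch with made-up sizes and names, not how any particular library implements it; as far as I know QString always heap-allocates its character data, while most std::string implementations do use SSO):

```cpp
#include <cstddef>
#include <cstring>

// Sketch of the SSO idea: strings short enough to fit in an inline
// buffer are stored inside the object itself, so they need no heap
// allocation and no extra cache miss to read. The capacity is made up.
class SsoString {
    static constexpr std::size_t InlineCapacity = 15;
    union {
        char inlineBuf[InlineCapacity + 1]; // short strings live here
        char *heapBuf;                      // long strings go to the heap
    };
    std::size_t len;
    bool onHeap;

public:
    explicit SsoString(const char *s) : len(std::strlen(s)) {
        if (len <= InlineCapacity) {
            onHeap = false;
            std::memcpy(inlineBuf, s, len + 1);
        } else {
            onHeap = true;
            heapBuf = new char[len + 1];
            std::memcpy(heapBuf, s, len + 1);
        }
    }
    ~SsoString() { if (onHeap) delete[] heapBuf; }

    // Copy/move/assignment omitted to keep the sketch short; a real
    // implementation must handle them to avoid double deletes.
    SsoString(const SsoString &) = delete;
    SsoString &operator=(const SsoString &) = delete;

    const char *data() const { return onHeap ? heapBuf : inlineBuf; }
    std::size_t size() const { return len; }
};
```
-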
@GrecKo said in Cache locality and QString:
Isn't cache locality while still being able to write to your strings a bit contradictory?
Yes, a bit. ;0)
Sometimes you only want to read values from a data table, in which case cache locality is important. Other times you need to modify arbitrary values in the table, in which case cache locality is not really an issue.
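For the read-mostly case, one possible layout (a sketch only; the names are made up): keep all the cell text back-to-back in a single buffer and store just an (offset, length) pair per cell, so a scan walks contiguous memory:

```cpp
#include <QString>
#include <QStringView>

// Sketch: all cell text is appended to one contiguous buffer and each
// cell keeps only an (offset, length) pair into it. Reading rows then
// touches mostly-contiguous memory; editing a cell means appending the
// new text or rebuilding the pool, which is the trade-off noted above.
struct StringPool {
    struct Ref { int offset; int length; };

    QString buffer; // all characters, back to back

    Ref add(const QString &s) {
        Ref r{ int(buffer.size()), int(s.size()) };
        buffer += s;
        return r;
    }
    // NB: a returned view is only valid until the buffer next grows.
    QStringView view(Ref r) const {
        return QStringView(buffer).mid(r.offset, r.length);
    }
};
```

A table cell then costs two ints instead of a separately allocated QString.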
-
Out of curiosity, what kind of data are you handling to have that many rows/columns?
-
It is data wrangling software for re-shaping, re-formatting, merging, cleaning, etc. of Excel, CSV, JSON, XML and other files:
https://www.easydatatransform.com/
It is pretty fast already. But a bit of extra performance never hurts. ;0)
-
So essentially a two-dimensional dynamically sized array of strings?
Were I in your shoes, and knowing the strings had a relatively short maximum allowable length, I would opt for a vector of fixed-size char[] entries. That way you ARE allowing for cache hits on row-major adjacent columns of data. The second you go for a dynamically allocated string resource you throw away any guarantees of cache availability of adjacent cells.
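Something along these lines (a minimal sketch, assuming an illustrative 32-byte per-cell limit):

```cpp
#include <array>
#include <cstring>
#include <vector>

// Sketch of the fixed-size layout suggested above, assuming every
// cell fits in MaxLen bytes. Cells in a row are adjacent in memory,
// so walking a row stays within a few cache lines per cell.
constexpr std::size_t MaxLen = 32;      // assumed per-cell limit
using Cell = std::array<char, MaxLen>;  // fixed size, no heap allocation

class Table {
    std::size_t cols;
    std::vector<Cell> cells;            // row-major: row * cols + col
public:
    Table(std::size_t rows, std::size_t columns)
        : cols(columns), cells(rows * columns, Cell{}) {}

    void set(std::size_t row, std::size_t col, const char *text) {
        Cell &c = cells[row * cols + col];
        std::strncpy(c.data(), text, MaxLen - 1);
        c[MaxLen - 1] = '\0';           // guarantee termination
    }
    const char *get(std::size_t row, std::size_t col) const {
        return cells[row * cols + col].data();
    }
};
```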
-
@Kent-Dorfman said in Cache locality and QString:
So essentially a two-dimensional dynamically sized array of strings?
Yes.
@Kent-Dorfman said in Cache locality and QString:
I would opt for a vector of fixed-size char[] entries.
I am reading in CSV files, Excel files, etc., so the strings can be any length at all.
I can scan the entire file to look for the longest string. But that comes with its own issues.
@Kent-Dorfman said in Cache locality and QString:
The second you go for a dynamically allocated string resource you throw away any guarantees of cache availability of adjacent cells.
Agreed. But even getting SOME cache hits would improve performance.
Also, if you are creating a million QStrings in one go, it seems a bit inefficient to do a million separate memory allocations (assuming that is what QString does).
-
@AndyBrice
Then you really need to do your own "memory allocation". Of course, separate memory allocations for many QStrings will not (at least not guaranteed) lead to one huge contiguous memory layout. Nor do I know of any other memory allocator which would guarantee to lay out many separate allocations of variable lengths consecutively.
I really wonder just how much real-time improvement you would see even if the memory was contiguous? You would need to try your own memory allocation to compare how much difference it really makes in practice, with everything else going on in your code.
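If it helps, the comparison could be shaped roughly like this (a sketch only; note the scattered strings here are allocated in one burst, so a general-purpose allocator may already place them close together, and real workloads will differ):

```cpp
#include <QDebug>
#include <QElapsedTimer>
#include <QString>
#include <QVector>

// Rough harness for the "measure it" suggestion above: time a pure
// read pass over one million individually allocated QStrings versus
// the same characters stored back to back in a single QString.
int main()
{
    const int n = 1'000'000;
    QVector<QString> scattered;
    scattered.reserve(n);
    QString contiguous;
    for (int i = 0; i < n; ++i) {
        QString s = QStringLiteral("cell-%1").arg(i);
        scattered.append(s);
        contiguous += s;
    }

    QElapsedTimer timer;
    qint64 sum = 0; // printed at the end so the loops are not optimized away

    timer.start();
    for (const QString &s : scattered)
        for (QChar ch : s)
            sum += ch.unicode();
    qDebug() << "scattered:" << timer.nsecsElapsed() << "ns";

    timer.restart();
    for (QChar ch : contiguous)
        sum += ch.unicode();
    qDebug() << "contiguous:" << timer.nsecsElapsed() << "ns" << sum;
    return 0;
}
```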
-
Ok, thanks for the feedback. It looks like there is no straightforward way to improve performance while keeping the flexibility I need.
-
@AndyBrice said in Cache locality and QString:
Ok, thanks for the feedback. It looks like there is no straightforward ways to improve performance, while keeping the flexibility I need.
Correct. The optimization will come at the cost of working only on a predictable subset of real-world data. Because you've stated that you need a generalized solution, the optimization tricks won't work reliably.
If you can assign hard limitations to your dataset...THEN you can consider what kinds of optimizations make sense.