Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?
-
@Asperamanca said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
I think it may be worthwhile to step back a bit and take a look at the boundary conditions:
Indeed I think that's wise :)
How costly is it to create a MyElement?
Not much really: MyElement is only storing a QMap of (ushort, double). As the key comes from an enum, I know there is a maximum number of entries, which is less than 500. In practice, I think most of them would have between 2 and 10 entries max. (I'll have to verify that with the business...)
The mass in fact is not really needed, it is just a shortcut that represents the sum of the values of the Map.
What data is used to create a MyElement, and where is it stored?
Typically, the MyElements are created by cloning an existing one and using an external app (in Fortran) that will do some heavy computations on it to eventually produce a new MyElement different from the one given as input.
All MyElements are stored on the heap.
How often are MyElement-Classes shared between MainObjects, and what are typical numbers for sharing (i.e. is the same object shared twice or 100 times)?
I don't have access to that information. I'm not sure how I could hack the Java WeakHashMap in order to get it. The only thing I did was to dump periodically the size of the cache, its number of distinct keys, and finally increment a counter each time an entry is found in the cache and thus shared.
On a simple example, the figures are:
Max Nb MyElements in cache : 35146 (reused: 158719, nb keys: 33117)
So I just know that 158719 objects are shared among 35146, but I have no clue about the distribution.
Is there a meaningful way to combine multiple MyElement classes into a bigger group of object, where caching would make more sense (i.e. will it often happen that certain groups of MyElement are used together)?
This is something also implemented in the Java code but it wasn't used in production in the end. What they did was to use exactly the same principle of cache sharing on another object that encapsulates several MyElements.
From what I saw, I think it is really worth it to keep the caching on MyElement and then maybe add the other caching on the bigger structure on top. These don't seem incompatible to me.
Will properties of MyElement change after construction?
No, they are final when stored in the cache to be shared. If they are in the cache, it means that at least 1 MainObject is referencing them. If eventually the last MainObject stops using one, then that MyElement is automatically removed from the cache (cleaned up).
This is done automatically by the Java WeakHashMap, and I've implemented that by making the MyElement a QObject and connecting its destroyed signal to the Map, to which I send the key.
Do MyElements with the exact same mass have the same properties?
As I said, the mass is not really relevant. It is the sum of the properties. It is used for quick access and also as a first, fast way to compare two MyElements.
Also, I'm not sure using a hash as a key is smart. What happens if you get multiple MyElements with the same hash value? Will your cache give a "wrong" MyElement to a MainObject, or won't it matter, as long as the hash value is the same? Or would the MainObject have to create the MyElement it wants, and compare it to the one in the cache?
Well I'm not using a hash as a key. If you look at the code (it's here), the key is a QSharedPointer<WeakCacheKey>.
WeakCacheKey being a wrapper on a QWeakPointer<MyElement>.
The hashing function of a WeakCacheKey is storing the hashCode just to be able to remove an entry from the CentralizingWeakCache when a MyElement is destroyed. Indeed, at this point we can't dereference the MyElement to compute its hash.
In the normal insertion in the CentralizingWeakCache, I'm returning:
qHash(*(sharedPtr.data()));
which will end up in the hashing function of the MyElement (called SharedObject in my implementation):
inline uint qHash(const SharedObject & sharedObject) { return qHash(sharedObject._value); }
If I have some collisions on the hash key, then operator== is used on my keys, which I've reimplemented to dereference the weak pointers if possible:
inline bool operator ==(const QSharedPointer<WeakCacheKey> &left, const QSharedPointer<WeakCacheKey> &right)
{
    // check if weakPtrs are null (same state)
    bool isNull = left->_weakPtr.isNull(), otherIsNull = right->_weakPtr.isNull();
    if ( (isNull && !otherIsNull) || (!isNull && otherIsNull))
        return false;
    if (isNull && otherIsNull)
        return true;

    // both weakPtrs are not null, let's get sharedPtrs
    QSharedPointer<SharedObject> ptr = left->_weakPtr.toStrongRef(), otherPtr = right->_weakPtr.toStrongRef();
    isNull = ptr.isNull(), otherIsNull = otherPtr.isNull();
    if ( (isNull && !otherIsNull) || (!isNull && otherIsNull))
        return false;
    if (isNull && otherIsNull)
        return true;

    // Both sharedPtrs are not null, compare the values of the objects
    return *ptr == *otherPtr;
}
So I may be mistaken somewhere, but from what I've debugged it looks like it does what I want: the key of my cache is a QWeakPointer<MyElement>, but the hashing and comparison functions work on the value itself (if it still exists).
If the likelihood is high that different MyElements have a different mass (even if only ever so slightly), then you may have the option of creating a heap that is sorted by mass. The heap would allow you to quickly find all MyElements with a known mass. If there are multiple entries, you would have to iterate through them one by one until you find the one you want (or don't), but the same applies to the hash (although it is going to be less likely).
I'm not so familiar with heaps. I don't really need my cache to be sorted, I just need a quick access to the elements. I can't rely on the mass, but rather on the map of properties. So I imagine I would also need to kind of hash the map... so if I have to do that operation, it sounds more natural to use a QHash no?
Thanks for your help :)
-
I got a "real" use case. Just by running it until 16% of the simulation, here are the figures I got:
[MB_TRACE] Max Nb elem in cache : 593051 (reused: 5752723, nb keys: 586598
so we're probably talking about millions of entries in the QHash. The hashing function is quite good, the collision rate is only 1.09%
The sharing is quite huge: 5.75 million copies are avoided.
The thing I'm missing is information on the number of MyElements that are automatically purged from the Hash because they're not used anymore by any MainObject.
PS: I'll probably get some figures for the whole simulation tomorrow or the day after...
-
Thanks for the clarifications.
In that case, your approach looks feasible. A few minor points:
- Why use QSharedPointer<WeakCacheKey> instead of WeakCacheKey directly?
- Deriving SharedObject from QObject makes it pretty heavyweight. You can get the same done by having a single, mutex-protected notification object, giving each SharedObject a reference or pointer to it, and calling a method on the notification object in the destructor of SharedObject (see the sketch after this list)
- If you have class members that are only changed at construction time, make them const. If you build your algorithm on the decision that these cannot change, then you should make that intention explicit (possible compiler optimizations come as a bonus)
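Roughly what I have in mind for the notification object (just a sketch, all names are made up, adapt it to your actual classes):

#include <QMutex>
#include <QMutexLocker>

class WeakCacheKey;                      // your key wrapper, forward-declared

// One single notification object, shared by all SharedObjects.
class CacheNotifier
{
public:
    void notifyDestroyed(WeakCacheKey *key)
    {
        QMutexLocker lock(&_mutex);      // protects the cache's QHash
        // remove the entry for 'key' from the cache's QHash here
    }
private:
    QMutex _mutex;
};

class SharedObject
{
public:
    explicit SharedObject(CacheNotifier *notifier) : _notifier(notifier) {}
    ~SharedObject()
    {
        if (_notifier)
            _notifier->notifyDestroyed(_key);   // a plain call, no QObject / signal-slot needed
    }
private:
    CacheNotifier *_notifier = nullptr;
    WeakCacheKey  *_key      = nullptr;
};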
-
@Asperamanca said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Why use QSharedPointer<WeakCacheKey> instead of WeakCacheKey directly?
Well this is just to avoid circular header dependency and not include WeakCacheKey header in SharedObject.h but just a forward declaration. So SharedObject holds a pointer to its key in the Map. I made it a shared pointer so I don't have to deal with the deletion. (I was also using it to test whether the key was nullptr, and thus know whether the SharedObject is stored in the Cache.)
Can I get a pointer to the key of a QHash? I don't think so, right? That is why I made the key of the QHash a pointer directly.
Deriving SharedObject from QObject makes it pretty heavyweight. You can get the same done by having a single, mutex-protected notification object, giving each SharedObject a reference or pointer to it, and calling a method on the notification object in the destructor of SharedObject
You're totally right. I took inspiration from some code I've read before... (it's in French but the code isn't; you can find here an implementation of a WeakHashTable)
I've updated the code on GitHub; indeed there is no need to have either the SharedObject or the CentralizingWeakCache inherit from QObject, nor to rely on signal/slot communication.
When the Cache puts a SharedObject in its QHash, it can just give the key and a handle on itself to the SharedObject.
Then the destructor of the SharedObject looks like this:
SharedObject::~SharedObject()
{
    qDebug() << "[SharedObject::~SharedObject] destroying " << _value;
    if (_cache)
        _cache->remove(_key);
}
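And the insertion side is roughly this (a simplified sketch, not the exact code I pushed; add and setCacheInfo are just illustrative names, _mutex and _hash are the cache's members):

void CentralizingWeakCache::add(const QSharedPointer<SharedObject> &sharedObject)
{
    QSharedPointer<WeakCacheKey> key(new WeakCacheKey(sharedObject));
    QMutexLocker lock(&_mutex);                    // _mutex protects _hash
    _hash.insert(key, sharedObject.toWeakRef());   // value is a QWeakPointer<SharedObject>
    sharedObject->setCacheInfo(this, key);         // give the key and a handle on the cache,
}                                                  // so the destructor above can call remove(_key)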
If you have class members that are only changed at construction time, make them const. If you build your algorithm on the decision that these cannot change, then you should make that intention explicit (possible compiler optimizations come as a bonus)
Well, that is true, but when I said the property map is constant, that is only the case for the interesting instances we share.
There are some temporary MyElements used in the app that can evolve before deciding to make them constant. You see what I mean?
So I'm not sure it would be worth it to create 2 distinct classes: the const MyElement and then a TmpMyElement without the constness.
I would need a copy to go from the TmpMyElement to the const MyElement when I want to save it, no?
-
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Well this is just to avoid circular header dependency and not include WeakCacheKey header in SharedObject.h but just a forward declaration.
If you give WeakCacheKey a cpp file, including a non-default destructor, you should be able to forward declare SharedObject, because after all, the WeakCacheKey only holds a pointer to SharedObject. Then you are free to include WeakCacheKey.h in SharedObject.h.
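In other words, something like this (a sketch; the member names are assumptions based on what you posted, the exact layout is up to you):

// WeakCacheKey.h
#include <QSharedPointer>
#include <QWeakPointer>

class SharedObject;                          // a forward declaration is enough here

class WeakCacheKey
{
public:
    explicit WeakCacheKey(const QSharedPointer<SharedObject> &obj);
    ~WeakCacheKey();                         // declared here, defined in the .cpp

    void setComputedHashCode(uint hashCode);

    QWeakPointer<SharedObject> _weakPtr;
    uint _hashCode     = 0;
    bool _hashComputed = false;
};

// WeakCacheKey.cpp
#include "WeakCacheKey.h"
#include "SharedObject.h"                    // the full definition is only needed here

WeakCacheKey::WeakCacheKey(const QSharedPointer<SharedObject> &obj) : _weakPtr(obj) {}
WeakCacheKey::~WeakCacheKey() {}

void WeakCacheKey::setComputedHashCode(uint hashCode)
{
    _hashCode     = hashCode;
    _hashComputed = true;
}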
There are some temporary MyElements used in the app that can evolve before deciding to make them constant. You see what I mean?
If the intention is that you normally will not want to change a SharedObject, but in some cases you do, I would suggest this approach:
class SharedObject
{
    //...declarations...
public:
    WeakCacheKey getCacheKey() const;
    QString getValue() const;
private:
    void setValue(const QString& arg);
    friend class SharedObjectWriter;
};

class SharedObjectWriter
{
public:
    SharedObjectWriter() = delete;
    SharedObjectWriter(QSharedPointer<SharedObject> sharedObject);
    void setValue(const QString& arg);
private:
    QSharedPointer<SharedObject> _sharedObject;
};
That way, you make it pretty clear that writing to a SharedObject is supposed to be an exception, and not the rule. You could also include checks that only SharedObjects without a valid cache key can be written to, etc.
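Hypothetical usage (with the names from the sketch above and a made-up getObjectFromSomewhere() helper):

QSharedPointer<SharedObject> obj = getObjectFromSomewhere();

QString current = obj->getValue();              // reading is the normal, unrestricted case

SharedObjectWriter writer(obj);                 // writing requires the explicit writer,
writer.setValue(QStringLiteral("new value"));   // which makes the intent stand out in code review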
-
@Asperamanca
About using a QSharedPointer<WeakCacheKey> as the key of my QHash: in fact there is a better reason than just the circular dependency in the headers.
If I put a simple WeakCacheKey, then the qHash function that would be called is:
uint qHash(const WeakCacheKey &cacheKey)
so I'm not able to modify cacheKey... yet for efficiency I want to store the result of the qHash inside it.
If I use a pointer or shared pointer, I'll end up in this one that I've just improved :)
inline uint qHash(WeakCacheKey * cacheKey)
{
    if (!cacheKey->_hashComputed && !cacheKey->_weakPtr.isNull())
    {
        QSharedPointer<SharedObject> sharedPtr = cacheKey->_weakPtr.toStrongRef();
        if (!sharedPtr.isNull())
            cacheKey->setComputedHashCode(qHash(*(sharedPtr.data()))); // save the hashCode
    }
    return cacheKey->_hashCode;
}
I didn't add a cpp file to WeakCacheKey, rather I just took out the implementation of
bool operator ==(const QSharedPointer<WeakCacheKey> &left, const QSharedPointer<WeakCacheKey> &right)
and moved it into the CentralizingWeakCache. This way I only need a forward declaration of SharedObject in WeakCacheKey.
(I'm not able to push the changes to GitHub from work but I'll do it tonight from home. The main change is the addition of this WeakCacheKey::_hashComputed boolean, initialized to false, which WeakCacheKey::setComputedHashCode sets to true at the same time it saves the hashCode.)
I see your point with the SharedObjectWriter, you don't make the values const, you just don't expose them directly and return a copy... I thought you were suggesting to declare a const QString in SharedObject (or a const QMap<> in MyElement)
I'll think about it, nice suggestion.
-
@kshegunov
Well, I've given more thought to using only implicit sharing.
So I would make MyElement derive from QSharedData and have the MainObjects use QSharedDataPointer<MyElement>.
The problem is that I will need to "merge" some instances, so for this I need a QHash in the middle... like in the solution I've implemented using WeakPointers.
I guess both the key and the value would be a QSharedDataPointer to the value I want to make unique.
so yeah I could fill the Cache like I'm doing with WeakPointers...
and also return the one already shared if there is one.
The issue is that my Cache holds a QSharedDataPointer and not a WeakPointer. So I'm never going to know when no more MainObjects are using a MyElement, and thus when I could delete some entries in the cache.
Do you see a way to do it?
Something else about the overhead of SharedPointers: I suppose internally there is pretty much the same thing within QSharedData, no? There must be an atomic int for the reference count and some synchronization to make it thread safe...
-
I stand by my claim that you're doing nothing for a high price. Especially since I took a peek at the code you put up and I saw you're using a mutex to sync the access to the global object. I have only questions for you now.
- Do you know how heavy a new is compared to a stack allocation?
- Do you realize how many bytes of overhead you're creating just to keep pointers around?
- Do you get that instead of doing a fully reentrant block-free copy of a small object you're locking a global structure that has your threads blocked for most of the time at high contention?
Just to wrap it up in a nice small bite - you could do a million copies in the time that each of the threads gets its share from that global mutex, and that doesn't even account for the pointer overhead, the refcounting and all that ... I am going to repeat myself now:
You're overengineering a solution to a problem that you don't have.
In the end it is your code, it's your decision how you implement it, however what you asked me I answered already - you don't need any of that stuff above, C++ ain't Java!
-
@kshegunov
How do you dynamically create elements on the stack?... If I allocate on the heap it's because I don't have the choice: I don't know in advance whether I will create objects, nor how many...
Something I don't understand: if I go with implicit sharing, I'll need to use QSharedDataPointer to share my QSharedData, no?
I don't think I can simply skip factorizing items that are created by different MainObjects. It's not only the 20 bytes of the objects, it is also the data they point to: the whole content of their QMap.
I'm not in the easy situation where MainObjects pass a shared object to each other; they create it on their own (in their corner, somewhere in the heap), and if they create the same MyElement, it is a shame not to factorize it. You see what I mean? By default I may never be able to use implicit sharing if I don't have some kind of manager or cache in the middle to merge 2 distinct MyElements into one that the MainObjects will be able to share...
I think you may not get the use case...
My MainObjects dynamically create some smaller objects, MyElements; they don't pass them to each other, they do some computation on them and store them in a list. The thing is that those MainObjects will many times create the same MyElement (by value) that another MainObject has already created. But they have no way to know about that, so they can't just share it directly. I don't see how you could achieve it without an intermediate in the middle.
Are you basically telling me that your way would be to never share any small objects, even if you have a billion of them?
-
I'm not attempting to be insulting (just in case): I don't think you understood what @kshegunov has been saying.
Regarding performance or anything actually. Instead of asking him/her/us this - why not just see how capable your hardware / design is, and where it falls down first?
Can you not do something like break your thoughts out? This feels so crazily complicated just for a couple of object types to be stored in memory?
std::vector<MainObject> mainObjects
std::vector<MyElement> elements
std::map<MainObject*, MyElement*> objectElementMap
https://en.cppreference.com/w/cpp/container - these are the std:: library containers; the problem of storing objects in memory is totally solved, you just have to decide what is appropriate
I'll reiterate @kshegunov's statement:
You're overengineering a solution to a problem that you don't have.
You are pre-optimizing something you don't know. I'd argue you need to simplify your design first, but who knows - maybe I just don't get it. I know I don't understand what you want / need / what would be best.
Stop trying to think how to squeeze every last drop - you are in C++ - you will get your time to optimize, but just choose a damn std::vector and be done with it =) when it breaks, get another and maybe even a thread.
You're missing the point of just how fast C++ is - and also how little memory it uses (as opposed to frameworks and their ungodly GC .. et al.)
I'm not sure there's a clear answer but I'd recommend:
- stop trying to prematurely optimize (it likely won't be a problem from what I can best tell)
- take a look at your design
- simplify
-
@6thC
Well, I've spent some time (1 month) understanding and analysing a Java application that I'm supposed to port to C++. It is a heavy memory/CPU consuming app that does many physics calculations.
The Java app has been used for, let's say, the past 15 years and has been patched from time to time to improve performance, either to limit the memory usage or to parallelize calculations to gain speed.
I've been told that the WeakHashMap has drastically decreased the memory usage, and from what I saw, avoiding millions of copies, I think it makes sense, whatever the language used.
I have a limited amount of time for quite a complex project (if I fail or am late there are daily penalty fees...). I believe it is at the design phase, at the beginning, that you should consider most of the "actors / features" you will or may use, to plan a way to combine them properly. Centralizing most of the elementary data structures doesn't seem obvious to me and is better planned in advance.
To limit the risk, since I must use no more memory than the current app and have at least equivalent speed, I prefer to start straight away with the optimizations they used in Java. I'll be free later, if I have some time left before the deadline, to check without them if there is a feeling they are not needed and that saving a little memory impacts performance a lot.
And come on, Java is not as efficient as C++, but nowadays it has been quite optimized; I don't believe that only because I'm using C++ I can just throw out some optimization techniques that have been thought through, tested and validated in Java.
-
I have a problem with this statement: "told that the WeakHashMap has drastically decreased the memory usage" - mainly because this may be a perfectly valid statement - for Java.
You aren't in Java anymore. You are closer to the hardware, things work fast. This is not me being a dick about language hating - it's just a fact. Yes, start with good design - that's why I suggested something completely different: because I'm not confident I can help you if I cannot understand the basic components - all I see is 2 related objects and no basic types etc., I cannot see the purpose or anything, just the one scoped view / context you've presented.
I have no idea of the big picture, what or why you'd store things like this - I am entirely prepared for there being a valid reason, though I cannot see it myself. I'm not being condescending - I have tried reading it so many times, but I have my own work to do too.
So being limited to the smaller context and using your own object context/concepts: I tried to show you a way of sharing objects/data where you could:
- use vectors - which:
  - use very minimal memory amounts
  - store objects in contiguous memory - why is this important? to utilize the CPU caches and not access the main memory, which is much slower
- maintain object relationships
- be fast
- not have to think too hard on algorithms or access - the plan is to just fly through it (very, very quickly)
The map I suggested was specifically:
std::map<MainObject*, MyElement*> objectElementMap
keyed on MainObject - so we can have multiple MyElements but we can objectElementMap.find(MainObject* - i.e. this);
get the "shared" object* - do whatever resource guarding to not totally seg fault or corrupt your object state.it's not a map of object instances, just a ptr, ptr map and seriously, if you are processing everything anyhow, you could probably just fly through a vector<pair<ptr,ptr>> anyhow.
You have my sympathies for being pushed into a shit spot by the sounds of things but - rush jobs - get rushed.
It doesn't need to be shoddy, and I was just trying to break out your thinking.
We 100% are not here telling you to throw out design or optimization thoughts - everything we have been telling you is with lightweight efficiency and fast access in mind...
Prove you have a performance problem first. If you are stuck to your design, what are we even talking about - get it down and running already. Start collecting execution times and prove performance problems are even a concern.
I do wish to help, I'm not sure I am at this point, I'm not here to condescend but do want to help. I abuse the shit out of my CPU and well, who knows - you can always make a cpu killer problem - yours doesn't sound that. I wouldn't be worried can c++ keep up with java though. It will beat the pants off it honestly.
Java might be good for RAD (I think I just caught something saying that) - but everything else I hate. The memory footprint, the GC... sorry. GC was just a horrible and shit idea. I much prefer to pay for what is used / what you make and to clean up what you make.
That's why we use Qt. C++ && Qt == fast rad gui
Anyhow, not sure I'm helping anymore, good luck. Again, I wouldn't worry about performance / at all / prematurely. Prove it. Once you have one you will know and if you can't see and feel it - you can measure it at least.
How can you measure a potential performance issue without a running process? It's all theory at that stage and sounds like you've been given a directive "get it done"
-
@6thC
The way I understood it, the main concern is memory size, not CPU usage. I don't see how your approach covers that. How can you safely re-use objects in a vector, and know when these objects can be safely destroyed, unless you add some kind of key-access-approach and reference counting?
@6thC said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
use vectors - which:
use very minimal memory amounts
That dies as soon as you use a Qt-container within MyElement, such as QMap/QHash. So you will have the actual content of the objects all over the heap anyway.
If the primary goal were performance, I'd agree with you - use contiguous memory and utilize the memory caches. But that doesn't seem to be the case here.
-
@6thC
What does RAD mean?
What I'm trying to achieve here is to reduce the memory that the app will use.
So let me give you the big picture again. I have a Vector of 150 MainObjects. During a simulation, which I have to manage sequentially, those MainObjects will call an external Fortran application that will do some heavy computations (between 1 and 6 seconds) and then send back a MyElement that they (the MainObjects) will store.
So the MyElements are different instances. They can't be implicitly shared as they're not passed from one MainObject to another.
Now here are some figures from a real use case I got yesterday:
[MB_TRACE] Nb elem in cache : 5059084 (reused: 39457190, nb keys: 5047742
So: there are 5059084 + 39457190 MyElements that are created. Almost 45 million!
If I do nothing, I have 45 million distinct MyElements created in distinct parts of the heap that don't share any of their content.
MyElement is a double plus a map, so 20 bytes. The map entries are (short, double); let's say I have 10 entries, that means 100 bytes. So in total I get 120 bytes.
120 bytes * 45 million = 5.03 GB
That is just too much!
What the log is telling me is that in fact there are only 5 million distinct MyElements (in value) within those 45 million.
So I'm wasting 40 million times the 100 bytes of content of the MyElements' maps.
Do you see the point now?
Whether I'm in Java or in C++, it doesn't change anything to that fact: I don't want to have 40 million times 100 bytes in my heap that could be merged (that's 4 GB lost).
So I need an intermediate object that plays the role of a Cache.
When a MainObject gets back a brand new MyElement (from the external app), it must ask the cache whether someone has already created this MyElement, and if that is the case the Cache sends back a pointer to the SHARED instance of the MyElement. The MainObject will then destroy the brand new MyElement it got from the external app and use the one from the Cache.
I can't describe it any better. For me the need is obvious...
Because potentially I wish to run several simulations in my app and keep their results in memory. I can't just do that 5 times if each one eats like 6 GB.
We have access to big workstations, but what's the point of wasting memory... you would rather use a server that could store up to 20 simulation results....
Anyway, I'm going to implement a centralized cache to store the MyElements. The goal is just to share the MyElements. So the basic structures that come to mind would be a Set or a Vector.
This is just not efficient, as I'll have 5 million entries and will need to find particular instances all the time (comparing by value). A map also wouldn't be efficient. For me there is no doubt that the best way to go is a Hash, especially since I know that I have a good hashing function that gives me less than 0.5% collisions (100 − 5047742 × 100 / 5059084 ≈ 0.22%).
Now the question is (and that is why I started this post) which technology to use within this hash... Implicit shared data (QSharedDataPointer) or standard smart pointers (QWeakPointer and QSharedPointer).
Well the advantage I see with the standard smart pointer is the QWeakPointer.
If my cache stores only QWeakPointers, that means it doesn't "own" the data it is storing, which means that when nobody is using the data the WeakPointer becomes null and thus it is possible to remove that entry. That is what I am achieving with the code I've developed. You might find it over-engineered but that is what it does and what I'm after.
You didn't explain to me (and this is what I would like you to do, if it is possible) how I could do the same using implicit sharing within the Cache. I'm not familiar with implicit sharing; I'm a passive user of it, just using the Qt objects that already use implicit sharing behind the scenes (QString, QMap...).
If I have understood correctly, if I'd like to use implicit sharing in my case, I would make MyElement derive from QSharedData and so my cache would become a QHash<QSharedDataPointer<MyElement>, QSharedDataPointer<MyElement>>
So if I'm right, in that case how would you know that an entry in my QHash is not used anymore? In other words, that the reference count of the MyElement has dropped to 2. (I say 2 because of 1 for the key + 1 for the value QSharedDataPointer.)
I don't see the solution, I would say like this that it wouldn't be possible....
Let me know if you see a way...
-
@Asperamanca
Thanks for the understanding ;) haha
Indeed, the need here is only to decrease memory usage, and preferably in the most optimized way in terms of access time.
As for the CPU usage, well, it doesn't depend on me; the consuming part is the external Fortran app. The only thing I can do to boost the performance is to parallelize as many instances of those calculations as possible.
Basically I'm achieving this by using lazy evaluation for the calculations of the MyElements. I opened a post a few weeks ago on this topic. (this one)
Some might also find it too over-engineered, but well, it is what it is and does the job perfectly. I mean, I hope it will; my app is not implemented yet... (I've at least a few months of work before being able to test.)
-
Short answer on implicit sharing: It's not for your use case, because the use count would never automatically fall back to zero, because of the cache.
Basically, implicit sharing uses the same principles as shared pointers under the hood. The advantage is that you can completely hide it from users of your class, the users can simply value-copy your classes for cheap. The class will automatically create deep copies whenever a non-const member function accessing the data is called.
It's wonderful when you follow the standard implementation pattern: Private data member derived from QSharedData, QSharedDataPointer as only data member of the main class, consistent use of 'const' for methods.
Lacking a weak pointer equivalent of QSharedDataPointer, and the ability to manipulate the reference count, I would keep my hands off in your case. Use QSharedPointer or std::shared_ptr instead. If you get tired of writing std::shared_ptr<MyElement>, you can make a typedef.
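For reference, the standard implementation pattern I mean looks roughly like this (a generic sketch with made-up class names, not a suggestion for the cache itself):

#include <QMap>
#include <QSharedData>
#include <QSharedDataPointer>
#include <QSharedPointer>

class MyElementData : public QSharedData
{
public:
    QMap<ushort, double> properties;
    double mass = 0.0;
};

class MyElementValue                                 // hypothetical value-type wrapper, not your class
{
public:
    MyElementValue() : d(new MyElementData) {}

    double mass() const { return d->mass; }          // const access: the data stays shared
    void setMass(double m) { d->mass = m; }          // non-const access detaches (deep copy)

private:
    QSharedDataPointer<MyElementData> d;             // the only data member
};

// And the typedef mentioned above, if you go with shared pointers instead:
class MyElement;
typedef QSharedPointer<MyElement> MyElementPtr;      // or: using MyElementPtr = std::shared_ptr<MyElement>;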
-
@Asperamanca
OK, thanks for the answer; that is what I thought for the implicit sharing, but I felt maybe there could be a way...
I'm using the C++11 "using" keyword nowadays instead of typedef; I find it more convenient for functors and more readable in general.
I'm also using QSharedPointer and QWeakPointer instead of the std ones directly. I guess/hope they are using them under the hood. I just prefer to stay fully Qt in my code if possible. I think I'm only using std for the math library and std::sort.
-
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
The only thing I can do to boost the performance is to parallelize as many instances of those calculations as possible.
About this: Your current approach using a mutex to protect your cache-hash could become a bottleneck. I would consider splitting operations on the cache cleanly into read and write operations, and use QReadLocker and QWriteLocker, since based on your numbers, 9 out of 10 times an object that you need should already exist in cache (which would make the access a read-only thing).
In addition, you could then further optimize by balancing the number of write operations vs. the length of a single write operation. A way to do this would be to delete cache entries not directly, but via a delete queue: You put entries for deleting into a queue, and process the queue at defined intervals. You can then fine-tune how often the queue is processed.
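Sketched out, it could look something like this (assuming your WeakCacheKey/SharedObject classes and the qHash/operator== overloads you posted; the method and member names here are made up):

#include <QHash>
#include <QQueue>
#include <QReadWriteLock>
#include <QReadLocker>
#include <QWriteLocker>
#include <QSharedPointer>
#include <QWeakPointer>
#include "SharedObject.h"     // your classes
#include "WeakCacheKey.h"

class CentralizingWeakCacheSketch
{
public:
    QSharedPointer<SharedObject> getCentralizedValue(const QSharedPointer<SharedObject> &candidate)
    {
        QSharedPointer<WeakCacheKey> key(new WeakCacheKey(candidate));
        {
            QReadLocker readLock(&_lock);                 // common case: the value is already cached
            QSharedPointer<SharedObject> existing = _hash.value(key).toStrongRef();
            if (!existing.isNull())
                return existing;
        }
        QWriteLocker writeLock(&_lock);                   // rare case: we need to insert
        QSharedPointer<SharedObject> existing = _hash.value(key).toStrongRef();
        if (!existing.isNull())                           // re-check: another thread may have inserted
            return existing;                              // it between the two locks
        _hash.insert(key, candidate.toWeakRef());
        return candidate;
    }

    void remove(const QSharedPointer<WeakCacheKey> &key)
    {
        QWriteLocker writeLock(&_lock);
        _obsoleteKeys.enqueue(key);                       // don't remove right away, just queue it
        if (_obsoleteKeys.size() >= kPurgeThreshold)
            purge();                                      // process the queue in batches
    }

private:
    void purge()                                          // caller already holds the write lock
    {
        while (!_obsoleteKeys.isEmpty())
            _hash.remove(_obsoleteKeys.dequeue());
    }

    static const int kPurgeThreshold = 1000;              // tune this later
    QReadWriteLock _lock;
    QHash<QSharedPointer<WeakCacheKey>, QWeakPointer<SharedObject>> _hash;
    QQueue<QSharedPointer<WeakCacheKey>> _obsoleteKeys;
};
-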
@Asperamanca
Wow, cool, I wasn't aware that QReadWriteLock existed! It will definitely improve performance as there should be more reading operations than writing ones (maybe by a factor of 10), and yeah, reading shouldn't be blocking as long as nobody is trying to write.
Thanks for that.
The idea of queueing the deletion is also great! I can just do it like every 1000 entries or more (as you said I can tune that later)
Cheers!
-
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Do you see the point now?
I don't!
QMap and all the Qt containers are implicitly shared already. Non-writing calls to the map will not cause a memory copy, and all the objects are going to point to the same structure in the heap. Your calculation of gigabytes of data is just bogus.
Whether I'm in Java or in C++, it doesn't change anything to that fact: I don't want to have 40 million times 100 bytes in my heap that could be merged (that's 4 GB lost).
Sure you don't want to, however it would be useful to get familiar with the specifics of the language you're now using and what is happening behind the scenes before you decide to micro-optimize something that's not even a problem. On that note, did you create a test case that demonstrates how much faster your weak-referenced, mutex-protected cache is and how much less memory it uses compared to what I suggested - directly passing those objects by value? And I mean a test case, not some calculations that we run on fingers ...
So I need an intermediate object that plays the role of a Cache.
No, you don't need that, and I fail to see why we are continuing this argument. Run a test, and then show me your aces! Show me how fast that cache is and how much memory it spares!
I mean, I've been coding C++ for more than 10 years, convince me that I should throw that experience away and trust you instead.
Now the question is (and that is why I started this post) which technology to use within this hash... Implicit shared data (QSharedDataPointer) or standard smart pointers (QWeakPointer and QSharedPointer).
Moot due to the above.
If I have understood correctly, if I'd like to use implicit sharing in my case, I would make MyElement derive from QSharedData and so my cache would become a
You have understood correctly. You still don't need it, but you can do it like this.