Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?
-
@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Yes, and QSet is already a hash.
Well, true; I'm used to std::set, which is ordered...
But anyway, I would still have the issue of knowing which Element is no longer used by any MainObject.
Check my code, I'm talking about this feature that is important in the use case:

```cpp
SharedObject::~SharedObject()
{
    qDebug() << "[SharedObject::~SharedObject] destroying " << _value;
    if (_cache)
        _cache->remove(_key);
}

void CentralizingWeakCache::remove(const QSharedPointer<WeakCacheKey> &key)
{
    _secureOsoleteStack.lockForWrite();
    _obsoleteKeys[_nextObsoleteKeyIndex++] = key;
    if (_nextObsoleteKeyIndex == _sizeMaxOfObsoleteStack)
    {
        _secureOsoleteStack.unlock();
        _cleanUpCache();
    }
    else
        _secureOsoleteStack.unlock();
    qDebug() << "[CentralizingWeakCache::handleSharedObjectDestruction] schedule removal";
}

void CentralizingWeakCache::_cleanUpCache()
{
    qDebug() << "[CentralizingWeakCache::_cleanUpCache] cleanUp: we've " << _sizeMaxOfObsoleteStack << " obsolete keys...";
    QWriteLocker lockCache(&_secureCache);
    QWriteLocker lockStack(&_secureOsoleteStack); // write lock cause we will "unlink" its data (_nextObsoleteKeyIndex back to 0)
    for (ushort idx = 0; idx < _sizeMaxOfObsoleteStack; ++idx)
        _cache.remove(_obsoleteKeys[idx]);
    _nextObsoleteKeyIndex = 0;
}
```
@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Fine, so the thing you want to share is not the elements per se, but the property lists, correct. Then do so!
Firstly, let's redesign a bit. You want the properties in a vector, not in a map, so the memory's contiguous and you can iterate over it quickly. Ordering is an afterthought that can be done once, after the property list is initialized:

Those are small improvements that I would only consider if I'm not happy with the overall performance...
But in the end, I would have to re-create a map on top of my set, so I don't see the point.
I need to be able to query any value of my enum to see if it is included in the properties. I want to be able to access the values by giving the enum key...

@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
See above, all from the point the object is constructed initially can then be done with simple lockless copies by passing it by value ...
Why do you say lockless? You do use a QReadWriteLock, exactly like I'm doing in my solution.
Then see above: passing by value makes me lose the information of having a dangling pointer, which can be tested via a WeakPointer. By value I will never have the equivalent of a null WeakPointer...

@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
It does what it's supposed to, it shows how the thread serialization affects performance.
Well, your second post maybe, but the one comparing my pseudo use case using ThreadCache and ThreadCopy is not fair, as ThreadCopy didn't use the cache and thus the QReadWriteLock...
@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Humongous. The memory allocation for the hash is done in one single step for all elements, not in million calls to the runtime! Also using the stack frees you from any obligations to remember or think of when to call delete, which is one of the points to wrap pointers to heap objects in stack objects (i.e. the beloved smart pointers).
well sorry mate but you're wrong...
This is the QHash code for insertion:

```cpp
template <class Key, class T>
Q_INLINE_TEMPLATE typename QHash<Key, T>::iterator QHash<Key, T>::insert(const Key &akey, const T &avalue)
{
    detach();

    uint h;
    Node **node = findNode(akey, &h);
    if (*node == e) {
        if (d->willGrow())
            node = findNode(akey, h);
        return iterator(createNode(h, akey, avalue, node));
    }

    if (!std::is_same<T, QHashDummyValue>::value)
        (*node)->value = avalue;
    return iterator(*node);
}
```
And what does createNode do?
```cpp
template <class Key, class T>
Q_INLINE_TEMPLATE typename QHash<Key, T>::Node *
QHash<Key, T>::createNode(uint ah, const Key &akey, const T &avalue, Node **anextNode)
{
    Node *node = new (d->allocateNode(alignOfNode())) Node(akey, avalue, ah, *anextNode);
    *anextNode = node;
    ++d->size;
    return node;
}
```
So your inserted elements go on the heap...
-
kshegunov Moderators replied to mbruel on 10 Aug 2018, 19:47, last edited by kshegunov 8 Oct 2018, 19:48
I give up. I'll only add one small flare before I go:
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Node *node = new (d->allocateNode(alignOfNode())) Node(akey, avalue, ah, *anextNode);
Do you know what a placement new is? If not, then you should google it.
-
@kshegunov
I've googled it... ':$
OK, I see... yeah, it's a single up-front allocation and I skipped that part, my bad...

I guess what you're doing with a QVector<QPair<PropertyType, double>> would work the same as a QMap<PropertyType, double> in terms of lookup in the cache, if I don't want to bother right now implementing a few handy methods to make the structure act like a map.
I suppose I could implement it that way and, instead of trying to merge an object, directly create it using the cache and the shared data. I'm going to look more into this.
But I'm still stuck on the cache-cleaning part. I need some kind of reference count on the items of the cache, so I know when I can remove an entry that became obsolete (when nobody but the cache uses it anymore...).
Do you think this would be achievable? I don't see how, without using a WeakPointer as a key...
-
Maybe I could implement the reference count myself... use a QHash with my PropertyList as key and a reference count as value.
Then in the destructor of Element, if it is using a property list from the cache (making sure it hasn't detached), I could release it from the cache, which would decrease the reference count... I can schedule my cleanup when it drops to 0.
What do you think about this approach?
I'll test it and compare the performance against my current implementation with pointers.
I'm just a bit concerned that I may have to be careful to know whether an Element's PropertyList has detached from the source...
-
@kshegunov
mea culpa mate...
I've tried to implement a solution with values only, doing the reference counting of the shared objects owned by my cache myself, as I said in my previous post.
It turns out I'm doing a few more copies (I'd say 3) than with the pointer version, and the performance is worse.
Here is what I've done:

SharedObject.h
```cpp
#ifndef SHAREDOBJECT_H
#define SHAREDOBJECT_H

//#define __DEBUG_TRACE__ 1

#include <QSharedPointer>
#include <QDebug>

class CentralizingCache;

class SharedObject
{
public:
    friend uint qHash(const SharedObject &cacheKey);
    friend class CentralizingCache; // to set the cache and the key

    explicit SharedObject(const QMap<ushort, double> &content = QMap<ushort, double>(),
                          bool isShared = false,
                          bool isCacheKey = false,
                          uint hashCode = 0);

    // shallow copy (the QMap is shared but can be detached)
    SharedObject(const SharedObject &other);
    SharedObject &operator=(const SharedObject &other);
    SharedObject(SharedObject &&other);

    void setContent(ushort property, double value);
    SharedObject getUnique();

    virtual ~SharedObject();

    virtual bool operator==(const SharedObject &other) const;

    QString str() const;

private:
    uint computeHashCode() const;
    SharedObject copyFromCache() const;
    void detach();
    void increaseCacheRefCount();
    void updateMass();

private:
    double _mass;
    QMap<ushort, double> _content;
    bool _isShared;
    bool _isCacheKey;
    mutable uint _hashCode;

    static CentralizingCache *sCentralizingCache;
};

#endif // SHAREDOBJECT_H
```
The SharedObject.cpp
#include "SharedObject.h" #include "CentralizingCache.h" CentralizingCache *SharedObject::sCentralizingCache = CentralizingCache::getInstance(); SharedObject::SharedObject(const QMap<ushort, double> &content, bool isShared, bool isCacheKey, uint hashCode) :_mass(0.), _content(content), _isShared(isShared), _isCacheKey(isCacheKey), _hashCode(hashCode) { updateMass(); } SharedObject::SharedObject(const SharedObject &other) { _mass = other._mass; _content = other._content; _isShared = other._isShared; _isCacheKey = other._isCacheKey; _hashCode = other._hashCode; increaseCacheRefCount(); } SharedObject &SharedObject::operator =(const SharedObject &other) { _mass = other._mass; _content = other._content; _isShared = other._isShared; _isCacheKey = other._isCacheKey; _hashCode = other._hashCode; increaseCacheRefCount(); return *this; } SharedObject::SharedObject(SharedObject &&other) { _mass = other._mass; _content = other._content; _isShared = other._isShared; _isCacheKey = other._isCacheKey; _hashCode = other._hashCode; other._mass = 0; } void SharedObject::setContent(ushort property, double value) { auto it = _content.find(property); if (it != _content.end() ) { if (it.value() != value) { detach(); it.value() = value; updateMass(); } } else { detach(); _content.insert(property, value); updateMass(); } } SharedObject SharedObject::getUnique() { return sCentralizingCache->getCentralizedValue(*this); } SharedObject::~SharedObject() { #ifdef __DEBUG_TRACE__ qDebug() << "[SharedObject::~SharedObject] destroying " << str(); #endif detach(); } #include <cmath> bool SharedObject::operator==(const SharedObject &other) const { if (_mass != other._mass) return false; if (_content.keys() != other._content.keys()) return false; for (auto it = _content.cbegin(), itEnd = _content.cend(), itOther = other._content.cbegin(); it != itEnd; ++it, ++itOther) { if (std::fabs(it.value() - itOther.value()) > 1e-5) return false; } return true; } QString SharedObject::str() const { QString str = 
QString("mass: %1, isShared: %2, isCacheKey: %3, (hashCode: %4) content: ").arg( _mass).arg(_isShared).arg(_isCacheKey).arg(_hashCode); for (auto it = _content.cbegin(), itEnd = _content.cend() ; it != itEnd; ++it) str += QString(" <%1 : %2>").arg(it.key()).arg(it.value()); return str; } uint SharedObject::computeHashCode() const { _hashCode = _mass; return _hashCode; } SharedObject SharedObject::copyFromCache() const { return SharedObject(_content, _isShared, false, _hashCode); } void SharedObject::detach() { if (_isShared && !_isCacheKey) { sCentralizingCache->remove(*this); _isShared = false; } } void SharedObject::increaseCacheRefCount() { if (_isShared && !_isCacheKey) sCentralizingCache->incRefCount(*this); } void SharedObject::updateMass() { for (double mass : _content.values()) _mass += mass; }
CentralizingCache.h
```cpp
#ifndef CENTRALIZINGCACHE_H
#define CENTRALIZINGCACHE_H

#include "Singleton.h"
#include <QHash>
#include <QReadWriteLock>
#include "SharedObject.h"

inline uint qHash(const SharedObject &elem)
{
    if (elem._isShared)
        return elem._hashCode;
    else
        return elem.computeHashCode();
}

class CentralizingCache : public Singleton<CentralizingCache>
{
public:
    friend class Singleton<CentralizingCache>;
    friend class SharedObject; // to use incRefCount

    CentralizingCache();

    SharedObject getCentralizedValue(SharedObject &srcElem);
    void remove(const SharedObject &elem);

    inline int size() const;
    void dump(const QString &msg = "");

private:
    void incRefCount(const SharedObject &elem);

private:
    QHash<SharedObject, uint> _cache;
    mutable QReadWriteLock _secureCache;
};

int CentralizingCache::size() const
{
    QReadLocker lockCache(&_secureCache);
    return _cache.size();
}

#endif // CENTRALIZINGCACHE_H
```
CentralizingCache.cpp
#include "CentralizingCache.h" CentralizingCache::CentralizingCache() :Singleton<CentralizingCache>(), _cache(), _secureCache() { } SharedObject CentralizingCache::getCentralizedValue(SharedObject &srcElem) { QWriteLocker lockCache(&_secureCache); auto it = _cache.find(srcElem); if (it != _cache.end()) { uint &val = it.value(); ++val; #ifdef __DEBUG_TRACE__ qDebug() << "[CentralizingCache::getCentralizedValue] returning shared value for " << srcElem.str() << ", nb shared = " << val; #endif return it.key().copyFromCache(); } else { srcElem._isShared = true; SharedObject cachedObject(srcElem); cachedObject._isCacheKey = true; _cache.insert(cachedObject, 0); // the increments are done in the copy operators #ifdef __DEBUG_TRACE__ qDebug() << "[CentralizingCache::getCentralizedValue] inserting new shared value for " << cachedObject.str(); #endif return srcElem; } } void CentralizingCache::remove(const SharedObject &elem) { QWriteLocker lockCache(&_secureCache); auto it = _cache.find(elem); if (it != _cache.end()) { uint &val = it.value(); --val; #ifdef __DEBUG_TRACE__ qDebug() << "[CentralizingCache::remove] releasing shared value for " << elem.str() << ", nb shared = " << val; #endif if (val == 0) { #ifdef __DEBUG_TRACE__ qDebug() << "[CentralizingCache::remove] removing shared value from cache for " << elem.str(); #endif _cache.erase(it); } } } void CentralizingCache::dump(const QString &msg) { qDebug() << "[CentralizingCache::dump] " << msg << " Size = " << _cache.size(); for (auto it = _cache.cbegin(), itEnd = _cache.cend() ; it != itEnd; ++it) qDebug() << "\t" << QString("- <nbShared: %1> <elem: %2>").arg(it.value()).arg(it.key().str()); } void CentralizingCache::incRefCount(const SharedObject &elem) { auto it = _cache.find(elem); if (it != _cache.end()) { uint &val = it.value(); ++val; #ifdef __DEBUG_TRACE__ qDebug() << "[CentralizingCache::incRefCount] increment refCount for " << elem.str(); #endif } }
And finally the test (main.cpp)
```cpp
#include <QCoreApplication>
#include "SharedObject.h"
#include "CentralizingCache.h"
#include <QDebug>
#include <QElapsedTimer>

static CentralizingCache *cache = CentralizingCache::getInstance();

void dumpElements(const QVector<SharedObject> &elems, const char *vectorName)
{
#ifdef __DEBUG_TRACE__
    qDebug() << "Dumping elements of " << vectorName;
    for (auto it = elems.cbegin(); it != elems.cend(); ++it)
        qDebug() << "\t - " << it->str();
    CentralizingCache::getInstance()->dump();
#endif
}

void test2(uint nbObjects)
{
    qDebug() << "\n\ntest2 with " << nbObjects << " (items in cache: " << cache->size() << ")";
    QElapsedTimer cacheTimer;
    cacheTimer.start();

    QVector<SharedObject> firstObjects;
    firstObjects.reserve(nbObjects);
    for (uint i = 0; i < nbObjects; ++i)
        firstObjects.append(SharedObject({ {i, i}, {i+1, i+1}, {i+2, i+2} }));
    dumpElements(firstObjects, "firstObjects");

    qDebug() << "Let's make them unique >>>>>>>>>>>>>>";
    for (uint i = 0; i < nbObjects; ++i) {
        SharedObject &obj = firstObjects[i];
#ifdef __DEBUG_TRACE__
        qDebug() << "Make unique elem #" << i << ": " << obj.str();
#endif
        firstObjects[i] = obj.getUnique();
    }
    qDebug() << "Let's make them unique <<<<<<<<<<<<<<<";
    dumpElements(firstObjects, "firstObjects");

    qDebug() << "Do some shared copies >>>>>>>>>>>>>>";
    QVector<SharedObject> sharedObjects;
    sharedObjects.reserve(nbObjects);
    for (uint i = 0; i < nbObjects; ++i)
        sharedObjects.append(cache->getCentralizedValue(firstObjects[i]));
    qDebug() << "Do some shared copies <<<<<<<<<<<<<<<";
    dumpElements(sharedObjects, "sharedObjects");

    SharedObject &detachedObject = firstObjects[nbObjects-1];
    detachedObject.setContent(42, 42);
    dumpElements(firstObjects, "\n\nDetaching last firstObjects");
    firstObjects[nbObjects-1] = detachedObject.getUnique();
    dumpElements(firstObjects, "\nMaking unique detached object");

    qDebug() << "Destroying last half of firstObjects >>>>>>>>>>>>>>";
    for (uint i = 0; i < nbObjects/2; ++i)
        firstObjects.pop_back();
    qDebug() << "Destroying last half of firstObjects <<<<<<<<<<<<<<<";
    dumpElements(firstObjects, "");

    qDebug() << "Destroying all shared copies >>>>>>>>>>>>>>";
    sharedObjects.clear();
    qDebug() << "Destroying all shared copies <<<<<<<<<<<<<<<";
#ifdef __DEBUG_TRACE__
    cache->dump();
#endif

    qDebug() << "\nExecution Time using CentralizingCache with " << nbObjects << " items: " << cacheTimer.elapsed();
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    test2(500);
    test2(5000);
    test2(50000);
    test2(500000);
    test2(5000000);
    return a.exec();
}
```
So here are the results in Release mode:
Execution Time using CentralizingCache with 500 items: 1
Execution Time using CentralizingCache with 5000 items: 8
Execution Time using CentralizingCache with 50000 items: 72
Execution Time using CentralizingCache with 500000 items: 705
Execution Time using CentralizingCache with 5000000 items: 7244
Same test with the Pointer version (I've updated the code to use the same test, it's available here)
Execution Time using CentralizingWeakCache with 500 items: 1
Execution Time using CentralizingWeakCache with 5000 items: 6
Execution Time using CentralizingWeakCache with 50000 items: 53
Execution Time using CentralizingWeakCache with 500000 items: 481
Execution Time using CentralizingWeakCache with 5000000 items: 4979
So with my tests, I'm getting better performance with the pointer version.
What am I doing wrong?
The test reflects the usage I will have of the SharedObject (Element).

PS: something is missing in the value version: I should use atomics to hold the reference count instead of a plain uint. I guess this would drop the performance a bit more.
PS2: one thing that would be great is if we could directly access the reference count on a QSharedData, but I suppose it is hidden...
-
There's something I'm missing: why add SharedObject::setContent to the class? You said that after setting it up the property list is not going to change. Did I misunderstand?

I mean, this function causes a detach in the property list, so you are actually working with a separate copy when you do that, not with the one you get from the cache.
-
Well, the property list is constant for the Elements (SharedObjects) that are in the cache, but it is not constant in the normal use case. Nobody should ever modify the entries in the cache; we can just share them more, add new constant entries, or delete some if nobody uses them anymore. So yes, they are constant, but only for those "special" ones.
Basically, some MainObjects create Elements and do some calculations on them, modifying the properties, and at some point they want to save the "final" state. It is this final state that I want to share and put in my cache. All the intermediate steps were just "transitional" (either on a single Element or using intermediate ones that will be deleted).
It can happen that a MainObject starts from a shared Element; I know it will then detach from the shared value, which is why I've coded a detach function in Element that is called from setContent.
I've used this behaviour in my main.cpp:

```cpp
SharedObject &detachedObject = firstObjects[nbObjects-1];
detachedObject.setContent(42, 42);
dumpElements(firstObjects, "\n\nDetaching last firstObjects");
firstObjects[nbObjects-1] = detachedObject.getUnique();
dumpElements(firstObjects, "\nMaking unique detached object");
```
So detachedObject is no longer a cached value as soon as I call setContent.
In setContent I do a detach to inform the cache, which decreases the reference count:
```cpp
void SharedObject::detach()
{
    if (_isShared && !_isCacheKey) {
        sCentralizingCache->remove(*this);
        _isShared = false;
    }
}
```
I'm free later on to reinsert the new value into the cache, as done by the line:

```cpp
firstObjects[nbObjects-1] = detachedObject.getUnique();
```
PS: this behaviour in the pointer version requires a clone of the Element before any modification. The same part of the cpp file looks like this:
```cpp
ObjPtr &detachedObject = firstObjects[nbObjects-1];
firstObjects[nbObjects-1] = detachedObject->clone();
detachedObject->setContent(42, 42);
dumpElements(firstObjects, "\n\nDetaching last firstObjects");
firstObjects[nbObjects-1] = cache->getCentralizedValue(firstObjects[nbObjects-1]);
dumpElements(firstObjects, "\nMaking unique detached object");
```
So I have to be careful to clone before any modification. On the other hand, I don't have to manage the detach in setContent, nor care about the reference count of the sharing in my cache. The smart pointers do the job for me: the Element itself is destroyed when nobody uses it anymore, and my cache gets informed at that point. That's the handy part of using a weak hash table ;)
-
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
one thing that would be great is if we could access directly the reference count on a QSharedData but I suppose it is hidden...
It's not hidden, it's public:

```cpp
QSharedData sharedObj;
int counter = sharedObj.ref.load();
```

but it's a bad idea to touch it - it's undocumented (thus subject to change without notice). For the containers (QVector, QMap, etc.) it is in the private class, so it's inaccessible there.

Something more that bothers me - if you're going to copy out and change the data, what's really the point of caching it to begin with? It would give you only a minute gain, if any. Or to rephrase: I'd start by doing a very quick test case where I have a real-world example of how many objects of a given type are in use at any one time (not in the future, at one moment) - no caching at all. Could you do that?
-
It's not hidden it's public:

```cpp
QSharedData sharedObj;
int counter = sharedObj.ref.load();
```

Well, good to know; it could save me from having to re-create a reference count...
I won't touch it, but at least I could know when the reference count is 1, which would mean the only existing instance is the one in my cache: nobody else is using it, so I could delete the entry from the cache.
As you said, it isn't documented, so I thought it was totally private, following the pImpl pattern. This should definitely be documented! It could be really useful in my kind of use case.

I'd start by doing a very quick test case where I have a real-world example of how many objects of a given type are in use at any one time (not in the future, at one moment) - no caching at all. Could you do that?
I've done it. Here are the numbers for one simulation :
[MB_TRACE] Nb elem in cache : 5059084 (reused: 39457190, nb keys: 5047742
So there is no doubt: I'm factorizing 45 million distinct copies into just 5 million entries in a cache. (This is where the 3 GB were coming from when I wasn't sharing the map of properties.)
(And a user would keep the app open and launch several simulations, so the sharing factor could be multiplied.)

Those (the instances in the cache) are the "important" instances that I store. The other ones, which detach, are just intermediate steps, so they don't matter. I just don't want to bother making two distinct classes and having to copy from the intermediate class to a final (const) one. It is easier to allow the flexibility of modification and thus to detach from the cache.
Anyway, if we go with this kind of implementation, do you see anything wrong or inefficient with the code I've posted? Because in the end, even if I save a unique allocation for each instance (up to 5 million of them), I have extra copies with the value version, and the performance in my use case is better with the pointer version...
I don't think relying on the QSharedData ref count would change anything significant in the implementation of my cache as a QHash, since I didn't protect my own ref count with atomics anyway.
-
@Asperamanca said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
In addition, you could then further optimize by balancing the number of write operations vs. the length of a single write operation. A way to do this would be to delete cache entries not directly, but via a delete queue: You put entries for deleting into a queue, and process the queue at defined intervals. You can then fine-tune how often the queue is processed.
Hi, I'm coming back to this because I think using a delete queue is actually a wrong pattern that can lead to a segmentation fault.
Indeed, suppose I have an Element in my cache, then the system stops using it, so its destruction implies its removal from the cache. If the removal is not immediate and the key is stored in a delete queue, imagine that for some reason the system recreates the same Element (same hash code, equal by comparison): we would then have the same weak key back in the cache. When the delete queue kicks in, it is the new shared Element that gets destroyed, and there may still be references to it in the system that will crash when we use them.
Do you see the point?
-
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
@Asperamanca said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
In addition, you could then further optimize by balancing the number of write operations vs. the length of a single write operation. A way to do this would be to delete cache entries not directly, but via a delete queue: You put entries for deleting into a queue, and process the queue at defined intervals. You can then fine-tune how often the queue is processed.
Hi, I'm coming back to this because I think using a delete queue is actually a wrong pattern that can lead to a segmentation fault.
Indeed, suppose I have an Element in my cache, then the system stops using it, so its destruction implies its removal from the cache. If the removal is not immediate and the key is stored in a delete queue, imagine that for some reason the system recreates the same Element (same hash code, equal by comparison): we would then have the same weak key back in the cache. When the delete queue kicks in, it is the new shared Element that gets destroyed, and there may still be references to it in the system that will crash when we use them.
Do you see the point?

When you check whether an element is included in the cache or not, you would also have to consider the delete queue.
To put it more clearly: The delete queue only marks items as "can be deleted anytime". The actual deletion has to be locked against methods accessing the cache. Then you should not have any issues.
EDIT: Oh, and if you try to obtain an item from the cache that's already in the delete queue, it has to be removed from the delete queue.
-
@Asperamanca
I've done some unit testing of the case I described above, and in fact it is safe, because the key is a newly allocated object: the key of a recreated object is not the same object as the one created previously that may sit in the delete queue. It is equal by value, but a different object.

```cpp
ObjPtr CentralizingWeakCache::getCentralizedValue(const ObjPtr &sharedPtr)
{
    ObjPtr centralizedValue;
    KeyPtr key(new WeakCacheKey(sharedPtr));

    QWriteLocker lockCache(&_secureCache);
    ObjWeakPtr cachedWeakPtr = _cache.value(key, ObjWeakPtr());
    if (!cachedWeakPtr.isNull())
        centralizedValue = cachedWeakPtr.toStrongRef();
    if (centralizedValue.isNull())
    {
        centralizedValue = sharedPtr;
        centralizedValue->setCacheKey(key);
#ifdef __DEBUG_TRACE__
        qDebug() << "[CentralizingWeakCache::getCentralizedValue] adding new value in cache : " << centralizedValue->str();
#endif
        _cache.insert(key, centralizedValue.toWeakRef());
    }
#ifdef __DEBUG_TRACE__
    else
        qDebug() << "[CentralizingWeakCache::getCentralizedValue] getting centralized value for : " << centralizedValue->str();
#endif
    return centralizedValue;
}
```
So the key is unique (a pointer), but the lookup in the cache is done by value. This means that if an entry is already in the cache, we get it and the new key is destroyed.
```cpp
ObjWeakPtr cachedWeakPtr = _cache.value(key, ObjWeakPtr());
```
I could do as you said and also check the delete queue when I query an object. This would avoid having an object both in the cache and in the delete queue, but I'm not sure it would be a good choice in terms of performance, and I doubt this case happens often anyway. I think it is more efficient to allow this duplication; the worst case is bounded by the size of the delete queue (which I have at 1024 for now). In comparison, the _cache may hold 5 million entries.