Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?
-
@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
To be honest me too, a bit.
well no hard feelings, but it would be great to converge at some point ;)
Fine, I misunderstood, but do you think that a map of weak pointers to heap-allocated objects that are created as shared pointers is better than just QSet<Element>?
I have millions of Elements. Using a hash should be far more efficient, shouldn't it?
in fact you make me doubt, because the hash function is quite complex (it includes a log10, and iterates all my property list with some multiplications and unary operations...)
but this is done only once and I don't have many collisions (< 0.5%)
-
Well, I did a small test case for you, just to illustrate how contention over a global object eats up the CPU time. Here it goes:
First, I'm not interested in CPU time. I mean, that is not the first concern; the first concern is memory usage. (Of course I'll then take the fastest option.)
I've tuned your example, which is kind of cheating on the figures as it doesn't reflect my use case.
I've also added a Cache (by value) in the CopyThread:
the .h:

```cpp
#ifndef CACHETHREAD_H
#define CACHETHREAD_H

#include <QHash>
#include <QMap>
#include <QThread>
#include <QReadWriteLock>
#include <QRandomGenerator>
#include <QSharedData>

class Element : QSharedData
{
public:
    Element();
    Element(const Element &) = default;
    Element(Element &&) = default;
    Element &operator =(const Element &other) { mass = other.mass; properties = other.properties; return *this; }

public:
    double mass;
    QMap<ushort, double> properties;
};

inline Element::Element()
    : mass(0)
    , properties(QMap<ushort, double>({ {0,0.}, {1,0.}, {2,0.}, {3,0.}, {4,0.},
                                        {5,0.}, {6,0.}, {7,0.}, {8,0.}, {9,0.} }))
{
}

extern const uint iterations;

class CacheThread : public QThread
{
public:
    static QHash<uint, Element *> cache;
    static QReadWriteLock lock;
    QRandomGenerator idgen;

    CacheThread();
    uint processedItems;
    static QAtomicInteger<uint> cached;

    void run() override;
};

inline CacheThread::CacheThread()
    : QThread(), processedItems(0)
{
}

class CopyThread : public QThread
{
public:
    static QHash<uint, Element> cache;
    static QReadWriteLock lock;
    static QAtomicInteger<uint> cached;

    CopyThread();
    QRandomGenerator idgen;
    uint processedItems;

    void run() override;
};

inline CopyThread::CopyThread()
    : QThread(), processedItems(0)
{
}

#endif // CACHETHREAD_H
```
the cpp:
```cpp
#include "cachethread.h"
#include <QDebug>

QHash<uint, Element *> CacheThread::cache;
QReadWriteLock CacheThread::lock;
const uint iterations = 5000000;
QAtomicInteger<uint> CacheThread::cached = 0;

void CacheThread::run()
{
    qDebug() << QThread::currentThreadId();
    while (processedItems < iterations) {
        // Generate a key to check if exists
        uint id = idgen.bounded(0u, std::numeric_limits<uint>::max());
        // Our local element that we are going to use for comparison
        Element *newElement = new Element();

        // Read and try to find in hash
        lock.lockForRead();
        QHash<uint, Element *>::Iterator iterator = cache.find(id);
        if (iterator != cache.end()) {
            lock.unlock();
            // We have found it.
            newElement = iterator.value();
            processedItems++;
            cached++;
            // Here I'm supposed to do something with myElement, return it to main Thread
            continue;
        }

        // Not found, lock for writing to insert into the hash
        lock.unlock();
        processedItems++;
        lock.lockForWrite();
        cache.insert(id, newElement);
        lock.unlock();
    }
}

QHash<uint, Element> CopyThread::cache;
QReadWriteLock CopyThread::lock;
QAtomicInteger<uint> CopyThread::cached = 0;

void CopyThread::run()
{
    qDebug() << QThread::currentThreadId();
    while (processedItems < iterations) {
        // Generate a key (due to symmetry with CacheThread)
        uint id = idgen.bounded(0u, std::numeric_limits<uint>::max());
        // Create one element object
        Element myElement;

        // Read and try to find in hash
        lock.lockForRead();
        QHash<uint, Element>::Iterator iterator = cache.find(id);
        if (iterator != cache.end()) {
            lock.unlock();
            // We have found it.
            myElement = iterator.value();
            processedItems++;
            cached++;
            // Here I'm supposed to do something with myElement, return it to main Thread
            // I want to use the same QMap<ushort, double> properties all the time
            continue;
        }

        // Not found, lock for writing to insert into the hash
        lock.unlock();
        processedItems++;
        lock.lockForWrite();
        cache.insert(id, myElement);
        lock.unlock();
    }
}
```
In release mode, the Copy one is just a little bit better, but it is just a little bit worse in debug mode... Any idea why?
in release mode:

```
0x7fb47a725700
0x7fb479723700
0x7fb478f22700
0x7fb479f24700
Threads with caching (0.750145 of 4997101): 22217
0x7fb479723700
0x7fb479f24700
0x7fb478f22700
0x7fb47a725700
Threads with copy: 20201
```
in debug mode:

```
0x7f3288781700
0x7f3287f80700
0x7f3286f7e700
0x7f328777f700
Threads with caching (0.750145 of 4997101): 25873
0x7f3288781700
0x7f3286f7e700
0x7f328777f700
0x7f3287f80700
Threads with copy: 26774
```
The reason I've added back a Cache in the CopyThread is that the local myElement created will be given back to the main thread and stored in the queue of a MainObject. If I don't overwrite the local myElement with the one from the cache, then the 100 bytes of the properties are not shared. They are duplicated. We get back to the calculation I made before (the 4 GB...)
So now, even if it is a bit more efficient in release mode to store values rather than pointers, I think I lose the opportunity to purge my cache in an "easy" way (or even at all) when an Element is not used anymore by any MainObject.
I've explained this point before.
I don't have statistics on how often this would happen, I don't want to lose time hacking java (I don't like java either...) but my guess is that it will happen.
-
Then it bugs me: why did you choose a binary tree (i.e. QMap) to keep the properties instead of a vector? You just have more overhead. And, since your object is already tiny, what's wrong with caching only the properties and constructing it out of the property map?
I use a QMap because it is more convenient. I'll have on average 10 values out of an enum key space of 500 items. I like having it sorted and being able to test whether a key is included. As you said, as it is tiny, why bother... I just want to make sure it will be shared when two are the same. Again, this is why I've added a Cache in your CopyThread as well.
Then you don't have a care in the world, just making bitwise copies (as the map or vector or whatever would already be constant) ...
I don't get that
Says who? What prevents you from creating them on the stack (look at the test case)?
well, at one point the local object will end up in a Hash or in a Queue, so under the hood there will be a copy in the heap. What's the difference between doing that at the beginning or at the end? If it is removed from a Queue by value and put in another one, I may have two allocations, no? So better to do it first myself, and then all the containers would point to the same address in the heap.
-
@Asperamanca said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
About this: Your current approach using a mutex to protect your cache-hash could become a bottleneck. I would consider splitting operations on the cache cleanly into read and write operations, and use QReadLocker and QWriteLocker, since based on your numbers, 9 out of 10 times an object that you need should already exist in cache (which would make the access a read-only thing).
In addition, you could then further optimize by balancing the number of write operations vs. the length of a single write operation. A way to do this would be to delete cache entries not directly, but via a delete queue: You put entries for deleting into a queue, and process the queue at defined intervals. You can then fine-tune how often the queue is processed.

I've implemented this. I'm using a C array for the queue of entries I'll have to delete. I think it is the most performant way. I preallocate it with a fixed size, then I keep an index that comes back to the beginning once the size is reached and the clean-up is done. Kind of a circular buffer.
I've also updated the main.cpp to illustrate a bit more what the goal is: being able to merge identical objects created in distinct heap zones into a single one held by a "Cache" entity. It also shows the auto-deleting feature of the cache when nobody is using the values anymore.
Cheers again. Let me know if you have any comments. The code is here.
-
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
I have millions of Elements. Using a hash should be far more efficient, shouldn't it?
Yes, and QSet is already a hash.

in fact you make me doubt, because the hash function is quite complex (it includes a log10, and iterates all my property list with some multiplications and unary operations...) but this is done only once and I don't have many collisions (< 0.5%)
Fine, so the thing you want to share is not the elements per se, but the property lists, correct? Then do so!
Firstly, let's redesign a bit. You want the properties in a vector, not in a map, so the memory is contiguous and you can iterate over it quickly. Ordering is an afterthought that can be done once, after the property list is initialized:

```cpp
enum PropertyType { };

typedef QPair<PropertyType, double> ElementProperty;
typedef QVector<ElementProperty> ElementPropertyList;
typedef QSet<ElementPropertyList> ElementPropertyListCache;
```
Now, after you get the property list from Fortran or wherever, you sort it, and then look it up in the set:
```cpp
ElementPropertyList plist = ...; // Wherever it comes from
std::sort(plist.begin(), plist.end(), [] (const ElementProperty & a, const ElementProperty & b) -> bool {
    return a.first < b.first;
});
```
Then, you can look up the property list in the set to provide the data sharing you require:
```cpp
// Those are global
static ElementPropertyListCache propertyListCache;
static QReadWriteLock propertyListLock;

// This goes where the shared property list is to be retrieved
propertyListLock.lockForRead();
ElementPropertyListCache::ConstIterator i = propertyListCache.constFind(plist); // Try to retrieve the property list
bool found = (i != propertyListCache.constEnd());
propertyListLock.unlock();
if (!found) {
    // You didn't find it: relock for writing and add it to the cache
    propertyListLock.lockForWrite();
    i = propertyListCache.insert(plist); // Update the iterator too
    propertyListLock.unlock();
}

double mass = ...; // The mass wherever it comes from
Element myNewAndLightElement(mass, *i);
// Pass the element to wherever it is needed by value!
// No need for pointers and such, as the property list is implicitly shared inside Qt
```
Then the Element constructor would look like this:

```cpp
class Element
{
public:
    Element(double, const PropertyList &);

private:
    double mass;
    const PropertyList properties; //< Ordered by design; const so no data copying can occur later.
                                   //  If const fails to compile, remove it, just make sure no calls modify that field.
};

Element::Element(double m, const PropertyList & plist)
    : mass(m), properties(plist)
{
}
```
I've tuned your example, which is kind of cheating on the figures as it doesn't reflect my use case.
It does what it's supposed to: it shows how thread serialization affects performance.
I use a QMap because it is more convenient. I'll have in average 10 values on an enum key of 500 items. I like it to have it sorted and being able to test if a key is included. As you said, as it is tiny, why bother... I just want to make sure it will be shared when 2 are the same. Again this is why I've added a Cache also in your CopyThread.
More convenient doesn't mean better, faster, or smaller.
I don't get that
See above: from the point the object is constructed initially, everything can then be done with simple lockless copies by passing it by value ...
well, at one point the local object will end up in a Hash or in a Queue, so under the hood there will be a copy in the heap. What's the difference between doing that at the beginning or at the end? If it is removed from a Queue by value and put in another one, I may have two allocations, no? So better to do it first myself, and then all the containers would point to the same address in the heap.
Humongous. The memory allocation for the hash is done in one single step for all elements, not in a million calls to the runtime! Also, using the stack frees you from any obligation to remember or think of when to call delete, which is one of the points of wrapping pointers to heap objects in stack objects (i.e. the beloved smart pointers).
-
@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Yes, and QSet is already a hash.
well, true, I'm used to std::set, which is ordered...
but anyway, I would still have an issue being able to know which Element is not used anymore by any MainObject.
check my code, I'm talking about this feature that is important in the use case:

```cpp
SharedObject::~SharedObject()
{
    qDebug() << "[SharedObject::~SharedObject] destroying " << _value;
    if (_cache)
        _cache->remove(_key);
}

void CentralizingWeakCache::remove(const QSharedPointer<WeakCacheKey> &key)
{
    _secureOsoleteStack.lockForWrite();
    _obsoleteKeys[_nextObsoleteKeyIndex++] = key;
    if (_nextObsoleteKeyIndex == _sizeMaxOfObsoleteStack)
    {
        _secureOsoleteStack.unlock();
        _cleanUpCache();
    }
    else
        _secureOsoleteStack.unlock();
    qDebug() << "[CentralizingWeakCache::handleSharedObjectDestruction] schedule removal";
}

void CentralizingWeakCache::_cleanUpCache()
{
    qDebug() << "[CentralizingWeakCache::_cleanUpCache] cleanUp: we've " << _sizeMaxOfObsoleteStack << " obsolete keys...";
    QWriteLocker lockCache(&_secureCache);
    QWriteLocker lockStack(&_secureOsoleteStack); // write lock cause we will "unlink" its data (_nextObsoleteKeyIndex back to 0)
    for (ushort idx = 0; idx < _sizeMaxOfObsoleteStack ; ++idx)
        _cache.remove(_obsoleteKeys[idx]);
    _nextObsoleteKeyIndex = 0;
}
```
@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Fine, so the thing you want to share is not the elements per se, but the property lists, correct. Then do so!
Firstly, let's redesign a bit. You want the properties in a vector, not in a map, so the memory is contiguous and you can iterate over it quickly. Ordering is an afterthought that can be done once, after the property list is initialized:

Those are small improvements that I could consider only if I'm not happy with the overall performance...
But in the end, I would have to re-create a Map on top of my Set, so I don't see the point.
I need to be able to query any value of my enum to see if it is included in the properties. I want to be able to access the values by giving the enum key...

@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
See above, all from the point the object is constructed initially can then be done with simple lockless copies by passing it by value ...
why do you say lockless? You do use a QReadWriteLock, exactly as I do in my solution.
Then see above: passing by value makes me lose the information of having a dangling pointer, which can be tested via a WeakPointer. By value, I will never have the equivalent of a null WeakPointer...

@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
It does what it's supposed to, it shows how the thread serialization affects performance.
well, your second post maybe, but the comparison of my pseudo use case using CacheThread and CopyThread is not fair, as the CopyThread didn't use the cache and thus the QReadWriteLock...
@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
Humongous. The memory allocation for the hash is done in one single step for all elements, not in a million calls to the runtime! Also, using the stack frees you from any obligation to remember or think of when to call delete, which is one of the points of wrapping pointers to heap objects in stack objects (i.e. the beloved smart pointers).
well, sorry mate, but you're wrong...
This is the qhash code for insertion:

```cpp
template <class Key, class T>
Q_INLINE_TEMPLATE typename QHash<Key, T>::iterator QHash<Key, T>::insert(const Key &akey, const T &avalue)
{
    detach();

    uint h;
    Node **node = findNode(akey, &h);
    if (*node == e) {
        if (d->willGrow())
            node = findNode(akey, h);
        return iterator(createNode(h, akey, avalue, node));
    }

    if (!std::is_same<T, QHashDummyValue>::value)
        (*node)->value = avalue;
    return iterator(*node);
}
```
and what is doing createNode?
```cpp
template <class Key, class T>
Q_INLINE_TEMPLATE typename QHash<Key, T>::Node *
QHash<Key, T>::createNode(uint ah, const Key &akey, const T &avalue, Node **anextNode)
{
    Node *node = new (d->allocateNode(alignOfNode())) Node(akey, avalue, ah, *anextNode);
    *anextNode = node;
    ++d->size;
    return node;
}
```
So your inserted elements go on the heap...
-
I give up. I'll only add one small flare before I go:
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
```cpp
Node *node = new (d->allocateNode(alignOfNode())) Node(akey, avalue, ah, *anextNode);
```
Do you know what placement new is? If not, then you should google it.
-
@kshegunov
I've googled it... ':$
ok, I see... yeah, you skip the single allocation part, my bad...

I guess what you're doing with a QVector<QPair<PropertyType, double>> would work the same as a QMap<PropertyType, double> in terms of lookup in the cache, if I don't want to bother right now implementing a few handy methods to make the structure act like a map.
I suppose I could implement it that way and, instead of trying to merge an object, directly create it using the cache and the shared data. I'm going to look more into this.
But I'm still stuck on the part about cleaning the cache. I need some kind of reference count on the items of the cache, so I know when I can remove an entry that has become obsolete (when it isn't used by anybody other than the cache...)
do you think this would be achievable? I don't see how, without using a WeakPointer as a key...
-
Maybe I could implement the reference count myself... Use a QHash with my PropertyList as key and a reference count as value.
then in the destructor of Element, if it is using a property list from the cache (making sure it hasn't detached), I could release it from the cache, which would decrease the reference count... I can schedule my cleanUp when it drops to 0.
What do you think about this approach?
I'll test it and compare the performance against my current implementation with pointers.
I'm just a bit concerned that I may have to be careful to know if an Element's PropertyList has detached from the source...
-
@kshegunov
mea culpa mate...
I've tried to implement some kind of solution with values only, so, as I said in the previous post, doing the reference counting of the shared objects owned by my cache myself.
It turns out I'm doing a few more copies (I'd say 3) than with the pointer version, and the performance is worse than with the pointer version.
Here is what I've done:

SharedObject.h
```cpp
#ifndef SHAREDOBJECT_H
#define SHAREDOBJECT_H

//#define __DEBUG_TRACE__ 1

#include <QSharedPointer>
#include <QDebug>

class CentralizingCache;

class SharedObject
{
public:
    friend uint qHash(const SharedObject & cacheKey);
    friend class CentralizingCache; // to set the cache and the key

    explicit SharedObject(const QMap<ushort, double> &content = QMap<ushort, double>(),
                          bool isShared = false,
                          bool isCacheKey = false,
                          uint hashCode = 0);

    // shallow copy (the QMap is shared but can be detached)
    SharedObject(const SharedObject &other);
    SharedObject &operator =(const SharedObject &other);
    SharedObject(SharedObject &&other);

    void setContent(ushort property, double value);
    SharedObject getUnique();

    virtual ~SharedObject();
    virtual bool operator==(const SharedObject &other) const;

    QString str() const;

private:
    uint computeHashCode() const;
    SharedObject copyFromCache() const;
    void detach();
    void increaseCacheRefCount();
    void updateMass();

private:
    double _mass;
    QMap<ushort, double> _content;
    bool _isShared;
    bool _isCacheKey;
    mutable uint _hashCode;

    static CentralizingCache *sCentralizingCache;
};

#endif // SHAREDOBJECT_H
```
The SharedObject.cpp
```cpp
#include "SharedObject.h"
#include "CentralizingCache.h"
#include <cmath>

CentralizingCache *SharedObject::sCentralizingCache = CentralizingCache::getInstance();

SharedObject::SharedObject(const QMap<ushort, double> &content, bool isShared, bool isCacheKey, uint hashCode)
    : _mass(0.), _content(content), _isShared(isShared), _isCacheKey(isCacheKey), _hashCode(hashCode)
{
    updateMass();
}

SharedObject::SharedObject(const SharedObject &other)
{
    _mass = other._mass;
    _content = other._content;
    _isShared = other._isShared;
    _isCacheKey = other._isCacheKey;
    _hashCode = other._hashCode;
    increaseCacheRefCount();
}

SharedObject &SharedObject::operator =(const SharedObject &other)
{
    _mass = other._mass;
    _content = other._content;
    _isShared = other._isShared;
    _isCacheKey = other._isCacheKey;
    _hashCode = other._hashCode;
    increaseCacheRefCount();
    return *this;
}

SharedObject::SharedObject(SharedObject &&other)
{
    _mass = other._mass;
    _content = other._content;
    _isShared = other._isShared;
    _isCacheKey = other._isCacheKey;
    _hashCode = other._hashCode;
    other._mass = 0;
}

void SharedObject::setContent(ushort property, double value)
{
    auto it = _content.find(property);
    if (it != _content.end())
    {
        if (it.value() != value)
        {
            detach();
            it.value() = value;
            updateMass();
        }
    }
    else
    {
        detach();
        _content.insert(property, value);
        updateMass();
    }
}

SharedObject SharedObject::getUnique()
{
    return sCentralizingCache->getCentralizedValue(*this);
}

SharedObject::~SharedObject()
{
#ifdef __DEBUG_TRACE__
    qDebug() << "[SharedObject::~SharedObject] destroying " << str();
#endif
    detach();
}

bool SharedObject::operator==(const SharedObject &other) const
{
    if (_mass != other._mass)
        return false;
    if (_content.keys() != other._content.keys())
        return false;
    for (auto it = _content.cbegin(), itEnd = _content.cend(), itOther = other._content.cbegin();
         it != itEnd; ++it, ++itOther)
    {
        if (std::fabs(it.value() - itOther.value()) > 1e-5)
            return false;
    }
    return true;
}

QString SharedObject::str() const
{
    QString str = QString("mass: %1, isShared: %2, isCacheKey: %3, (hashCode: %4) content: ").arg(
                _mass).arg(_isShared).arg(_isCacheKey).arg(_hashCode);
    for (auto it = _content.cbegin(), itEnd = _content.cend(); it != itEnd; ++it)
        str += QString(" <%1 : %2>").arg(it.key()).arg(it.value());
    return str;
}

uint SharedObject::computeHashCode() const
{
    _hashCode = _mass;
    return _hashCode;
}

SharedObject SharedObject::copyFromCache() const
{
    return SharedObject(_content, _isShared, false, _hashCode);
}

void SharedObject::detach()
{
    if (_isShared && !_isCacheKey)
    {
        sCentralizingCache->remove(*this);
        _isShared = false;
    }
}

void SharedObject::increaseCacheRefCount()
{
    if (_isShared && !_isCacheKey)
        sCentralizingCache->incRefCount(*this);
}

void SharedObject::updateMass()
{
    for (double mass : _content.values())
        _mass += mass;
}
```
CentralizingCache.h
```cpp
#ifndef CENTRALIZINGCACHE_H
#define CENTRALIZINGCACHE_H

#include "Singleton.h"
#include <QHash>
#include <QReadWriteLock>
#include "SharedObject.h"

inline uint qHash(const SharedObject & elem)
{
    if (elem._isShared)
        return elem._hashCode;
    else
        return elem.computeHashCode();
}

class CentralizingCache : public Singleton<CentralizingCache>
{
public:
    friend class Singleton<CentralizingCache>;
    friend class SharedObject; // to use incRefCount

    CentralizingCache();

    SharedObject getCentralizedValue(SharedObject &srcElem);
    void remove(const SharedObject &elem);
    inline int size() const;
    void dump(const QString &msg = "");

private:
    void incRefCount(const SharedObject &elem);

private:
    QHash<SharedObject, uint> _cache;
    mutable QReadWriteLock _secureCache;
};

int CentralizingCache::size() const
{
    QReadLocker lockCache(&_secureCache);
    return _cache.size();
}

#endif // CENTRALIZINGCACHE_H
```
CentralizingCache.cpp
```cpp
#include "CentralizingCache.h"

CentralizingCache::CentralizingCache()
    : Singleton<CentralizingCache>(), _cache(), _secureCache()
{
}

SharedObject CentralizingCache::getCentralizedValue(SharedObject &srcElem)
{
    QWriteLocker lockCache(&_secureCache);
    auto it = _cache.find(srcElem);
    if (it != _cache.end())
    {
        uint &val = it.value();
        ++val;
#ifdef __DEBUG_TRACE__
        qDebug() << "[CentralizingCache::getCentralizedValue] returning shared value for "
                 << srcElem.str() << ", nb shared = " << val;
#endif
        return it.key().copyFromCache();
    }
    else
    {
        srcElem._isShared = true;
        SharedObject cachedObject(srcElem);
        cachedObject._isCacheKey = true;
        _cache.insert(cachedObject, 0); // the increments are done in the copy operators
#ifdef __DEBUG_TRACE__
        qDebug() << "[CentralizingCache::getCentralizedValue] inserting new shared value for "
                 << cachedObject.str();
#endif
        return srcElem;
    }
}

void CentralizingCache::remove(const SharedObject &elem)
{
    QWriteLocker lockCache(&_secureCache);
    auto it = _cache.find(elem);
    if (it != _cache.end())
    {
        uint &val = it.value();
        --val;
#ifdef __DEBUG_TRACE__
        qDebug() << "[CentralizingCache::remove] releasing shared value for "
                 << elem.str() << ", nb shared = " << val;
#endif
        if (val == 0)
        {
#ifdef __DEBUG_TRACE__
            qDebug() << "[CentralizingCache::remove] removing shared value from cache for " << elem.str();
#endif
            _cache.erase(it);
        }
    }
}

void CentralizingCache::dump(const QString &msg)
{
    qDebug() << "[CentralizingCache::dump] " << msg << " Size = " << _cache.size();
    for (auto it = _cache.cbegin(), itEnd = _cache.cend() ; it != itEnd; ++it)
        qDebug() << "\t" << QString("- <nbShared: %1> <elem: %2>").arg(it.value()).arg(it.key().str());
}

void CentralizingCache::incRefCount(const SharedObject &elem)
{
    auto it = _cache.find(elem);
    if (it != _cache.end())
    {
        uint &val = it.value();
        ++val;
#ifdef __DEBUG_TRACE__
        qDebug() << "[CentralizingCache::incRefCount] increment refCount for " << elem.str();
#endif
    }
}
```
And finally the test (main.cpp)
```cpp
#include <QCoreApplication>
#include "SharedObject.h"
#include "CentralizingCache.h"
#include <QDebug>
#include <QElapsedTimer>

static CentralizingCache *cache = CentralizingCache::getInstance();

void dumpElements(const QVector<SharedObject> &elems, const char * vectorName)
{
#ifdef __DEBUG_TRACE__
    qDebug() << "Dumping elements of " << vectorName;
    for (auto it = elems.cbegin(); it != elems.cend() ; ++it)
        qDebug() << "\t - " << it->str();
    CentralizingCache::getInstance()->dump();
#endif
}

void test2(uint nbObjects)
{
    qDebug() << "\n\ntest2 with " << nbObjects << " (items in cache: " << cache->size() << ")";
    QElapsedTimer cacheTimer;
    cacheTimer.start();

    QVector<SharedObject> firstObjects;
    firstObjects.reserve(nbObjects);
    for (uint i = 0 ; i < nbObjects ; ++i)
        firstObjects.append(SharedObject({ {i, i}, {i+1, i+1}, {i+2, i+2} }));
    dumpElements(firstObjects, "firstObjects");

    qDebug() << "Let's make them unique >>>>>>>>>>>>>>";
    for (uint i = 0 ; i < nbObjects ; ++i)
    {
        SharedObject &obj = firstObjects[i];
#ifdef __DEBUG_TRACE__
        qDebug() << "Make unique elem #" << i << ": " << obj.str();
#endif
        firstObjects[i] = obj.getUnique();
    }
    qDebug() << "Let's make them unique <<<<<<<<<<<<<<<";
    dumpElements(firstObjects, "firstObjects");

    qDebug() << "Do some shared copies >>>>>>>>>>>>>>";
    QVector<SharedObject> sharedObjects;
    sharedObjects.reserve(nbObjects);
    for (uint i = 0 ; i < nbObjects ; ++i)
        sharedObjects.append(cache->getCentralizedValue(firstObjects[i]));
    qDebug() << "Do some shared copies <<<<<<<<<<<<<<<";
    dumpElements(sharedObjects, "sharedObjects");

    SharedObject &detachedObject = firstObjects[nbObjects-1];
    detachedObject.setContent(42, 42);
    dumpElements(firstObjects, "\n\nDetaching last firstObjects");
    firstObjects[nbObjects-1] = detachedObject.getUnique();
    dumpElements(firstObjects, "\nMaking unique detached object");

    qDebug() << "Destroying last half of firstObjects >>>>>>>>>>>>>>";
    for (uint i = 0 ; i < nbObjects/2 ; ++i)
        firstObjects.pop_back();
    qDebug() << "Destroying last half of firstObjects <<<<<<<<<<<<<<<";
    dumpElements(firstObjects, "");

    qDebug() << "Destroying all shared copies >>>>>>>>>>>>>>";
    sharedObjects.clear();
    qDebug() << "Destroying all shared copies <<<<<<<<<<<<<<<";
#ifdef __DEBUG_TRACE__
    cache->dump();
#endif

    qDebug() << "\nExecution Time using CentralizingCache with " << nbObjects << " items: " << cacheTimer.elapsed();
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    test2(500);
    test2(5000);
    test2(50000);
    test2(500000);
    test2(5000000);
    return a.exec();
}
```
So here are the results in Release mode:
```
Execution Time using CentralizingCache with 500 items: 1
Execution Time using CentralizingCache with 5000 items: 8
Execution Time using CentralizingCache with 50000 items: 72
Execution Time using CentralizingCache with 500000 items: 705
Execution Time using CentralizingCache with 5000000 items: 7244
```
Same test with the Pointer version (I've updated the code to use the same test, it's available here)
```
Execution Time using CentralizingWeakCache with 500 items: 1
Execution Time using CentralizingWeakCache with 5000 items: 6
Execution Time using CentralizingWeakCache with 50000 items: 53
Execution Time using CentralizingWeakCache with 500000 items: 481
Execution Time using CentralizingWeakCache with 5000000 items: 4979
```
So with my tests, I'm having better performances using the Pointer version.
What am I doing wrong?
The test reflects the usage I will have of the SharedObject (Element).

PS: something is missing in the value version: I should use atomics to hold the reference count instead of a simple uint. I guess this would drop the performance a bit more.
PS2: one thing that would be great is if we could directly access the reference count on a QSharedData, but I suppose it is hidden...
-
There's something I'm missing: why add SharedObject::setContent to the class? You said that after setting it up the property list is not going to change. Did I misunderstand?

I mean, this function causes a detach in the property list, so you are actually working with a separate copy when you do that, not with the one that you get from the cache.
-
Well, the property list is constant for the Elements (SharedObject) that are in the cache, but it is not constant in the normal use case. Nobody should ever modify the entries in the cache. We can just share them more, add new constant entries, or delete some if nobody uses them anymore. So yeah, they are constant, but only for those "special" ones.
Basically, some MainObjects are creating Elements and doing some calculations on them, so modifying the properties, and at one point they want to save the "final" state. It's this final state that I want to share and put in my Cache. All the intermediate steps were just "transitional" (either on a single Element or using intermediate ones that will be deleted).
It can happen that a MainObject starts with a shared Element; I know that it will detach from the shared value, which is why I've coded a detach function in Element that is called in setContent.
I've used this behaviour in my main.cpp:

```cpp
SharedObject &detachedObject = firstObjects[nbObjects-1];
detachedObject.setContent(42, 42);
dumpElements(firstObjects, "\n\nDetaching last firstObjects");
firstObjects[nbObjects-1] = detachedObject.getUnique();
dumpElements(firstObjects, "\nMaking unique detached object");
```
So detachedObject is not a Cached value as soon as I call setContent.
in setContent I do a detach to inform the cache, so it decreases the reference count:
```cpp
void SharedObject::detach()
{
    if (_isShared && !_isCacheKey)
    {
        sCentralizingCache->remove(*this);
        _isShared = false;
    }
}
```
I'm free later on to reinsert the new value in the cache, as done by the line
```cpp
firstObjects[nbObjects-1] = detachedObject.getUnique();
```
PS: this behaviour in the pointer version requires a clone of the Element before any modification. The same code in the cpp file looks like this:
ObjPtr &detachedObject = firstObjects[nbObjects-1]; firstObjects[nbObjects-1] = detachedObject->clone(); detachedObject->setContent(42, 42); dumpElements(firstObjects, "\n\nDetaching last firstObjects"); firstObjects[nbObjects-1] = cache->getCentralizedValue(firstObjects[nbObjects-1]); dumpElements(firstObjects, "\nMaking unique detached object");
So I have to be careful to clone before any modification. On the other hand, I don't have to manage the detach in setContent, nor to care about the reference count of the sharing in my Cache. The smart pointers do the job for me, because the Element itself will be destroyed when nobody uses it anymore, and my cache will get informed at that point. That's the handy part of using a WeakHashTable ;)
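The weak-pointer flavour can be sketched in the same spirit with std::weak_ptr (again illustrative names, not the real classes): the cache holds only weak references, so the canonical instance dies as soon as the last user drops it, without the cache keeping it alive.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

struct Elem { std::string value; };

// Minimal value-centralizing cache built on std::weak_ptr -- analogous in
// spirit to the QWeakPointer-based cache discussed in this thread.
struct WeakCache {
    std::unordered_map<std::string, std::weak_ptr<Elem>> map;

    // Return the canonical shared instance for a value, creating it on miss.
    std::shared_ptr<Elem> centralize(const std::string &v) {
        auto &slot = map[v];
        if (auto existing = slot.lock())
            return existing;              // reuse the live canonical copy
        auto fresh = std::make_shared<Elem>(Elem{v});
        slot = fresh;                     // weak ref: cache does not keep it alive
        return fresh;
    }
};
```

(A real implementation would also purge expired slots, e.g. from a custom deleter, which is the part the pointer version gets "for free".)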
-
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
one thing that would be great is if we could access directly the reference count on a QSharedData but I suppose it is hidden...
It's not hidden, it's public:
```cpp
QSharedData sharedObj;
int counter = sharedObj.ref.load();
```
but it's a bad idea to touch it - it's undocumented (thus subject to change without notice). For the containers (QVector, QMap, etc.) it is in the private class, so it's inaccessible there.
Something more that bothers me: if you're going to copy out and change the data, what's really the point of caching it to begin with? It would give you only a minute gain, if any. Or to rephrase: I'd start by doing a very quick test case where I have a real-world example of how many objects of a given type are in use at any one time (not in the future, at one moment) - no caching at all. Could you do that?
-
It's not hidden, it's public:
```cpp
QSharedData sharedObj;
int counter = sharedObj.ref.load();
```
Well, good to know; it could save me from re-creating a reference count...
I won't touch it, but at least I could know when the reference count is 1, which would mean the only existing instance is the one in my cache, i.e. nobody else is using it, so I could delete the entry from the cache.
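As a sketch of that eviction idea, here is the same thing with std::shared_ptr::use_count() standing in for QSharedData's ref (illustrative names; and note that under concurrency use_count() is only advisory, so a real sweep would have to run with the cache lock held):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

struct Node { std::string value; };

// Cache that owns its entries strongly and periodically evicts the ones
// nobody else references anymore.
struct StrongCache {
    std::unordered_map<std::string, std::shared_ptr<Node>> map;

    // Drop every entry whose only remaining owner is the cache itself
    // (use_count() == 1 plays the role of "refcount is 1" above).
    void sweep() {
        for (auto it = map.begin(); it != map.end(); ) {
            if (it->second.use_count() == 1)
                it = map.erase(it);
            else
                ++it;
        }
    }
};
```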
As you said, it isn't documented, so I thought it was totally private, following the pImpl pattern. This should definitely be documented! It could be really useful in my kind of use case.
I'd start by doing a very quick test case where I have a real-world example of how many objects of a given type are in use at any one time (not in the future, at one moment) - no caching at all. Could you do that?
I've done it. Here are the numbers for one simulation:
[MB_TRACE] Nb elem in cache : 5059084 (reused: 39457190, nb keys: 5047742)
So there is no doubt: I'm factorizing 45 million distinct copies into just 5 million cache entries. (This is where the 3GB were coming from when I wasn't sharing the map of properties.)
(And a user would keep the app open and launch several simulations, so the sharing factor could be multiplied.)
Those (the instances in the cache) are the "important" instances that I store. The other ones that detach are just intermediate steps, so they don't matter. I just don't want to bother making two distinct classes and having to copy from the intermediate class to a final (const) one. It is just easier to allow the flexibility of modification and thus to detach from the cache.
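A quick sanity check of the sharing factor implied by those figures (plain arithmetic, nothing Qt-specific):

```cpp
#include <cassert>
#include <cstdint>

// Sharing statistics as reported by the trace line above.
struct SharingStats {
    std::uint64_t cached;   // distinct instances kept in the cache
    std::uint64_t reused;   // lookups served by an already-cached instance

    // Total instances the system would have allocated without the cache.
    std::uint64_t total() const { return cached + reused; }

    // Average number of users per cached instance.
    double factor() const { return double(total()) / double(cached); }
};
```

With cached = 5,059,084 and reused = 39,457,190, the factor comes out around 8.8: each cached instance replaces almost nine would-be copies, which is where the memory saving comes from.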
Anyway, if we just go with this kind of implementation, do you see anything wrong or inefficient in the code I've posted? Because in the end, even if I save a unique allocation for each instance (up to 5 million), I have extra copies with the value version, and in my use case the performance is better with the pointer version...
I don't think that relying on the QSharedData ref count would change anything significant in the implementation of my cache as a QHash, as I didn't protect with atomics the ref count I've implemented.
-
@Asperamanca said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
In addition, you could then further optimize by balancing the number of write operations vs. the length of a single write operation. A way to do this would be to delete cache entries not directly, but via a delete queue: You put entries for deleting into a queue, and process the queue at defined intervals. You can then fine-tune how often the queue is processed.
Hi, I'm coming back to this because I think using a delete queue is actually a wrong pattern that can lead to a segmentation fault.
Indeed, suppose I add an Element to my Cache, then the system stops using it, so its destruction implies its removal from the Cache. If the removal is not immediate and the key is stored in a delete queue, imagine that for some reason the system re-creates the same Element (same hash code, equal by comparison): we should then get the same weak key back in the cache. If the delete queue kicks in at that point, it is the new shared Element that will be destroyed, while there may still be references to it in the system that will crash when we use them.
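To make the hazard concrete, here is a minimal sketch (illustrative names) of a delete queue keyed by value: the deferred deletion erases whatever currently matches the value, including a re-created, still-live entry.

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <unordered_map>

// A cache whose deferred deletions are recorded by VALUE. This is the
// problematic pattern described above, not a recommended design.
struct ValueQueueCache {
    std::unordered_map<std::string, int> cache;   // value key -> payload
    std::deque<std::string> deleteQueue;

    void scheduleDelete(const std::string &key) { deleteQueue.push_back(key); }

    void processQueue() {
        for (const auto &key : deleteQueue)
            cache.erase(key);   // erases whatever currently matches the value!
        deleteQueue.clear();
    }
};
```

If an equal-by-value Element is re-inserted between scheduleDelete() and processQueue(), the queue wrongly removes the new, still-referenced entry.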
Do you see the point? -
@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
@Asperamanca said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:
In addition, you could then further optimize by balancing the number of write operations vs. the length of a single write operation. A way to do this would be to delete cache entries not directly, but via a delete queue: You put entries for deleting into a queue, and process the queue at defined intervals. You can then fine-tune how often the queue is processed.
Hi, I'm coming back to this because I think using a delete queue is actually a wrong pattern that can lead to a segmentation fault.
Indeed, suppose I add an Element to my Cache, then the system stops using it, so its destruction implies its removal from the Cache. If the removal is not immediate and the key is stored in a delete queue, imagine that for some reason the system re-creates the same Element (same hash code, equal by comparison): we should then get the same weak key back in the cache. If the delete queue kicks in at that point, it is the new shared Element that will be destroyed, while there may still be references to it in the system that will crash when we use them.
Do you see the point?
When you check whether an element is included in the cache or not, you would also have to consider the delete queue.
To put it more clearly: The delete queue only marks items as "can be deleted anytime". The actual deletion has to be locked against methods accessing the cache. Then you should not have any issues.
EDIT: Oh, and if you try to obtain an item from cache that's already in the delete queue, it has to be removed from the delete queue. -
@Asperamanca
I've done some unit testing of the case I described above, and in fact it is safe, because the key is a newly allocated object: the key of a re-created object is not the same as the one created previously that may still be in the delete queue. It is equal by value, but a different object.
```cpp
ObjPtr CentralizingWeakCache::getCentralizedValue(const ObjPtr &sharedPtr)
{
    ObjPtr centralizedValue;
    KeyPtr key(new WeakCacheKey(sharedPtr));
    QWriteLocker lockCache(&_secureCache);
    ObjWeakPtr cachedWeakPtr = _cache.value(key, ObjWeakPtr());
    if (!cachedWeakPtr.isNull())
        centralizedValue = cachedWeakPtr.toStrongRef();
    if (centralizedValue.isNull()) {
        centralizedValue = sharedPtr;
        centralizedValue->setCacheKey(key);
#ifdef __DEBUG_TRACE__
        qDebug() << "[CentralizingWeakCache::getCentralizedValue] adding new value in cache : " << centralizedValue->str();
#endif
        _cache.insert(key, centralizedValue.toWeakRef());
    }
#ifdef __DEBUG_TRACE__
    else
        qDebug() << "[CentralizingWeakCache::getCentralizedValue] getting centralized value for : " << centralizedValue->str();
#endif
    return centralizedValue;
}
```
So the key is unique (a pointer), but the lookup in the cache is done by value. This means that if an entry is already in the cache we get it, and the new key will be destroyed:
ObjWeakPtr cachedWeakPtr = _cache.value(key, ObjWeakPtr());
I could do as you said and also check the delete queue when I query an Object. This would avoid having an Object both in the cache and in the delete queue, but I'm not sure it would be a good choice in terms of performance, nor whether that case would even happen often. I think it is more efficient to allow this duplication; the worst-case overhead is the size of the delete queue (which I currently cap at 1024). In comparison, the _cache may hold 5 million entries.
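For completeness, the reason the pointer key makes this safe can be sketched like so (illustrative names): the queue holds the old key object itself, so processing it can never erase an entry stored under a different, newly allocated key, even when the two keys are equal by value.

```cpp
#include <cassert>
#include <deque>
#include <map>
#include <memory>
#include <string>

struct Key { std::string value; };
using KeyPtr = std::shared_ptr<Key>;

// Deferred deletions are recorded by pointer IDENTITY: std::map orders
// KeyPtr by the pointer itself, so two equal-by-value keys are distinct.
struct IdentityQueueCache {
    std::map<KeyPtr, int> cache;
    std::deque<KeyPtr> deleteQueue;

    void scheduleDelete(const KeyPtr &k) { deleteQueue.push_back(k); }

    void processQueue() {
        for (const auto &k : deleteQueue)
            cache.erase(k);            // erases only that exact key object
        deleteQueue.clear();
    }
};
```

This is the identity-vs-value distinction in a nutshell; the real cache additionally does its lookup by value under the write lock, as in getCentralizedValue above.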