How to update a large model efficiently
-
"QHash<ID, int> is the index in the vector would have been much easier" -> would that even work? Then why is QPersistentModelIndex needed?
@ozcanay said in How to update a large model efficiently:
would that even work?
Why not? You're the owner of the model so you know when you modify the order and can update the hash. Don't see why you need persistent indexes here at all. They're needed for QSFPM to make sure that they're properly mapped to the new index after a sorting/filtering
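A minimal sketch of the bookkeeping described here, using `std::unordered_map` in place of `QHash` so it compiles without Qt (the `Order` struct and member names are illustrative, loosely following the thread; the Qt model calls are shown as comments where they would go):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct Order { std::string order_id_; double price_ = 0.0; };

// The model owns both containers, so every structural change goes through
// code that can keep the id -> row map in sync. No QPersistentModelIndex is
// needed for this: persistent indexes matter when *someone else* (e.g. a
// QSortFilterProxyModel) must track rows across layout changes.
struct OrderStore {
    std::vector<Order> orders_;
    std::unordered_map<std::string, std::size_t> row_of_;

    void upsert(const Order& o) {
        auto it = row_of_.find(o.order_id_);
        if (it == row_of_.end()) {
            // beginInsertRows({}, orders_.size(), orders_.size());
            row_of_[o.order_id_] = orders_.size();
            orders_.push_back(o);
            // endInsertRows();
        } else {
            orders_[it->second] = o;
            // emit dataChanged(index(row, 0), index(row, columnCount() - 1));
        }
    }
};
```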
-
My worry was that Qt itself, under the hood, could be messing with the model indices, changing them from time to time. Especially since I wrap my model in a QSortFilterProxyModel to filter and sort, I thought I would need QPersistentModelIndex.
-
@ozcanay said in How to update a large model efficiently:
My worry was that Qt itself, under the hood, could be messing with the model indices, changing them from time to time.
Again: you are the owner, you create the indexes, you modify the data - how should Qt be able to modify something in your structures without your knowing?
And QSortFilterProxyModel is there to NOT have the need to modify the source model.
-
Yes, you are right.
I made the following changes and it works:

```cpp
QVector<Order> orders_;
std::unordered_map<QString, QPair<Order, std::size_t>> order_index_map_;

void OrdersModel::process(const Order& order)
{
    const auto& order_id = order.order_id_;
    if (auto it = order_index_map_.find(order_id); it == order_index_map_.end()) {
        beginInsertRows({}, orders_.count(), orders_.count());
        orders_.push_back(order);
        endInsertRows();
        order_index_map_[order_id] = {order, orders_.size() - 1};
        emit orderEntryAdded(&order);
    } else {
        const int row_index = (*it).second.second;
        orders_[row_index] = order;
        // Note: the bottom-right index must use the last valid column,
        // i.e. column_count_ - 1, not column_count_.
        emit dataChanged(index(row_index, 0), index(row_index, column_count_ - 1));
    }
}
```

However, I aim to do this processing in a thread other than the UI thread, so as not to freeze the UI when there are lots of updates. I want to use QtConcurrent::run for this purpose.
-
@ozcanay said in How to update a large model efficiently:
However, I aim to do this processing in a thread other than the UI thread
This will not work.
-
Can you elaborate on why that won't work? If there is heavy processing to do on the UI thread, what should I do then?
@ozcanay said in How to update a large model efficiently:
Can you elaborate on why that won't work?
You must not modify any UI stuff outside the UI (main) thread. Emitting dataChanged() is therefore not possible outside the main thread.
Do your calculations outside the main thread and modify the model in the main thread. If modifying the model (which is more or less just a simple copy in your case) locks up your main thread, then you're doing something wrong. The only thing I can think of is a too-high data rate for your incoming data - but then it's somewhat useless to update the model that frequently, because no one can see the changes on the UI at all.
-
@Christian-Ehrlicher
I think you are absolutely right, as users won't even be able to see updates processed at this rate. So I need a way to throttle view updates. I am planning to keep a std::set of processed indices, use those indices to emit the dataChanged signal every X seconds (or milliseconds), and reset the set every time the timer fires. In theory this should work, since sometimes a single order gets updated 100 times in a matter of seconds, and each update currently forces a repaint of the view via the dataChanged signal.
-
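One way the dirty-row set might look, as a hedged sketch (class and method names are invented for illustration): rows touched since the last flush are collected in a `std::set`, and the periodic flush coalesces them into contiguous spans so one dataChanged per span suffices. In the real model the flush would run from a QTimer or timerEvent callback on the main thread.

```cpp
#include <set>
#include <utility>
#include <vector>

// Collects rows touched since the last flush. flush() coalesces adjacent
// rows into contiguous [first, last] spans, so instead of one dataChanged
// per incoming update you emit one per span per timer tick, then clear
// the set for the next interval.
class DirtyRows {
public:
    void mark(int row) { rows_.insert(row); }

    std::vector<std::pair<int, int>> flush() {
        std::vector<std::pair<int, int>> spans;
        for (int row : rows_) {
            if (!spans.empty() && spans.back().second + 1 == row)
                spans.back().second = row;     // extend the current span
            else
                spans.push_back({row, row});   // start a new span
        }
        rows_.clear();
        // For each span the model would then do:
        // emit dataChanged(index(span.first, 0),
        //                  index(span.second, columnCount() - 1));
        return spans;
    }
private:
    std::set<int> rows_;   // ordered and deduplicated
};
```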
I would do it in a similar way. Collect the updated data in a separate thread and batch-update the model every second or so. It can even be done in your model, with a custom setter for the second thread that collects the data and puts it in another container which gets read every second. Don't forget to use a mutex for this container access.
-
I have overridden the model's timerEvent and used startTimer() to trigger the timer event every second. I created a public function named acknowledge(...) that just stores the order in a map of the form <order_id, order>. If a price update to an order occurs multiple times, only the last update is taken into consideration. Let's call this map batched_orders. In the timer event callback, a private method called batchProcess iterates over batched_orders and processes the entries. This is not the most ideal solution to the problem (the best solution is what @Christian-Ehrlicher suggested); however, it seems to be doing the job for my application, and CPU usage dropped dramatically. @JonB is it possible to change the solution for this post?
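A sketch of how the described acknowledge()/batchProcess() pair might fit together; everything beyond the names mentioned in the post (the `OrderBatcher` class, the mutex member, return types) is an assumption. Since acknowledge() may be called from the feed thread, access to batched_orders is mutex-protected as advised above; batchProcess() would run from timerEvent on the main thread.

```cpp
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

struct Order { std::string order_id_; double price_ = 0.0; };

class OrderBatcher {
public:
    // Called for every incoming update (possibly from another thread).
    // Repeated updates to the same order overwrite each other, so only
    // the latest state per order_id survives until the next timer tick.
    void acknowledge(const Order& order) {
        std::lock_guard<std::mutex> lock(m_);
        batched_orders_[order.order_id_] = order;
    }

    // Called once per timer tick (e.g. from timerEvent in the model):
    // swaps out the coalesced batch, holding the mutex only briefly.
    std::vector<Order> batchProcess() {
        std::unordered_map<std::string, Order> batch;
        {
            std::lock_guard<std::mutex> lock(m_);
            batch.swap(batched_orders_);
        }
        std::vector<Order> out;
        out.reserve(batch.size());
        for (auto& [id, order] : batch)
            out.push_back(std::move(order));
        // Here the real model would insert/update rows and emit
        // dataChanged once per affected row (or span of rows).
        return out;
    }
private:
    std::mutex m_;
    std::unordered_map<std::string, Order> batched_orders_;
};
```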