QFile access and file deleted
-
Hello
I'm writing a sort of data logging program. I have a class I instantiate which appends a byte array to a file. I wanted the file to be closed automatically after it has been idle for some time. I implemented it this way:

class DatabaseFileWTimeout : public QObject
{
    Q_OBJECT
public:
    explicit DatabaseFileWTimeout(QString filename, QObject *parent = 0);
    ~DatabaseFileWTimeout();

    bool AppendData(QByteArray data);
    bool isValid() { return _isValid; }

signals:
    void DeleteMe(QString);

private slots:
    void timeout();

private:
    bool _isValid;
    QFile _file;
    QTimer _timer;
};

DatabaseFileWTimeout::DatabaseFileWTimeout(QString filename, QObject *parent) :
    QObject(parent),
    _file(filename, this),
    _timer(this)
{
    _isValid = false;

    if (!_file.open(QIODevice::Append))
        return;

    DebugManager::PrintLn("Started using file " + filename);

    QObject::connect(&_timer, &QTimer::timeout, this, &DatabaseFileWTimeout::timeout);
    _timer.setSingleShot(true);
    _timer.start(ServerConfig::Instance->FileOpenTimeoutMs());

    _isValid = true;
}

DatabaseFileWTimeout::~DatabaseFileWTimeout()
{
    DebugManager::PrintLn("Done using file " + _file.fileName());
}

bool DatabaseFileWTimeout::AppendData(QByteArray data)
{
    if (!_file.isOpen())
        return false;

    if (_file.write(data) != data.count())
        return false;

    _timer.start(ServerConfig::Instance->FileOpenTimeoutMs());

    return true;
}

void DatabaseFileWTimeout::timeout()
{
    emit DeleteMe(_file.fileName());
}
The "DeleteMe" signal is connected to a slot in the parent which actually deletes the object, thus closing the file.
Now, the timer works as expected. The destructor is called correctly.
The problem is that I thought that the open file was "locked" by the application, so that, as long as it was open, every write operation from another program would fail.
Moreover, if I manually delete the file before it is closed, the program doesn't detect this: the write operation succeeds, but no data is written to the disk.
Is this the expected behavior? How can I "block" a file until it is closed?
My OS, for now, is Linux x64 (Xubuntu 16.04). Does switching to Windows change the behavior?

EDIT:
I managed to test it under Windows. With Windows 8.1 x64 and MinGW 5.3.0 the file cannot be removed (the OS denies it). I managed to edit the file, but the edits are discarded when the program flushes its own writes to it (when it closes the file).
The only problem is that the data is not flushed when I press Ctrl+C to close the program. For this reason, I added a _file.flush() call after the write.
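So the write path now looks like this (the same AppendData as above, with the flush added):

bool DatabaseFileWTimeout::AppendData(QByteArray data)
{
    if (!_file.isOpen())
        return false;

    if (_file.write(data) != data.count())
        return false;

    _file.flush();   // hand the data to the OS right away, so an abrupt exit loses less
    _timer.start(ServerConfig::Instance->FileOpenTimeoutMs());

    return true;
}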
Now the behavior is more or less what I wanted on Windows (while the program has the file open, you can't manually delete it). What about Linux? Is there a way to get the same behavior there?
-
Hi,
You can find some information about the Linux situation here.
You could maybe use a QFileSystemWatcher.
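For example, a minimal sketch hooked into your DatabaseFileWTimeout constructor (fileChanged is emitted when the watched file is modified, renamed or removed):

#include <QFileSystemWatcher>

// e.g. in the constructor, after the file has been opened successfully
QFileSystemWatcher *watcher = new QFileSystemWatcher(this);
watcher->addPath(filename);
connect(watcher, &QFileSystemWatcher::fileChanged,
        this, [](const QString &path) {
    // the path no longer existing means the file was removed or renamed externally
    if (!QFile::exists(path))
        DebugManager::PrintLn("File " + path + " was removed externally");
});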
Hope it helps
-
@fra87
Hi,

The problem is that I thought that the open file was "locked" by the application, so that, as long as it was open, every write operation from another program would fail.
Only on Windows. On *nix there is only advisory file locking, which may or may not be respected by other programs.
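If an advisory lock is enough for your use case (i.e. you control the other programs and they also check the lock), a minimal sketch using flock() on the descriptor behind your QFile, right after opening it, would be:

#include <sys/file.h>   // flock()

// In the constructor, after _file.open() succeeded.
// Note: this is purely advisory; a program that never calls flock()/fcntl()
// can still write to or delete the file.
const int fd = _file.handle();
if (fd == -1 || flock(fd, LOCK_EX | LOCK_NB) != 0)
    DebugManager::PrintLn("Could not take an advisory lock on " + filename);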
Moreover, if I manually delete the file before it is closed, the program doesn't detect this: the write operation succeeds, but no data is written to the disk.
Yes, this is normal. Everything on ext* goes through the journal, and a file is not actually removed until pending writes have been flushed and every link to it, including open file descriptors, has been released. You can also look at @SGaist's link for more extensive explanations.

How can I "block" a file until it is closed?
You can't (see above).
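If you only need the program to notice that the file was unlinked, a minimal sketch using POSIX fstat() on the descriptor behind the QFile could check the link count before each write:

#include <sys/stat.h>

// Returns false once every directory entry pointing at the open file is gone
// (st_nlink drops to 0 after an external delete), so AppendData could bail out.
static bool fileStillLinked(const QFile &file)
{
    struct stat st;
    const int fd = file.handle();
    return fd != -1 && ::fstat(fd, &st) == 0 && st.st_nlink > 0;
}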
The only problem is that the data is not flushed when I press Ctrl+C to close the program.
Ctrl+C generates the interrupt signal (SIGINT), which you have to catch manually; see the standard C library documentation on POSIX signals.
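A deliberately simplified sketch of catching it in a Qt program (a robust handler should limit itself to async-signal-safe calls, e.g. via the self-pipe trick, but this shows the idea):

#include <csignal>
#include <QCoreApplication>

// On SIGINT, ask the event loop to stop so that main() returns normally;
// the destructors of the objects created there then run and the QFile is closed.
static void handleSigint(int)
{
    QCoreApplication::quit();
}

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    std::signal(SIGINT, handleSigint);
    // ... create the logging objects here ...
    return app.exec();
}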
Is there a way to get the same behavior?
Not really, no.
Kind regards.