Consider a file that is edited tens or hundreds of times per second by multiple processes. Since two or more processes can race to open the file for writing, some mechanism has to be in place so that only one process accesses the file at a time.
As I understand it, holding the file open from fopen (or open) until fclose will do the job: these functions guarantee that only one process accesses the file at a time.
The problem is that the file has to be truncated after being opened, because it first needs to be read and only then rewritten. With two separate fopen calls, this obviously does not guarantee cross-process safety.
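For clarity, here is a minimal sketch of the two-fopen pattern I mean; the file name data.txt and the fixed buffer size are just placeholders:

```c
#include <stdio.h>
#include <stdlib.h>

/* Naive read-then-rewrite, NOT cross-process safe: another process can
 * open the file between the first fclose() and the second fopen(). */
int main(void)
{
    char buf[4096];
    size_t n;

    FILE *f = fopen("data.txt", "r");   /* first open: read */
    if (!f) return EXIT_FAILURE;
    n = fread(buf, 1, sizeof buf, f);
    fclose(f);

    /* ... modify buf here ... */

    f = fopen("data.txt", "w");         /* second open: truncates */
    if (!f) return EXIT_FAILURE;
    fwrite(buf, 1, n, f);
    fclose(f);
    return EXIT_SUCCESS;
}
```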
This answer recommends calling freopen after fopen and before fclose. However, according to the Linux man page for freopen:
If pathname is not a null pointer, freopen() shall close any file descriptor associated with stream.
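As I understand it, the suggestion from that answer looks roughly like the sketch below (same placeholder file name); my concern is that, per the quote above, the freopen call closes the underlying descriptor:

```c
#include <stdio.h>
#include <stdlib.h>

/* freopen() variant: read the stream, then reopen the same path in
 * "w" mode on the same stream, which truncates the file for writing.
 * According to the man page this still closes the underlying file
 * descriptor, which is exactly what worries me. */
int main(void)
{
    char buf[4096];
    size_t n;

    FILE *f = fopen("data.txt", "r");
    if (!f) return EXIT_FAILURE;
    n = fread(buf, 1, sizeof buf, f);

    if (freopen("data.txt", "w", f) == NULL) return EXIT_FAILURE;
    fwrite(buf, 1, n, f);
    fclose(f);
    return EXIT_SUCCESS;
}
```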
Currently, the only solution I see is to create a separate file associated with the one that needs to be accessed, and to lock that associated file instead of the target file (although the target is effectively locked too) for the whole period during which the target file is read, closed, truncated, written and closed again.
Does at least that solution guarantee safety? Are there any better solutions?
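A sketch of this lock-file idea, assuming flock is the right primitive for it (the name data.txt.lock is something I made up):

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

/* Companion lock file: every process agrees to take an exclusive
 * flock() on "data.txt.lock" before touching "data.txt" itself.
 * The lock file carries only the lock, never any data. */
int main(void)
{
    int lockfd = open("data.txt.lock", O_CREAT | O_RDWR, 0644);
    if (lockfd < 0) return EXIT_FAILURE;
    if (flock(lockfd, LOCK_EX) < 0) return EXIT_FAILURE;

    /* Critical section: read "data.txt", close it, reopen it in "w"
     * mode to truncate it, write the new contents, close it again. */

    flock(lockfd, LOCK_UN);
    close(lockfd);
    return EXIT_SUCCESS;
}
```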
As users explained to me in the comments, fopen does not prevent the file from being opened by another process at all. Instead, flock should be called additionally.
So the question is: can I lock the file, read it, truncate it, write it and then unlock it, or should I use the associated-file solution described above?
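Here is the kind of single-lock sequence I am asking about, assuming the truncation can somehow be done without reopening the file (ftruncate is just my guess at how that might look):

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

/* Lock, read, truncate, write, unlock, all on one descriptor.
 * Whether ftruncate()/lseek() is an acceptable way to do the
 * truncation here is part of what I am asking. */
int main(void)
{
    char buf[4096];
    ssize_t n;

    int fd = open("data.txt", O_RDWR);
    if (fd < 0) return EXIT_FAILURE;
    if (flock(fd, LOCK_EX) < 0) return EXIT_FAILURE;

    n = read(fd, buf, sizeof buf);
    if (n < 0) return EXIT_FAILURE;

    /* ... modify buf here ... */

    if (ftruncate(fd, 0) < 0) return EXIT_FAILURE;        /* empty the file */
    if (lseek(fd, 0, SEEK_SET) < 0) return EXIT_FAILURE;  /* rewind         */
    if (write(fd, buf, (size_t)n) < 0) return EXIT_FAILURE;

    flock(fd, LOCK_UN);
    close(fd);
    return EXIT_SUCCESS;
}
```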
Specific summary question
- As I understand, flock accepts the file descriptor to be locked as an argument (specifically a descriptor from open, not a stream from fopen).
- But, as I also understand, reading, truncating and then writing the file requires opening it, closing it and opening it again.
- So, given the second point, if I lock the file before I call close, will it still be locked after the close call? If close automatically unlocks it, the lock makes no sense here. If it does not unlock it, how do I unlock the file after all the manipulations? (See the sketch right after this list.)
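For concreteness, this is the sequence from the third point, where I cannot tell what happens to the lock at the first close call:

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

/* Lock before the first close(), then reopen with O_TRUNC to truncate.
 * I do not know whether the flock() survives the close(), or, if it
 * does, how to release it once the original descriptor is gone. */
int main(void)
{
    char buf[4096];
    ssize_t n;

    int fd = open("data.txt", O_RDONLY);
    if (fd < 0) return EXIT_FAILURE;
    if (flock(fd, LOCK_EX) < 0) return EXIT_FAILURE;
    n = read(fd, buf, sizeof buf);
    if (n < 0) return EXIT_FAILURE;
    close(fd);                                   /* still locked here?  */

    fd = open("data.txt", O_WRONLY | O_TRUNC);   /* truncates on open   */
    if (fd < 0) return EXIT_FAILURE;
    write(fd, buf, (size_t)n);
    close(fd);                                   /* and how to unlock?  */
    return EXIT_SUCCESS;
}
```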