My question is this: what is the best way (or at least an effective way) to write to a file from multiple processes?
The best way is... don't do it!
This really looks like a log (you are appending). I would just let every process write its own file and then merge them when needed. This is the common approach, at least, and here is the rationale.
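For illustration, here is a minimal Python sketch of that approach (the file naming scheme and timestamp format are my own assumptions): each process appends to a file keyed by its PID, and a merge step sorts the collected lines by timestamp when a single view is needed.

```python
import glob
import os
import time

def log(message: str) -> None:
    # One file per process: no sharing, no locking needed.
    with open(f"app.{os.getpid()}.log", "a") as f:
        f.write(f"{time.time():.6f} {message}\n")

def merge(pattern: str = "app.*.log", out: str = "app.merged.log") -> None:
    lines = []
    for path in glob.glob(pattern):
        with open(path) as f:
            lines.extend(f.readlines())
    # The timestamp is the first field of each line, so sort on it.
    lines.sort(key=lambda line: float(line.split(" ", 1)[0]))
    with open(out, "w") as f:
        f.writelines(lines)
```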
Any kind of intra-process locking is not going to work: locks taken inside one process are invisible to the others. On top of that, open files are buffered at OS level, and on some OSes (Windows) data can stay buffered even after the file has been closed.
You cannot rely on file locking either, if you want a portable solution ("I want this to run on any platform"): you may run into performance penalties or even undefined behavior depending on the filesystem in use (e.g. Samba, NFS).
Writing concurrently and reliably to a single file is, today, a system-dependent activity.
I don't mean that it is not possible: DB engines and other applications do it reliably, but it is a custom, system-specific operation.
As a good alternative, you can let one process act as a collector (as proposed by Gem Taylor) and all the rest as producers, but this is not necessarily a reliable alternative: logs need to get to disk through the simplest possible path; if a bug in the collector can prevent logs from being written, the purpose of logging is lost.
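As an illustration of the collector pattern, here is a minimal Python sketch using `multiprocessing` (queue handling, file path, and the sentinel convention are hypothetical choices, not part of the original answer): a single collector process owns the file, and producers only put lines on a shared queue.

```python
import multiprocessing as mp

def collector(queue, path):
    # The collector is the only process that ever touches the file.
    with open(path, "a") as f:
        while True:
            line = queue.get()
            if line is None:        # sentinel: shut down
                break
            f.write(line + "\n")
            f.flush()               # push each line to the OS promptly

def producer(queue, ident):
    for i in range(3):
        queue.put(f"producer {ident}: message {i}")

if __name__ == "__main__":
    q = mp.Queue()
    writer = mp.Process(target=collector, args=(q, "collected.log"))
    writer.start()
    workers = [mp.Process(target=producer, args=(q, n)) for n in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    q.put(None)                     # tell the collector to stop
    writer.join()
```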
However, you can consider this approach if you decouple the processes and let the messages between them be exchanged reliably and efficiently: in that case, you can use a messaging solution like RabbitMQ.
In this case all the processes publish their "lines" to the message broker, and one more process consumes those messages and writes them to the file.
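A minimal sketch of that setup with the `pika` RabbitMQ client (the broker address, queue name, and file path below are assumptions for illustration):

```python
import pika

QUEUE = "log_lines"  # hypothetical queue name

def publish(message: str) -> None:
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=QUEUE,
        body=message,
        properties=pika.BasicProperties(delivery_mode=2),  # persist message
    )
    conn.close()

def consume(path: str) -> None:
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue=QUEUE, durable=True)

    def on_message(ch, method, properties, body):
        with open(path, "a") as f:
            f.write(body.decode() + "\n")
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after writing

    channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
    channel.start_consuming()
```

Declaring the queue as durable and publishing persistent messages means the broker, rather than your collector, carries the reliability burden: a line that has been acknowledged by the broker survives a consumer crash and will be written once the consumer comes back.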