Help understanding writing to a fifo

New to Kakoune, trying to write some scripts. Rather than just cargo-culting, I want to understand things.

Whenever a command wants to write to a fifo, I see something like

 ( blah > fifo 2>&1 & ) > /dev/null 2>&1 < /dev/null

When I tried to simplify that to

blah > fifo 2>&1 &

or

 ( blah > fifo 2>&1 & )

Kakoune would hang, telling me that it was waiting even though the shell process had terminated.

In the shell, all of these behave the same:

> sh -c  'rm -f /tmp/f; mkfifo /tmp/f; echo foo > /tmp/f 2>&1 &'
> sh -c  'rm -f /tmp/f; mkfifo /tmp/f; (echo foo > /tmp/f 2>&1 &)'
> sh -c  'rm -f /tmp/f; mkfifo /tmp/f; (echo foo > /tmp/f 2>&1 &) > /dev/null 2>&1 < /dev/null'

They all terminate immediately and leave an orphan process.

So what distinguishes them for Kakoune? I guess, from the need for the redirections, that it's something about file descriptors being closed. Is that right? I couldn't find it in the documentation anywhere.

I'm far from an expert, so someone will correct me if I go off in the wrong direction.

  1. First of all, & backgrounds a process but doesn't terminate it.
  2. FIFOs are designed for IPC, which means they are designed to be read from and written to at the same time.
  3. Because of this design, they have limited buffer sizes (set by the kernel) and can block both readers and writers (on purpose, to keep the two ends in sync).
  4. This means they are not designed to be used between two processes that don't run concurrently. It can (sometimes) work for that use case thanks to buffer size and luck, but that isn't their purpose.

That means that just pushing data into a FIFO with nothing reading it is sort of nonsensical.
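For example, a quick way to see that blocking in a terminal (just a sketch, using a throwaway path):

    mkfifo /tmp/demo-fifo
    # no reader yet: the open() for writing blocks, so this never returns on its own
    echo hello > /tmp/demo-fifo
    # ...until, in another terminal, something opens the other end for reading:
    cat /tmp/demo-fifo

As soon as the reader shows up, the writer completes and both commands return.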


Thanks for the quick response.
In all three cases, the writing process is running in the background, and as you say, it is blocking. That is why each case results in an orphan process.
However, Kakoune is fine with the third case but hangs with the other two. The question is why. (And, whatever the answer is, where is it documented?)

< /dev/null 

That redirects standard input from /dev/null, so the process sees end-of-file immediately and won't wait for more input.

I think without it it waits, but I am not certain.
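An easy way to convince yourself in a terminal (a rough sketch, nothing Kakoune-specific):

    cat              # reads from the terminal and waits until you press Ctrl-D
    cat < /dev/null  # returns immediately: reading from /dev/null hits end-of-file at once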

yep, from man 3 mkfifo:

Opening a FIFO for reading normally blocks until some other process opens the same FIFO for writing, and vice versa.

So the first open() call will block until the other one arrives on the other end of the pipe.
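A small sketch of that, reusing the /tmp/f fifo from the earlier examples:

    rm -f /tmp/f; mkfifo /tmp/f
    echo foo > /tmp/f &   # the open() of /tmp/f for writing blocks in the background job
    cat /tmp/f            # opening the read end unblocks it; prints "foo"
    wait                  # the background writer has finished by now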

I think without it it waits, but I am not certain

Yes, Kakoune waits until all of the following:

  • the process has terminated
  • its stdin is closed
  • its stdout and stderr are closed (unless the output is irrelevant like for <a-|>)
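
Lining that up with the original idiom (the comments are mine, just restating the above):

    ( blah > fifo 2>&1 & ) > /dev/null 2>&1 < /dev/null
    # blah > fifo 2>&1 &  : the writer is backgrounded and talks only to the fifo
    # > /dev/null 2>&1    : nothing in the subshell holds the stdout/stderr pipes Kakoune provided
    # < /dev/null         : its stdin comes from /dev/null, so Kakoune's end isn't held open either
    # once the outer shell exits, all three conditions are met and Kakoune stops waiting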

To round things off, and build on the explanations other people have given:

That’s exactly right.

In your shell examples, any file descriptors that aren't redirected are inherited from the parent shell, so they're connected to your terminal, which (for the purposes of these tests) stays around forever. The shell considers a program to have ended when it returns an exit code, and if it spawned a background sub-process that's still writing to the terminal, that's just how things are supposed to work. Kakoune, on the other hand, does not execute programs in a terminal; it has to provide stdin/stdout/stderr itself. If the shell it executes returns its exit code but those file descriptors are still open, then something may still read from or write to them, and Kakoune has to wait for that to finish.
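
For a concrete picture, this is roughly how the idiom tends to show up in a plugin. Treat it as a sketch: the command name, the buffer name, and the use of seq as a stand-in for the real writer are all made up for illustration.

    define-command fifo-demo -docstring "stream some output into a buffer" %{
        evaluate-commands %sh{
            dir=$(mktemp -d "${TMPDIR:-/tmp}"/kak-fifo-demo.XXXXXX)
            mkfifo "$dir"/fifo
            # the writer is fully detached from the fds Kakoune gave this shell
            ( seq 1 1000 > "$dir"/fifo 2>&1 & ) > /dev/null 2>&1 < /dev/null
            # this shell's stdout *is* still read by Kakoune: it becomes the command to run
            printf 'edit! -fifo %s *fifo-demo*\n' "$dir"/fifo
        }
    }

Kakoune then reads the fifo into the *fifo-demo* buffer while the background writer keeps filling it.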

I’m not sure it is documented anywhere, apart from the code, and threads like this. It’s more of a consequence of how a program like Kakoune must work on a POSIX-like OS, rather than any specific design choice Kakoune made, so there’s no obvious place to put it. Perhaps you could add it to the Writing Plugins wiki page.