

The coproc keyword

(Bash 4.0 or newer)

 coproc [NAME] command [redirections]

Bash 4.0 introduced coprocesses, a feature certainly familiar to ksh users.

coproc starts a command in the background and sets up pipes so that you can interact with it. Optionally, the co-process can be given a name NAME.

If NAME is given, the following command must be a compound command. If no NAME is given, the command can be a simple command or a compound command.
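A minimal sketch of the two forms (the name mycoproc is arbitrary; both forms are shown in detail below):

coproc awk '{print "foo" $0}'                # anonymous form: fds go into the COPROC array
coproc mycoproc { awk '{print "foo" $0}' ;}  # named form: fds go into the mycoproc array
# note: Bash warns if a second coprocess is started while one is still active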

Redirections

The redirections are normal redirections and are applied after the pipe has been set up. Some examples:

# redirecting stderr in the pipe
$ coproc { ls thisfiledoesntexist; read ;} 2>&1
[2] 23084
$ read -u ${COPROC[0]};printf "%s\n" "$REPLY"
ls: cannot access thisfiledoesntexist: No such file or directory

#let the output of the coprocess go to stdout
$ { coproc mycoproc { awk '{print "foo" $0;fflush()}' ;} >&3 ;} 3>&1
[2] 23092
$ echo bar >&${mycoproc[1]}
$ foobar

Here we need to save the original stdout on fd 3 first, because by the time the redirections of the coprocess are applied, its stdout has already been redirected to the pipe.
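The same idea can be written without hard-coding the spare descriptor, assuming Bash 4.1 or newer, by letting the shell allocate it with the {varname} redirection syntax (saved_stdout is an arbitrary variable name):

# {saved_stdout}>&1 allocates a free fd (>= 10), points it at stdout and sets
# the variable before the body of the group is executed
$ { coproc mycoproc { awk '{print "foo" $0;fflush()}' ;} >&"$saved_stdout" ;} {saved_stdout}>&1
$ echo bar >&${mycoproc[1]}
$ foobar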

Pitfalls

Avoid the command | while read subshell

The traditional KSH workaround to avoid the subshell when doing command | while read is to use a coprocess. Unfortunately, Bash's behaviour differs from KSH here.

In KSH you would do:

ls |& #start a coprocess
while read -p file;do echo "$file";done #read its output

In bash:

#DOESN'T WORK
$ coproc ls
[1] 23232
$ while read -u ${COPROC[0]} line;do echo "$line";done
bash: read: line: invalid file descriptor specification
[1]+  Done                    coproc COPROC ls

By the time we start reading from the output of the coprocess, it has already terminated and the file descriptor has been closed.
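If all you want is to avoid the subshell of command | while read, a coprocess is not strictly needed in Bash; process substitution does the same job without this pitfall. A minimal sketch:

# the loop runs in the current shell, so variables set inside survive the loop
while read -r file; do
    echo "$file"
done < <(ls)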

Buffering

In the first example, we used fflush() in the awk command on purpose: as always when pipes are involved, the I/O operations are buffered. Let's see what happens with sed:

$ coproc sed s/^/foo/
[1] 22981
$ echo bar >&${COPROC[1]}
$ read -t 3 -u ${COPROC[0]}; (( $? >127 )) && echo "nothing read"
nothing read

Even though this example is essentially the same as the first awk example, read times out, simply because sed's output is still sitting in a buffer.

See this faq entry on Greg's wiki for some workarounds.
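One of the usual workarounds, assuming GNU tools are available, is to force line-buffered output, either with sed's own -u (--unbuffered) option or by wrapping the command in stdbuf -oL:

$ coproc stdbuf -oL sed s/^/foo/
$ echo bar >&${COPROC[1]}
$ read -t 3 -u ${COPROC[0]}; printf "%s\n" "$REPLY"
foobar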

Background processes

The file descriptors of the coprocess are available to the shell where you run coproc, but they are not inherited by subshells. Here is a not-so-meaningful illustration: suppose we want something that continuously reads the output of our coprocess and echoes the result:

#NOT WORKING
$ coproc awk '{print "foo" $0;fflush()}'
[2] 23100
$ while read -u ${COPROC[0]};do echo "$REPLY";done &
[3] 23104
$ ./bash: line 243: read: 61: invalid file descriptor: Bad file descriptor

It fails because the descriptor is not available in the subshell created by &.

A possible workaround:

#WARNING: for illustration purpose ONLY
# this is not the way to make the coprocess print its output
# to stdout, see the redirections above.
$ coproc awk '{print "foo" $0;fflush()}'
[2] 23109
$ exec 3<&${COPROC[0]}
$ while read -u 3;do echo "$REPLY";done &
[3] 23110
$ echo bar >&${COPROC[1]}
$ foobar

Here, fd 3 is inherited by the subshell.
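The same trick works without hard-coding fd 3, assuming Bash 4.1 or newer: exec with the {varname} syntax lets the shell pick a free descriptor and store it in a variable (myfd here is arbitrary):

$ coproc awk '{print "foo" $0;fflush()}'
$ exec {myfd}<&${COPROC[0]}    # myfd receives a free fd (>= 10) duplicated from the pipe
$ while read -u "$myfd";do echo "$REPLY";done &
$ echo bar >&${COPROC[1]}
$ foobar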

Anonymous Coprocess

First let's see an example without NAME:

$ coproc awk '{print "foo" $0;fflush()}'
[1] 22978

The command starts in the background and coproc returns immediately. Two new file descriptors are now available via the COPROC array. We can send data to our command:

$ echo bar >&${COPROC[1]}

And then read its output:

$ read -u ${COPROC[0]};printf "%s\n" "$REPLY"
foobar

When we don't need our command anymore, we can kill it via its PID:

$ kill $COPROC_PID
$
[1]+  Terminated              coproc COPROC awk '{print "foo" $0;fflush()}'
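Instead of killing the coprocess, it is often enough to close its input: most filters (awk, sed, cat, ...) exit on their own once they read EOF. A sketch, assuming the coprocess from above is still running (eval is used because not every Bash version accepts an array subscript in the {varname}>&- closing syntax):

$ eval "exec ${COPROC[1]}>&-"   # close the write end of the pipe; awk reads EOF and exits
$
[1]+  Done                    coproc COPROC awk '{print "foo" $0;fflush()}'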

Named Coprocess

Using a named coprocess is just as simple; we only need a compound command, like when defining a function:

$ coproc mycoproc { awk '{print "foo" $0;fflush()}' ;}
[1] 23058
$ echo bar >&${mycoproc[1]}
$ read -u ${mycoproc[0]};printf "%s\n" "$REPLY"
foobar
$ kill $mycoproc_PID
$
[1]+  Terminated              coproc mycoproc { awk '{print "foo" $0;fflush()}'; }

Redirecting the output of a script to a file and to the screen

#!/bin/bash
# we start tee in the background
# redirecting its output to the stdout of the script
{ coproc tee { tee logfile ;} >&3 ;} 3>&1 
# we redirect stdout and stderr of the script to our coprocess
exec >&${tee[1]} 2>&1
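# From here on, everything the script prints to stdout or stderr should end
# up both on the terminal and in "logfile". A possible continuation, for
# illustration only:
echo "this goes to the screen and to logfile"
ls thisfiledoesntexist    # stderr is captured as well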

  • the coproc keyword is not specified by POSIX(R)
  • other shells might have different ways to solve the coprocess problem
  • the coproc keyword appeared in Bash version 4.0-alpha
Anthony Thyssen, 2010/06/18 02:18

Can you do a coprocess using only regular shell file descriptors? Or perhaps using named pipes?

mkfifo ls_output      # or: mknod ls_output p
ls > ls_output &

while read line; do
    ...
done < ls_output

Jan Schampera, 2010/06/18 08:36

Depending on your specific needs there is of course the possibility to use

  • named pipes
  • process substitution
  • command substitution
  • ...

Coprocesses as described here are just a very easy-to-use solution for those tasks. It's simple to set up, use, and terminate a coprocess.

Anthony Thyssen, 2011/09/28 07:30

Seems to me it is about just as complex as using temporary named pipes.

Of course, most of the difficulty in handling a coprocess is the handling of the data streams, especially if you want more than just stdin/stdout.

Since my initial feedback, I have written what I hope is the start of a guide to using co-processes. And yes, it can be very much worth the effort.

http://www.ict.griffith.edu.au/anthony/info/shell/co-processes.hints Feedback welcome Anthony Thyssen A.Thyssen@griffith.edu.au

Jan Schampera, 2011/09/28 20:08

Wow... excellent knowledge collection. I crosslinked it here.

SZABÓ Gergely, 2012/05/10 08:20, 2012/07/01 11:48

I've found an easier way to read from the coprocess. In the example below, szg is an interactive calculator capable of converting hex numbers.

$ coproc S { szg; }
[1] 1234
$ echo XffffD >&${S[1]}
$ head -n 1   <&${S[0]}
65535

George Caswell, 2016/04/13 00:25

You have to be careful about what you attach to file descriptors in the shell. When programs use buffered I/O, they may read more data than they actually need. For instance:

  coproc S { while read line; do echo "$line"; echo "Cool, huh?"; done; }
  echo "Test" >&${S[1]}
  head -1 <&${S[0]}
  Test
  echo "Test2" >&${S[1]}
  head -1 <&${S[0]}
  Test2

Notice that "head" never prints the "Cool, huh?" line? That's because internally "head" is doing something like this:

(during a call to "getline()" or "fgets()" or whatever…):

  • Read in as much data as I presently can into my buffer.
  • Scan buffer for the first newline character.
  • Cut the string from the start of the buffer to the first newline character out of the buffer. Return it.

That aggressive buffering is done to minimize the time spent doing actual I/O. But it means there's a good chance the process will consume more than a single line of input from that file descriptor.

Basically you have to put one process in charge of reading lines out of that file descriptor. The shell can be that process, or you can delegate the task to another process. But most programs aren't written with the assumption that they're sharing their input file.
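One way to follow that advice in the example above is to keep the shell itself as the reading process: Bash's read builtin pulls single bytes from a pipe precisely so that it never consumes more than one line. Replacing head -1 with read gives, as a sketch:

  coproc S { while read line; do echo "$line"; echo "Cool, huh?"; done; }
  echo "Test" >&${S[1]}
  read -u ${S[0]}; echo "$REPLY"
  Test
  read -u ${S[0]}; echo "$REPLY"
  Cool, huh?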
