
Lock your script (against parallel run)

Why lock?

Sometimes you need to ensure that only one instance of a script runs at a time. Imagine a cronjob doing something very important, which will fail or corrupt data if it accidentally runs twice. In these cases, a form of MUTEX (mutual exclusion) is needed.

The basic procedure is simple: at startup, the script checks whether a specific locking condition is present. If it is, the script is locked out - it doesn't start.

This article describes locking with common UNIX® tools. There are various special-purpose locking tools out there, of course, but they're not standardized - or put differently: you can't be sure they're present wherever you want to run your scripts. Naturally, a tool designed for exactly this purpose does the job much better than any of the general-purpose code shown here.

Other, special locking tools

As mentioned above, a dedicated locking tool is the 100% solution: no race conditions, no working around specific limitations, and none of the other issues discussed below.
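
For example, on Linux systems the flock(1) utility from util-linux provides kernel-backed file locking in a shell-friendly way. A minimal sketch (the lock file path and the descriptor number 9 are arbitrary choices for illustration) might look like this:

# sketch only: serialize the whole script with flock(1) from util-linux
exec 9>/var/lock/mylock.lock || exit 1     # open a file descriptor on the lock file

if ! flock -n 9; then                      # -n: fail immediately instead of waiting
    echo "another instance is already running - exit" >&2
    exit 1
fi

# ... the actual work, protected by the lock ...

The kernel releases the lock automatically when the script exits and file descriptor 9 is closed, so no explicit cleanup is needed.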

Choose the locking method

The best place to set a global lock condition is the UNIX® filesystem. Variables aren't enough, since each process has its own private variable space, but the filesystem is global to all processes (yes, I know about chroots, namespaces, … - special cases). There are several things you can "set" in a filesystem to serve as a locking indicator:

  • create files
  • update file timestamps
  • create directories

To create a file or to set a file timestamp, the touch command is usually used. That implies the following problem: a locking mechanism would check for the existence of the lockfile and, if it doesn't exist, create one (lock) and continue. These are two steps! That means it's not one atomic operation. There's a small window of time between checking and creating in which another instance of the same script could perform the locking (because when it checked, the lockfile wasn't there)! In that case you would have two instances of the script running, both thinking they successfully locked, and both thinking they can operate without collisions. Setting the timestamp is similar: one step to check the timestamp, a second step to set the timestamp.
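
To make the problem visible, here is a sketch of such a naive (and broken) check-then-create approach - the lockfile path is just an example:

# BROKEN: check and lock are two separate steps
if [ ! -e /tmp/mylock ]; then
    # another instance may create the lockfile right here,
    # between the test above and the touch below
    touch /tmp/mylock
    echo "locked (or so we think...)" >&2
else
    echo "already locked - exit" >&2
    exit 1
fi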

Conclusion: We need an operation that does the check and the locking in one step.

A simple way to get that is to create a lock directory with the mkdir command. It will

  • create the given directory only if it did not exist before, and set a successful exit code
  • set an unsuccessful exit code if an error occurs - for example, if the given directory already exists

With mkdir, it seems we have our two steps in one simple operation. A (very!) simple locking code might now look like this:

if mkdir /var/lock/mylock; then
  echo "Locking succeeded" >&2
else
  echo "Lock failed - exit" >&2
  exit 1
fi
In case mkdir reports an error, the script will exit at this point - the MUTEX did its job!

Note: if the lock directory is removed while the script is still running (after it successfully acquired the lock), the lock is lost. Doing chmod -w on the parent directory containing the lock directory is possible, but it is also not atomic. Maybe a while loop continuously checking in the background for the existence of the lock, and sending a signal such as USR1 if the directory is found to be missing, could be used; the signal would need to be trapped. I am sure there is a better solution than this suggestion. (sn18, 2009/12/19 08:24)
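
A rough sketch of that watcher idea from the note above (not part of the original example; the LOCKDIR path and the 5-second interval are arbitrary choices) could look like this:

LOCKDIR="/var/lock/mylock"   # example path

# abort if we are told the lock directory vanished
# (keep in mind Bash may delay trap execution until a running
#  foreground command finishes - see the discussion below)
trap 'echo "lock directory disappeared - aborting" >&2; exit 1' USR1

# background watcher: signal the main script if the lock directory is gone
(
  while sleep 5; do
    [ -d "$LOCKDIR" ] || { kill -USR1 $$; exit; }
  done
) &
WATCHERPID=$!

# ... critical work here ...

kill "$WATCHERPID" 2>/dev/null
rmdir "$LOCKDIR"

This only narrows the window in which a lost lock goes unnoticed; it does not make the mechanism reliable.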

Note: On my way through the Internet I found some people wondering if the mkdir method will work "on all filesystems". Well, let's say it should. The syscall behind mkdir is guaranteed to be atomic in all cases, at least on Unices. A problem can be a shared filesystem on NFS or a real cluster filesystem; there it depends on the mount options and the implementation. However, I successfully use this simple method on top of an Oracle OCFS2 filesystem in a 4-node cluster environment. So let's just say "it's expected to work under normal conditions".

Another atomic method is setting the noclobber shell option (set -C), which will cause a redirection to fail if the file the redirection points to already exists (by means of the underlying open() call). This is also a very nice way, and I use this simpler locking method successfully in production, too - though see Mike Spooner's note in the discussion below about whether the shell actually performs this check-and-create atomically.

# hypothetical lock file location - adjust to your environment
lockfile="/var/lock/mylock"

if ( set -o noclobber; echo "locked" > "$lockfile") 2> /dev/null; then
  # remove the lock file again if the script exits or is interrupted
  trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
  echo "Locking succeeded" >&2
  # ... critical work goes here ...
  # release the lock and reset the trap
  rm -f "$lockfile"
  trap - INT TERM EXIT
else
  echo "Lock failed - exit" >&2
  exit 1
fi

Another explanation of this basic locking pattern using set -C can be found here.

An example

This code was taken from a script that controls PISG to create statistical pages from my IRC logfiles. That doesn't matter for you - I mention it only to point out that this code works and is in use. Compared to the very simple example above, there are some additions:

  • the locking stores the process ID of the locking instance
  • if locking fails, the script tries to find out whether the locking instance is still active (unreliable!)
  • traps are created to automatically remove the lock when the script terminates or is killed

I don't show various details here - like determining the signal by which the script was killed - I just show the most relevant code:

#!/bin/bash
 
# lock dirs/files
LOCKDIR="/tmp/statsgen-lock"
PIDFILE="${LOCKDIR}/PID"
 
# exit codes and text for them - additional features nobody needs :-)
ENO_SUCCESS=0; ETXT[0]="ENO_SUCCESS"
ENO_GENERAL=1; ETXT[1]="ENO_GENERAL"
ENO_LOCKFAIL=2; ETXT[2]="ENO_LOCKFAIL"
ENO_RECVSIG=3; ETXT[3]="ENO_RECVSIG"
 
###
### start locking attempt
###
 
trap 'ECODE=$?; echo "[statsgen] Exit: ${ETXT[ECODE]}($ECODE)" >&2' 0
echo -n "[statsgen] Locking: " >&2
 
if mkdir "${LOCKDIR}" &>/dev/null; then
 
    # lock succeeded, install signal handlers before storing the PID just in case 
    # storing the PID fails
    trap 'ECODE=$?;
          echo "[statsgen] Removing lock. Exit: ${ETXT[ECODE]}($ECODE)" >&2
          rm -rf "${LOCKDIR}"' 0
    echo "$$" >"${PIDFILE}" 
    # the following handler will exit the script on receiving these signals
    # the trap on "0" (EXIT) from above will be triggered by this trap's "exit" command!
    trap 'echo "[statsgen] Killed by a signal." >&2
          exit ${ENO_RECVSIG}' 1 2 3 15
    echo "success, installed signal handlers"
 
else
 
    # lock failed, now check if the other PID is alive
    OTHERPID="$(cat "${PIDFILE}")"
 
    # if cat wasn't able to read the file anymore, another instance probably is
    # about to remove the lock -- exit, we're *still* locked
    #  Thanks to Grzegorz Wierzowiecki for pointing this race condition out on
    #  http://wiki.grzegorz.wierzowiecki.pl/code:mutex-in-bash
    if [ $? != 0 ]; then
      echo "lock failed, unable to read the PID file - assuming we're still locked" >&2
      exit ${ENO_LOCKFAIL}
    fi

    if ! kill -0 "${OTHERPID}" &>/dev/null; then
        # lock is stale, remove it and restart
        echo "removing stale lock of nonexistent PID ${OTHERPID}" >&2
        rm -rf "${LOCKDIR}"
        echo "[statsgen] restarting myself" >&2
        exec "$0" "$@"
    else
        # lock is valid and OTHERPID is active - exit, we're locked!
        echo "lock failed, PID ${OTHERPID} is active" >&2
        exit ${ENO_LOCKFAIL}
    fi
 
fi

Discussion

RaftaMan, 2010/05/26 19:13

Restarting with

#exec $0 "$@"

is probably not a good idea, as it only works if the script is called from the directory it is contained in. Maybe this

DIR=$(cd "$(dirname "$0")"; pwd)
exec "$DIR/$(basename "$0")" "$@"

is a little better

Jan Schampera, 2010/05/27 06:34

If no chdir() was made, then an exec "$0" should work fine, IMHO. Can you show me an example?

RaftaMan, 2010/05/27 07:22

Ok, you're right. It is indeed not a directory problem. But

exec $0 "$@"

can still go wrong, depending on how it's called

# mkdir -p /tmp/statsgen-lock; echo 99999 > /tmp/statsgen-lock/PID; sh statsgen-lock 
[statsgen] Locking: removing stale lock of nonexistent PID 99999
[statsgen] restarting myself
statsgen-lock: line 53: exec: statsgen-lock: not found
[statsgen] Exit: ENO_SUCCESS(0)

while this

# mkdir -p /tmp/statsgen-lock; echo 99999 > /tmp/statsgen-lock/PID; ./statsgen-lock 
[statsgen] Locking: removing stale lock of nonexistent PID 99999
[statsgen] restarting myself
[statsgen] Locking: success, installed signal handlers
[statsgen] Removing lock. Exit: ENO_SUCCESS(0)

works flawlessly.

Jan Schampera, 2010/05/27 18:17

Yeah, that actually *is* a problem. But there's no generic solution for that, IMHO, since the script might not have execute permissions at all, or the shell "sh" might not be a Bash, etc. Maybe this just needs to be seen as "implementation specific".

Icy, 2010/08/11 17:49

Tim has a very nice tool for locking: http://timkay.com/solo/. Of course, his 'solo' isn't a Bash way ;)

Alan, 2011/11/15 09:08

Er, you could just use lockf(1) on systems that provide it. It's available on current versions of Mac OS X (Darwin), FreeBSD, etc., and should be available on other Unix and Unix-like platforms. It's probably not NFS-safe... but that's why things like /tmp exist. http://www.freebsd.org/cgi/man.cgi?query=lockf&manpath=FreeBSD+2.2.8-RELEASE&arch=default&format=html
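
For illustration, a minimal invocation of lockf(1) as Alan describes (the lock file path and the script name are placeholders) might be:

# run myscript.sh only if /tmp/myscript.lock can be locked;
# -t 0 makes lockf give up immediately instead of waiting
lockf -t 0 /tmp/myscript.lock ./myscript.sh "$@"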

Val, 2011/12/16 18:38

I just found this now and I want to add quite an important note to this excellent topic. Using signals in Bash is risky: Bash usually doesn't handle signals while it waits for a subprocess to end (it actually waits for the subprocess to finish and only AFTER that fires the signal handlers). So the TERM trap there may or may not work all the time. The extra check for a stale or non-existent process in your code is very important, since it makes up for this (bad) Bash behaviour.
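
A small sketch illustrating the behaviour Val describes (a demo script, not part of the article's example): if you send SIGTERM while the foreground sleep runs, the trap typically fires only after sleep has finished; putting the child in the background and using the wait builtin lets the trap run promptly.

#!/bin/bash
# demo only: compare how quickly the TERM trap runs
trap 'echo "got SIGTERM" >&2; exit 1' TERM

# foreground child: the trap usually runs only after sleep finishes
sleep 60

# background child + wait: wait returns when the signal arrives,
# so the trap runs almost immediately
sleep 60 &
wait $!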

Mike Spooner, 2012/09/21 18:21

The shell "noclobber" option does *not* provide atomicity in almost all "noclobber-capable" shells, including Bash (at least up to version 3.00, the XPG4/5/6 shell, the POSIX 1003.2 shell, the various Korn shell implementations - ksh86, ksh88, CDE dtksh, Irix wksh, pdksh, ksh93 (at least upto revision 's') and also some C-Shell implementations including tcsh and Solaris csh. I'm not sure about zsh.

A system-call trace of all of these (where available) in action on Linux (using strace), Solaris (using truss), Irix (using par) and SunOS 4.1.3 (using trace) shows that *all* of the above shells perform noclobber protection by calling stat() or fstat() to test for the presence of the file and *then*, if stat() fails, calling open(…, O_CREAT) - *without* O_EXCL. There's a small window of opportunity there; I've seen it accidentally "exploited" many times.

The correct way for Bash, ksh, et al to do this would be to ditch the stat()/fstat() call and simply do:

   open(..., O_CREAT|O_EXCL);

but O_EXCL is not available on *very* old 1980s UNIXen (it was standardised by POSIX.1-1988, and was pretty much universally available by 1991). Some systems accept but do not honour O_EXCL for remotely-mounted files, so you can only absolutely rely on it for local files.
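
One way to check this for your own Bash on a Linux machine is to trace a noclobber redirection and look for O_EXCL in the open()/openat() calls; this is just a sketch, and the exact syscall names vary by platform and shell version:

rm -f /tmp/noclobber-test
strace -f -e trace=open,openat,stat,newfstatat \
    bash -c 'set -C; echo locked > /tmp/noclobber-test' 2>&1 | grep noclobber-test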

Jan Schampera, 2012/09/21 21:41

Hello Mike,

thank you very much. I have to admit I never traced the syscalls Bash does there, nor checked the code. Instead, I blindly assumed it does the open() like you suggest.

This is indeed a race condition that kills the atomicity concept here. I'll fix that.

Stepan Vavra, 2012/10/31 02:24

I know you warned readers that the script's attempt to find out whether the locking instance is still active is not reliable. I'm not sure, however, whether you are really aware of the fact that when the lock is stale, you may remove a directory that actually belongs to another process - one that just removed the stale directory of the inactive process itself, but managed to recreate it first and thus has just obtained the lock!

Consider this situation (there are lots more examples similar to this one):

1. P1 is running in the CS (critical section) and has the lock

2. both P2 and P3 fail to obtain the lock

3. both P2 and P3 get the value of P1 pid by calling the cat command

4. Now, P1 ends and exits CS and thus removes the lock dir

5. both P2 and P3 run the kill command and they find out that the process P1 is not active

6. WLOG, P2 tries to remove the lock dir

7. P2 is still running and creates the lock dir

8. P2 entered CS

9. Now, P3 is scheduled by OS to run which means it removes the lock dir P2 created (FYI, the last thing P3 did was the kill command)

10. P3 creates the lock dir and enters CS

11. BOTH P2 and P3 ARE in CRITICAL SECTION

Apparently, when removing the directory you should be in a critical section, because reading a PID from a file and directory removal afterwards are two different commands and you never know what happened between them.

Jan Schampera, 2012/10/31 07:06

Thank you for your input.

I'm aware that reliably detecting and removing a "stale" lock is impossible with these mechanisms. In fact, the original variant didn't even mention this (thus removing a stale lock was up to the user).

A locking mechanism that reliably does that needs to perform the steps of 1. checking for 2. an active lock (which usually is implicit) and 3. locking, all in one atomic operation. The problem you describe above is that the "staleness check" (and stale-lock removal) isn't included in that very same operation. IMHO this can only be done with OS support (i.e. locks organized by the OS, like the typical lock files where you acquire a byte-range lock on the file, etc.). Unfortunately this isn't C :-) but several shell locking utilities access those or similar OS mechanisms, can protect your critical sections, and offer shell-usable methods on the userspace side.

(I wrote this in this detail for the interested reader, I'm sure you, Stepan, definitely know what I mean).
