This post was imported from my old WordPress installation. If you notice display problems, broken links or missing images, please just leave a comment here. Thanks.
Race conditions are nice: two (or more) tasks fighting over the same thing at the same time. Row locking on the database may help to avoid such races, but only if the database supports it.
MySQL with MyISAM claims to support table locks, but in a replication environment these locks quickly turn out to be database server (replication) locks: one SELECT locking a whole table blocks all write operations, all operations on other tables or databases have to wait in line, and other SELECTs on other tables or databases wait for their sources to become "fresh" again.
Today's requirement was double-run protection for scripts which may run twice or more at the same time. Locking the scripts themselves is not an option: a weekly job running for two hours would block the every-5-minutes job that runs for only seconds. All of these jobs search for special rows within the same table, and while each of them uses different WHERE conditions, they can still end up processing the same row (matching two or more condition combinations) twice at the same time.
My first thought: throw away MySQL, or at least MyISAM, and use a real database instead, but that's impossible. Next idea: add a column and UPDATE the current pid (process id) into that column WHERE id=$id_of_my_row AND pid_lock IS NULL. The DBI $sth->execute method returns the number of rows affected by the UPDATE. A positive value (which must be 1 because id is a unique primary key) would mean "this row is mine and I may go on processing it", while a zero return value would tell me "someone else claimed this one before you, or it's gone since you SELECTed it, so skip this row". Two facts stopped this idea: adding a column to a heavily updated 50 million row table (it's not my table layout) would freeze the whole system for quite a while, and every write operation is expensive on a MySQL cluster, both because of the lock risk explained above and because every slave repeats the same operation at a very low (inefficient) level. This kind of locking would also involve two UPDATEs, one setting the lock and one releasing it, and would still be vulnerable to crashing tasks leaving their locks behind forever. The processes are spread over different servers, so kill 0,$pid won't work for detecting whether a locking process is still running.
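For illustration only, here is roughly what that rejected approach would have looked like with DBI. The table name items, the pid_lock column and the connection details are invented for this sketch; only $sth->execute's affected-row count is the point.

use DBI;

# Hypothetical table "items" with an added "pid_lock" column;
# connection details are placeholders, not the real setup.
my $dbh = DBI->connect('DBI:mysql:database=jobs;host=dbhost', 'user', 'password',
                       { RaiseError => 1 });

my $id_of_my_row = 42;    # example id from an earlier SELECT

my $sth = $dbh->prepare(
    'UPDATE items SET pid_lock = ? WHERE id = ? AND pid_lock IS NULL'
);

# execute returns the number of rows changed ("0E0" for zero rows,
# which is true as a string but 0 numerically).
my $claimed = $sth->execute($$, $id_of_my_row);

if ($claimed == 1) {
    # The row is ours: process it ...
    # ... then release the lock with a second UPDATE.
    $dbh->do('UPDATE items SET pid_lock = NULL WHERE id = ?', undef, $id_of_my_row);
}
else {
    # Someone else claimed the row first, or it vanished since the SELECT.
}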
We're using a really big Memcached cluster which is really fast and doesn't have any of the MySQL replication problems (because Memcached clusters don't have any replication at all), so why not use it for locking?
The scripts perform three steps (a rough sketch follows the list):
- Fetch a thousand rows from the table which are candidates for processing because they match a specific WHERE statement.
- Check all of them for previous processing runs (already recorded in the database) which might collide with the current one; checking 1000 items in one query is much faster than checking 1000 items in 1000 queries.
- Run a deep search on every single row in a for loop to check all processing requirements (which can't be checked by the simple WHERE statement in step 1) and process the ones eligible for this run.
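As a rough skeleton of that structure, under invented names (the table items, its status column, and the helpers check_previous_runs, deep_requirements_check and process_row are all made up for illustration, not the real code):

# Step 1: fetch up to 1000 candidate rows with one WHERE statement.
my $candidates = $dbh->selectall_arrayref(
    'SELECT id FROM items WHERE status = ? LIMIT 1000',
    { Slice => {} }, 'pending'
);

# Step 2: check all of them against previous runs in a single query.
my %collides = check_previous_runs($candidates);

# Step 3: deep per-row checks and the actual processing.
for my $row (@$candidates) {
    next if $collides{ $row->{id} };
    next unless deep_requirements_check($row);   # what the WHERE can't express
    process_row($row);
}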
Between steps 1 and 2, inside a for loop which already does some simple Perl-internal preparations for the 1000 rows:
# Lock one row, skip this row if the lock can't be claimed
$memcached->add('Modulename_lock_'.$row_id, 1, 600) or next;

At the end of step 3, just before the end of the loop (don't forget any next jumps if you have them):
$memcached->delete('Modulename_lock_'.$row_id); # Remove the lock

The first line adds a new lock on the Memcached cluster which will last for 600 seconds (10 minutes). The add call will fail and return false if a key with this name already exists (read: another process holds the lock). The second line removes the lock from the server, freeing the record for other tasks.
Each task will skip items it can't lock; they'll be checked again in the next run, which is quite ok for this project. I decided to delete the locks individually because the early records are freed much faster and the whole system benefits from this. I didn't place the locking between steps 2 and 3 because running step 2 on 1000 rows at once is nearly 1000 times faster than checking rows one by one.
The lock is released after 10 minutes even if a process crashes, simply because the key stored in Memcached times out after those 10 minutes.
But what if processing 900 records already takes 10 minutes? Usually it doesn't, but we're running on a MySQL cluster, other jobs may eat up the CPU time, and step 3 might even wait for external servers to respond, which can easily add up to a longer-than-usual runtime.
Two more lines solved the problem: instead of checking and refreshing the lock for each row, I'd rather skip the unprocessed part of this 1000-row block; those rows will be caught in the next run. I introduced a fixed timestamp, which Memcached handles quite easily, by adding a line before the locking loop:
my $lock_expire = time + 600; # 10 minutes from now

The lock command slightly changed:
# Lock one row, skip this row if the lock can't be claimed
$memcached->add('Modulename_lock_'.$row_id, 1, $lock_expire) or next;

A simple check is required in the step 3 for loop:
last if $lock_expire <= time;

The lock-releasing delete call doesn't change. All locks are now valid until a fixed timestamp, and the last line breaks the loop once the time is up and the locks are no longer guaranteed to be in place.
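To show the moving parts in context, here is a hedged sketch of the whole flow with the locking in place. It assumes a Cache::Memcached-style $memcached handle and reuses the invented $candidates structure and helper names from the skeleton above; it is not the original code.

my $lock_expire = time + 600;              # all locks valid until this timestamp

# Between step 1 and step 2: per-row preparations, including the lock claim.
my @locked;
for my $row (@$candidates) {
    # ... simple Perl-internal preparations per row ...
    $memcached->add('Modulename_lock_' . $row->{id}, 1, $lock_expire) or next;
    push @locked, $row;
}

# Step 2: check the locked rows against previous runs in one query.
my %collides = check_previous_runs(\@locked);

# Step 3: deep checks and processing, bounded by the fixed lock timestamp.
for my $row (@locked) {
    last if $lock_expire <= time;          # locks no longer guaranteed, stop here

    if (!$collides{ $row->{id} } && deep_requirements_check($row)) {
        process_row($row);
    }

    # Release the lock at the end of every iteration, whether or not the
    # row was processed, so no next jump can leave a lock behind.
    $memcached->delete('Modulename_lock_' . $row->{id});
}

Rows skipped by the last simply keep their locks until the Memcached timeout frees them, which matches the behaviour described above.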
There is one drawback to consider: Memcached servers are not a database, they may fail, and everything must keep running without them. This solution doesn't, because add also returns false if the server didn't answer; keep this in mind when using the sample above. Workaround: set a test key with a two-second lifetime at the beginning of the script and get it right after setting it. Any get result which is not undef clearly shows that Memcached is running, and the script can fall back to some other behaviour if the server is offline.
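A minimal sketch of such a self-test, assuming a Cache::Memcached-compatible client; the key name is invented:

# Write a short-lived test key and read it back to see whether the
# Memcached cluster answers at all.
$memcached->set('Modulename_selftest', 1, 2);        # two-second lifetime
if (defined $memcached->get('Modulename_selftest')) {
    # Memcached answered: the locking shown above is safe to use.
}
else {
    # No answer: decide whether to run without row locks or abort the job.
    warn "memcached not reachable, falling back\n";
}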
Do you have some Perl experience, want to see this running in real life and consider working in Germany? Send me a mail and you might become part of our team very soon.
Thanks to Torben for bringing this issue to my attention and giving me a reason to blog.
1 comment
Steffen Mueller
12.06.2012 22:55
For this kind of locking in Perl, have a look at IPC::ConcurrencyLimit on CPAN. We wrote it at booking.com and use it quite heavily in various environments. There are multiple locking backends, including the simple local flock based one, a mysql GET_LOCK based backend, and an NFS-based one whose underlying algorithm I've successfully used in a cluster of over 200 nodes at a previous job. For super heavy duty, coarse grained locking, check out Apache ZooKeeper. There's a halfway working ZooKeeper locking backend for IPC::ConcurrencyLimit in my github account, too. Patches welcome.