RejectedSoftware Forums

Read write mutex implementation for fibers

For a personal little project, I'm trying to build a rather simple CMS based on a Git repository rather than a complicated database schema. To avoid potential conflicts caused by concurrent file reads/writes, I thought I could use a fiber-affine counterpart to core.sync.rwmutex.

Since I am not aware of such a module already existing, I thought I'd try to write it myself. You can find the result of my attempt in this gist.
So far, I've done some simple print-based testing using the small application in the gist (using version RWMutexPrint). Unfortunately, I currently face issue 1250 on my machine (Arch Linux) and therefore didn't get around to testing the mutex using the HTTPServerOption.distribute option.

Since I'm not (yet) too familiar with D and vibe.d, I'd be glad if someone more adept in the internals of the library could have a glance at the code and point out any potential pitfalls.
Thank you in advance! :)

PS: I just registered in this forum and got a server error 500 back. Since I don't know who exactly might be interested in this, I thought I'd mention it here.

Re: Read write mutex implementation for fibers

On Wed, 23 Sep 2015 12:49:26 GMT, Manuel Frischknecht wrote:

For a personal little project, I'm trying to build a rather simple CMS based on a Git repository rather than a complicated database schema. To avoid potential conflicts caused by concurrent file reads/writes, I thought I could use a fiber-affine counterpart to core.sync.rwmutex.

Since I am not aware of such a module already existing, I thought I'd try to write it myself. You can find the result of my attempt in this gist.
So far, I've done some simple print-based testing using the small application in the gist (using version RWMutexPrint). Unfortunately, I currently face issue 1250 on my machine (Arch Linux) and therefore didn't get around to testing the mutex using the HTTPServerOption.distribute option.

DMD 2.068.2 has been out for a few days, so it should be working again now.

Since I'm not (yet) too familiar with D and vibe.d I'd be glad if someone who is more adept in the internals of the library could have a glance at the code and point out any potential pitfalls.
Thank you in advance! :)

Code like this is always tricky because of all the corner cases in low-level multi-threaded code, so there are probably things that I haven't considered, but the general approach looks sound. The only potential issue that I saw is the use of atomic operations for changing the various counters. Since you already need to lock a mutex, it should be more efficient to simply reuse it for protecting the counters (in the unlock() code, using the mutex would amount to about two atomic ops, but it saves a few more during lock()). Functionally, everything looks correct.
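
For illustration, a minimal sketch of what reusing the mutex could look like; the names SketchState, m_mutex and m_activeReaders are assumptions for the example, not the gist's actual code:

```d
import core.sync.mutex : Mutex;

// Sketch only: the reader counter is a plain field protected by the
// mutex that the surrounding lock()/unlock() logic has to take anyway,
// instead of a shared counter updated with core.atomic.atomicOp.
final class SketchState
{
    private Mutex m_mutex;
    private uint m_activeReaders; // only touched while m_mutex is held

    this() { m_mutex = new Mutex; }

    void addReader()
    {
        m_mutex.lock();
        scope (exit) m_mutex.unlock();
        m_activeReaders++; // a plain increment suffices under the lock
    }

    void removeReader()
    {
        m_mutex.lock();
        scope (exit) m_mutex.unlock();
        m_activeReaders--;
    }
}
```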

WRT the code itself, writing template member functions as template(ARGS) { void fn(args) {} } looks like a C++ habit. I'd always prefer the shorter void fn(ARGS)(args) {} syntax if there are no technical reasons against it.
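
For illustration, the two equivalent forms side by side (lockImpl is a made-up name, not necessarily one from the gist):

```d
// Verbose eponymous-template form, reminiscent of C++ habits:
template lockImpl(bool INTERRUPTIBLE)
{
    void lockImpl() { /* ... */ }
}

// The equivalent D shorthand for the same function template:
void lockImpl(bool INTERRUPTIBLE)() { /* ... */ }
```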

The TaskReadWriteMutexImpl.Mutex class as a non-static nested class already has access to the _state member of TaskReadWriteMutex, so you can either remove its own _state field, or declare it as static class Mutex(...). The latter is probably slightly more efficient, not that it would matter much at that scale*. Along those lines, I'd also declare the classes final, so that virtual calls to lock and unlock can be elided.
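
A rough sketch of the static variant; TaskReadWriteMutexImpl and _state come from the discussion above, everything else is assumed:

```d
// Illustrative stub for the shared state the nested class operates on.
class ReadWriteMutexState { /* counters, mutex, conditions ... */ }

final class TaskReadWriteMutexImpl
{
    private ReadWriteMutexState _state;

    // static: no hidden context pointer to the outer instance; the state
    // is passed in explicitly instead. final allows calls to lock() and
    // unlock() to be devirtualized.
    static final class Mutex
    {
        private ReadWriteMutexState _state;

        this(ReadWriteMutexState state) { _state = state; }

        void lock() { /* acquire via _state ... */ }
        void unlock() { /* release via _state ... */ }
    }
}
```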

Finally, since TaskReadWriteMutexImpl is private, the TaskReadWriteMutex aliases would have no meaningful member documentation in the API reference. Unfortunately, some code duplication will be necessary here for purely "cosmetic" reasons (making them classes and declaring reader and writer in both of them instead of in the base class).

PS: I just registered in this forum and git a server error 500 back. Since I don't know who exactly might be interested in this, I thoght I'd mention it here.

Do you perhaps still remember if there was any more specific error message? Unfortunately I haven't implemented error logging for the forum, yet.

* Thinking about it, another possibility would be to keep them non-static, but convert ReadWriteMutexState to a struct. That saves another GC allocation, and access should be as efficient as before (single indirection). The struct should probably get a disabled postblit constructor (@disable this(this);) to prevent accidental copying.
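
A sketch of that footnote (the struct fields are illustrative, not the gist's actual ones):

```d
// The shared state as a struct embedded in the outer class: one GC
// allocation less, and access stays a single indirection through `this`.
struct ReadWriteMutexState
{
    @disable this(this); // forbid accidental copies of the sync state

    uint activeReaders;
    bool writerActive;
}

final class TaskReadWriteMutexImpl
{
    private ReadWriteMutexState _state;

    // non-static nested class: reaches _state through the implicit
    // pointer to the outer instance, so it needs no field of its own
    class Mutex
    {
        void lock() { /* ... operates on the outer _state ... */ }
        void unlock() { /* ... */ }
    }
}
```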

Re: Read write mutex implementation for fibers

Thank you very much for your feedback! Especially for pointing out the short-hand template notation and the nested class access to parent members - I'll have a closer look at that part of the language documentation :)

I've implemented most of your recommendations (and updated the gist), but faced some questions on the way. Assuming those get resolved, do you think the class would be fit for inclusion into vibe.d? If so, I could file a pull request...

atomicOp vs. interrupt()

Since you already need to lock a mutex, it should be more efficient to simply reuse it for protecting the counters (in the unlock() code, using the mutex would amount to about two atomic ops, but it saves a few more during lock()).

As far as I understand it, Task.interrupt() causes an exception to be thrown at the next yielding instruction of a fiber. I reckoned this could cause an interruption to occur before the counter is decremented in unlock(), thus resulting in a potential deadlock. Am I mistaken in this regard?
As a possible tradeoff, I could see using only a non-interruptible TaskMutex to guard the counters, which would mean that lock() could only be interrupted while performing a wait() operation.
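
To illustrate the concern, a hypothetical sketch (not the gist's code) of how an interruptible guard could leak a counter update:

```d
import vibe.core.sync : InterruptibleTaskMutex;

final class HazardSketch
{
    private InterruptibleTaskMutex m_mutex;
    private uint m_activeReaders;

    this() { m_mutex = new InterruptibleTaskMutex; }

    void unlockRead()
    {
        // Task.interrupt() could throw inside this call, so the
        // decrement below would never run and other tasks could block
        // on the stale counter forever.
        m_mutex.lock();
        scope (exit) m_mutex.unlock();
        m_activeReaders--;
    }
}
```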

shared
Also, I ran into a problem when I split apart the definitions of TaskReadWriteMutex and InterruptibleTaskReadWriteMutex (i.e. deleted the alias statements): the compiler is no longer able to infer the implications of using them as shared objects. Since shared is transitive, I then basically have to "share all the things", and I'm no longer able to properly override the core.sync.mutex.Mutex interface, which has no shared methods defined. Also, constructing a shared TaskMutex seems rather tricky.
For now, I got around the problem by stripping the shared attribute off the mutex in app.d, but I'm not sure that's a particularly good idea. It seems to work as a member of a web interface object, but fails if the mutex is defined globally (presumably because it then resides in TLS and appears as a null reference in other threads).

issue 1280
On Sat, 26 Sep 2015 07:12:49 GMT, Sönke Ludwig wrote:

DMD 2.068.2 is out since a few days, so it should be working again now.

Sadly, the issue still persists on my machine. The Arch package claims it has the newest version (dmd-2.068.2-1), but I still get the same error whenever I uncomment the distribute line in my gist:

λ manuel-laptop rwmutex → dub
Performing "debug" build using dmd for x86_64.
vibe-d 0.7.25: target for configuration "libevent" is up to date.
foo ~master: building configuration "application"...
Linking...
To force a rebuild of up-to-date targets, run again with --force.
Running ./foo 
Worker thread terminated due to uncaught error: No event driver loaded. Did the vibe.core.core module constructor run?
Worker thread terminated due to uncaught error: No event driver loaded. Did the vibe.core.core module constructor run?
Program exited with code 255

Sadly, the compiler doesn't print out the whole version string. That said, I did some tests using gdc and ldc and haven't seen any strange behavior so far.

λ manuel-laptop rwmutex → dmd --version
DMD64 D Compiler v2.068
Copyright (c) 1999-2015 by Digital Mars written by Walter Bright

Registration error

Do you perhaps still remember if there was any more specific error message? Unfortunately I haven't implemented error logging for the forum, yet.

Sadly, I didn't pay too much attention at the time, but it seemed like a very generic internal server error message to me.

Re: Read write mutex implementation for fibers

On Sat, 26 Sep 2015 14:26:24 GMT, Manuel Frischknecht wrote:

Sadly, the compiler doesn't print out the whole version string.

Update: Turns out, this was due to some caching issue. After deleting all vibe folders in ~/.dub/packages, everything seems to work fine.

Re: Read write mutex implementation for fibers

On 26.09.2015 at 20:44, Manuel Frischknecht wrote:

On Sat, 26 Sep 2015 14:26:24 GMT, Manuel Frischknecht wrote:

Sadly, the compiler doesn't print out the whole version string.

Update: Turns out, this was due to some caching issue. After deleting all vibe folders in ~/.dub/packages, everything seems to work fine.

Right, I didn't think of that. The problem is that DUB currently only recognizes the minor version of the compiler (2.068), but not the patch version, because that is what the __VERSION__ variable contains. Thus any cached builds from a previous patch version will erroneously be reused.
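
For illustration, based on the statement above:

```d
// __VERSION__ encodes only major and minor version, so e.g. DMD 2.068.0
// and DMD 2.068.2 report the same value and look identical to the cache:
pragma(msg, __VERSION__); // 2068 under any DMD 2.068.x; the patch level is lost
```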

Re: Read write mutex implementation for fibers

On 26.09.2015 at 16:26, Manuel Frischknecht wrote:

Thank you very much for your feedback! Especially for pointing out the short-hand template notation and the nested class access to parent members - I'll have a closer look at that part of the language documentation :)

I've implemented most of your recommendations (and updated the gist), but faced some questions on the way. Assuming those get resolved, do you think the class would be fit for inclusion into vibe.d? If so, I could file a pull request...

It looks like a logical extension to the current functionality and the code looks good, too, so this definitely would be a very welcome addition!

atomicOp vs. interrupt()

Since you already need to lock a mutex, it should be more efficient to simply reuse it for protecting the counters (in the unlock() code, using the mutex would amount to about two atomic ops, but it saves a few more during lock()).
As far as I understand it, Task.interrupt() causes an exception to be thrown at the next yielding instruction of a fiber. I reckoned this could cause an interruption to occur before the counter is decremented in unlock(), thus resulting in a potential deadlock. Am I mistaken in this regard?
As a possible tradeoff, I could see using only a non-interruptible TaskMutex to guard the counters, which would mean that lock() could only be interrupted while performing a wait() operation.

Actually, I'd even just use a normal Mutex in this case, as the fiber never yields while it is locked, so the Task part of the functionality isn't really necessary anyway. It should also be a bit more efficient.
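
In code, that suggestion would amount to something like this (class and field names are assumed for the example):

```d
import core.sync.mutex : Mutex;

// druntime's plain Mutex instead of a fiber-aware TaskMutex: the guarded
// sections only update counters and never yield the calling fiber, so
// the cheaper lock is sufficient.
final class CounterGuardSketch
{
    private Mutex m_counterGuard;

    this() { m_counterGuard = new Mutex; }
}
```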

shared
Also, I ran into a problem when I split apart the definitions of TaskReadWriteMutex and InterruptibleTaskReadWriteMutex (i.e. deleted the alias statements): the compiler is no longer able to infer the implications of using them as shared objects. Since shared is transitive, I then basically have to "share all the things", and I'm no longer able to properly override the core.sync.mutex.Mutex interface, which has no shared methods defined. Also, constructing a shared TaskMutex seems rather tricky.
For now, I got around the problem by stripping the shared attribute off the mutex in app.d, but I'm not sure that's a particularly good idea. It seems to work as a member of a web interface object, but fails if the mutex is defined globally (presumably because it then resides in TLS and appears as a null reference in other threads).

Yeah, the shared situation generally still is very unfortunate in many ways (especially since proper deprecation/upgrade paths to improve it are often difficult). But did you try to make the two classes template classes with an empty template parameter list? That should bring back the attribute inference of the old code (attributes are only inferred within templated functions and aggregates).
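
A minimal sketch of that idea; the class body is elided, and the zero-argument instantiation shows what use sites would then have to do:

```d
// Empty template parameter list: the class becomes a zero-parameter
// template, so attribute inference applies to its member functions again.
final class TaskReadWriteMutex()
{
    void lock() { /* ... */ }
    void unlock() { /* ... */ }
}

void example()
{
    auto m = new TaskReadWriteMutex!(); // explicit (empty) instantiation
}
```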

(...)

Registration error

Do you perhaps still remember if there was any more specific error message? Unfortunately I haven't implemented error logging for the forum, yet.

Sadly, I didn't pay too much attention at the time, but it seemed like a very generic internal server error message to me.

I reproduced it now, only to discover that this had actually already been fixed in code quite a while ago ;) The current version of the forums with the fix included is deployed now. Thanks for the notice!

Re: Read write mutex implementation for fibers

Again many thanks for your advice!

I've added the mentioned changes and filed a pull request here. There are still some caveats, but it seems easier for me to comment on the relevant lines through the GitHub UI.