RejectedSoftware Forums

TaskMutex lock behavior

Look at this code:

import vibe.d;

__gshared TaskMutex mutex;

void foo(string name) {
	synchronized(mutex) sleep(2.seconds);
	logInfo(name);
}

shared static this() {
	mutex = new TaskMutex;
	runTask({
		while(true) foo("task 1");
	});
	runTask({
		while(true) foo("task 2");
	});
	runTask({
		while(true) foo("task 3");
	});
}

In this case tasks 2 and 3 never get control. The output is:

task 1
task 1
task 1
...

A workaround is to call yield() after unlocking the mutex:

void foo(string name) {
	synchronized(mutex) sleep(2.seconds);
	yield();
	logInfo(name);
}

Is this a bug or a feature?

Re: TaskMutex lock behavior

On Fri, 04 Apr 2014 09:26:26 GMT, Jack Applegame wrote:

Is this a bug or a feature?

The mutex here is a wall that forces other tasks to yield when they try to lock it (whichever thread they're in). Once the lock is freed, the waiters are notified and resumed on the next run of the event loop.

However, this all seems to happen within a single thread, and your first task blocks this thread because it's in an infinite loop. Leaving a synchronization scope shouldn't force a task to yield: even though a context switch is very light, it shouldn't be abused. You may see a difference if you actually bring some multi-threading into the equation using runWorkerTask.
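
For example, a minimal sketch of that (untested, assuming the default worker thread pool):

import vibe.d;

__gshared TaskMutex mutex;

void worker(string name) {
	while (true) {
		synchronized(mutex) sleep(2.seconds);
		logInfo(name);
	}
}

shared static this() {
	mutex = new TaskMutex;
	// each loop now runs as a task on a worker thread instead of the
	// main thread, so one busy task can't monopolize the scheduler
	runWorkerTask(&worker, "task 1");
	runWorkerTask(&worker, "task 2");
	runWorkerTask(&worker, "task 3");
}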

Re: TaskMutex lock behavior

On Sat, 05 Apr 2014 03:43:07 GMT, Etienne Cimon wrote:

The mutex here is a wall that forces other tasks to yield when they try to lock it (whichever thread they're in). Once the lock is freed, the waiters are notified and resumed on the next run of the event loop.

I know how a mutex works. My main question is: is this the planned behavior, or does it need to be fixed?

Re: TaskMutex lock behavior

On Sat, 05 Apr 2014 04:42:52 GMT, Jack Applegame wrote:

I know how a mutex works. My main question is: is this the planned behavior, or does it need to be fixed?

Asking if it's the planned behavior of a mutex comes down to checking whether this is really how a mutex works. I've read enough about the subject to answer that yes, this is the planned behavior - unless vibe.d has a hidden agenda. The reason you're seeing this problem is that the planned behavior covers heavy i/o, not heavy CPU (unless you yield frequently). This is a problem I'm exploring right now - a solution would be to use a compute proxy thread with a dedicated task pool, which would also be great for a thread-blocking operation that involves some i/o.
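
That doesn't exist in vibe.d yet, but to illustrate the idea, a hypothetical sketch could hand the heavy job to a plain thread and signal the waiting task through a ManualEvent (all names here are illustrative, untested):

import vibe.d;
import core.thread : Thread;

// hypothetical sketch only - not an existing vibe.d facility
int runHeavyJob() {
	auto done = createManualEvent();
	auto count = done.emitCount;
	int result;
	new Thread({
		result = 42; // stand-in for heavy CPU work, off the event loop thread
		done.emit(); // ManualEvent.emit() may be called from any thread
	}).start();
	done.wait(count); // yields this task until the thread signals
	return result;
}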

Re: TaskMutex lock behavior

On Sat, 05 Apr 2014 14:05:27 GMT, Etienne Cimon wrote:

The reason you're seeing this problem is that the planned behavior covers heavy i/o, not heavy CPU (unless you yield frequently).

I'm saying it's like a heavy compute job because if you factor out the mutex, you'd essentially be doing this:

runTask({ while(true) logInfo("task 1"); });

Re: TaskMutex lock behavior

In other words: in a single-threaded configuration, a task never yields and just locks the mutex again and again if there are no blocking operations between unlocking and locking - even if a thousand other tasks are waiting for the mutex.
This looks strange to me, because trying to lock the mutex is a blocking operation and should force the task to yield if the mutex is contested and other tasks are already waiting.

Re: TaskMutex lock behavior

On Sat, 05 Apr 2014 15:56:52 GMT, Jack Applegame wrote:

In other words: in a single-threaded configuration, a task never yields and just locks the mutex again and again if there are no blocking operations between unlocking and locking - even if a thousand other tasks are waiting for the mutex.
This looks strange to me, because trying to lock the mutex is a blocking operation and should force the task to yield if the mutex is contested and other tasks are already waiting.

Yes, but usually you have some activity in that task: packets going in or out, sleeps occurring, events being waited for. These all yield the task, and when they're done the tasks finish. It may be preferable to sleep outside of your lock to simulate these events. For anything that involves heavy compute, like image or video rendering, you're better off using a new thread or process and avoiding tasks altogether, or you might block your i/o event loop (i.e. the webserver).
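
In the original example, that would mean keeping the critical section short and doing the yielding work outside of it, e.g.:

void foo(string name) {
	synchronized(mutex) {
		// keep the critical section short - no yielding work in here
	}
	sleep(2.seconds); // yields the task, so waiters can acquire the lock
	logInfo(name);
}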

Re: TaskMutex lock behavior

On 05.04.2014 06:42, Jack Applegame wrote:

I know how a mutex works. My main question is: is this the planned behavior, or does it need to be fixed?

My first idea was in line with Etienne's: the behavior in this case is expected in the sense that it's "implementation dependent". Everything else would potentially incur overhead in many places where it isn't necessary. It would also mean that the "unlock" operation becomes "blocking", which can be very counter-intuitive.

On the other hand, I see that this may cause a very unexpected and hidden kind of bug, which may actually be a pretty strong counter-argument. How bad was your experience with it in this case?

Then again, it's a good indicator of practically serial code that should possibly be running in the same task architecturally, saving the overhead of a mutex in the first place.

Re: TaskMutex lock behavior

On Sun, 06 Apr 2014 09:35:51 +0200, Sönke Ludwig wrote:

How bad was your experience with it in this case?

Not very bad. I expected something like that.

I'm developing quite a large application with heavy i/o to/from external web services, and I need something like a task queue. The order is not important, but no task should wait forever.
Right now I use a TaskMutex with yield() after unlocking, because there are no blocking operations between locking and unlocking the mutex. But I don't like this approach. As I understand it, a generic platform-independent mutex doesn't guarantee any order of acquiring locks.
What would you recommend? Using TaskMutex, or would it be better to write some kind of task queue?
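
To make the question concrete, the kind of thing I have in mind is a ticket-based wrapper that hands out the lock in FIFO order - a hypothetical sketch built on TaskMutex and TaskCondition, not an existing vibe.d type:

import vibe.d;

// hypothetical sketch - hands out the lock strictly in arrival order
class FairTaskMutex {
	private TaskMutex m_mutex;
	private TaskCondition m_cond;
	private ulong m_nextTicket; // next ticket to hand out
	private ulong m_nowServing; // ticket currently allowed to proceed

	this() {
		m_mutex = new TaskMutex;
		m_cond = new TaskCondition(m_mutex);
	}

	void lock() {
		synchronized(m_mutex) {
			auto ticket = m_nextTicket++;
			while (ticket != m_nowServing)
				m_cond.wait(); // releases m_mutex while waiting
		}
	}

	void unlock() {
		synchronized(m_mutex) {
			m_nowServing++;
			m_cond.notifyAll(); // wake all waiters; only the next ticket proceeds
		}
	}
}

foo() would then call fairMutex.lock() with a scope(exit) fairMutex.unlock() around the critical section. Whether that beats an explicit task queue probably depends on how much state the jobs share.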