On Fri, 21 Dec 2012 00:21:25 GMT, punkUser wrote:

The Signal is usable in two ways:

  1. The normal way, wait()ing for the signal to be emitted. Multiple fibers can wait (block) for the signal, and all will continue when the signal is emitted.

Ok, so this is effectively what a TcpConnection will do when I "receive" N bytes: it blocks until that many bytes are available? So no signal other than the one it is waiting on will wake the fiber at that point, correct? That's actually the desirable behaviour, I think.

Yes, correct
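For illustration, the wait()/emit() semantics discussed above can be sketched with a plain condition variable (a Python sketch with threads standing in for fibers; the class below just mirrors the described behaviour and is not the actual vibe.d API):

```python
import threading

class Signal:
    """Sketch of a broadcast signal: emit() wakes every current
    waiter; wait() blocks until the emit count increases.
    Illustrative only -- not the vibe.d implementation."""
    def __init__(self):
        self._cond = threading.Condition()
        self.emit_count = 0

    def emit(self):
        with self._cond:
            self.emit_count += 1
            self._cond.notify_all()   # all blocked waiters continue

    def wait(self):
        with self._cond:
            start = self.emit_count
            while self.emit_count == start:
                self._cond.wait()     # block until emit count changes
```

The emit count doubles as the "was the signal emitted?" check mentioned below: after waking up, a fiber can compare the count against the value it saw before blocking.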

After rawYield() continues, the event that caused this can be indirectly determined by asking the TCP connection if data is available, or by asking the Signal if the emit count was increased.

Alright, that makes sense. I implemented this model with a simple queue of outgoing packets. The main loop checks for incoming data and consumes all of it if any is available (blocking until a full packet has arrived, etc.). After that, if anything is pending in the send queue, it flushes it all out (again, presumably blocking if the socket send buffer fills) and then does a rawYield. It's basically the same model as the broadcast example, except that I use one signal per client, fired whenever anything is added to that client's queue, since not all cross-client messages are broadcasts.
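One iteration of that per-client loop (drain all incoming packets, then flush the send queue) might look like the sketch below. `FakeConn`, `service_client`, and friends are hypothetical stand-ins, not vibe.d names; the real loop would do this between yields on the per-client signal:

```python
import queue

class FakeConn:
    """Minimal in-memory stand-in for a TCP connection
    (illustrative only -- not the vibe.d API)."""
    def __init__(self, incoming):
        self.incoming = list(incoming)
        self.sent = []
    def data_available(self):
        return bool(self.incoming)
    def read_packet(self):
        # in the real code this may block until a full packet arrives
        return self.incoming.pop(0)
    def write(self, pkt):
        # in the real code this may block if the send buffer fills
        self.sent.append(pkt)

def service_client(conn, send_queue, handled):
    """One pass of the per-client loop: consume everything
    incoming, then flush everything pending outgoing."""
    while conn.data_available():
        handled.append(conn.read_packet())
    while not send_queue.empty():
        conn.write(send_queue.get_nowait())
```

After this pass the fiber would yield (rawYield or a wait on the client's signal) until either new data arrives or another client pushes something onto this client's queue.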

Seems to work properly, so modulo figuring out a policy for maximum send queue size, etc., I'm happy :)

I consider this just a temporary compromise to make this kind of bidirectional broadcasting work at all, with the danger that it relies on how events are handled internally, which might (although probably not) change in the future or between different back ends.

Right, I'll put a big warning in my code that it might break in the future :) Presumably if it does break, the broadcast example will be updated with the new "best known method"?

One thing I thought about was writing a messaging API, compatible with std.concurrency, based on the fiber-aware Signal/Mutex classes. That would cover such use cases in a nice and safe way, and would also allow for painless migration if std.concurrency should ever support such fiber-aware thread primitives.
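The core of such a messaging API is just a mailbox built on the mutex/signal pair, as in this Python sketch (`Mailbox` and its methods are hypothetical names; the fiber-aware Mutex and Signal would play the role of the condition variable here):

```python
import threading
from collections import deque

class Mailbox:
    """Sketch of a std.concurrency-style per-task mailbox:
    send() enqueues and wakes a waiter, receive() blocks
    until a message is available. Illustrative only."""
    def __init__(self):
        self._cond = threading.Condition()
        self._msgs = deque()

    def send(self, msg):
        with self._cond:
            self._msgs.append(msg)
            self._cond.notify()       # wake one blocked receive()

    def receive(self):
        with self._cond:
            while not self._msgs:
                self._cond.wait()     # block until send() delivers
            return self._msgs.popleft()
```

With one mailbox per fiber, the per-client "signal fires when something is added to that client's queue" pattern above falls out for free, and code written against this interface could later migrate to a fiber-aware std.concurrency unchanged.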