On Fri, 07 Feb 2014 19:05:16 GMT, Kelet wrote:

Thank you Sönke and Stefan for the replies!

I guess my thought is that, compared to just throwing function pointers or delegates into a queue to be run, fibers would come out with similar performance (if not better) for a certain class of situations. Is this true? Basically, if sleep is unused and yields are unused (let us assume the functions all execute fast and thus don't need to yield), then would just maintaining a queue of function pointers/delegates be faster? Or am I misunderstanding?

Simply throwing function pointers into a queue is faster, or if not faster, it certainly uses less memory. But it's a much harder model to program to, and making sense of someone else's code written this way is tricky. You're basically building a state machine by hand to compensate for not having a persistent stack to store state on, and this ends up as either a gigantic switch statement (unmaintainable, but at least the program flow is easy to follow) or a bunch of loosely related function pointers (super flexible, but difficult to follow and with next to no context info when an error occurs).
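
To make that concrete, here's a rough sketch of the queue-of-delegates style in plain D (Conn, Step and the fake "event loop" are all made-up names, not anything from vibe.d): every point where straight-line code would block becomes an explicit state, and progress is made by re-queuing a delegate for the next step.

import std.container : DList;
import std.stdio : writeln;

// Hypothetical per-connection state machine: each step that would
// block becomes an explicit state saved on the connection.
enum Step { readHeader, readBody, writeResponse, done }

struct Conn
{
    Step step = Step.readHeader;
}

void main()
{
    DList!(void delegate()) queue;
    auto conn = new Conn;

    // The handler is a switch over the saved state rather than
    // straight-line code running on a fiber's stack.
    void delegate() handler;
    handler = () {
        final switch (conn.step)
        {
            case Step.readHeader:
                writeln("read header");
                conn.step = Step.readBody;
                queue.insertBack(handler); // pretend the read completed
                break;
            case Step.readBody:
                writeln("read body");
                conn.step = Step.writeResponse;
                queue.insertBack(handler);
                break;
            case Step.writeResponse:
                writeln("write response");
                conn.step = Step.done;
                break;
            case Step.done:
                break;
        }
    };
    queue.insertBack(handler);

    // Event-loop stand-in: drain whatever work has been queued.
    while (!queue.empty)
    {
        auto work = queue.front;
        queue.removeFront();
        work();
    }
}

All the interesting context (how far along the request is, what was read so far) has to live in Conn, because there's no stack frame surviving between steps.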

The real advantage of using fibers like vibe.d does is that it dramatically lowers the barrier to entry while still maintaining comparable performance. You'll probably end up with more cache misses since each fiber has its own stack, but backtraces are suddenly possible and you can have people who don't understand async programming working on the code.
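
The same three steps written fiber-style read as ordinary straight-line code. A sketch, assuming I have the vibe.d names right (runTask, sleep, runEventLoop/exitEventLoop from vibe.core.core), with sleep standing in for a blocking read:

import core.time : msecs;
import std.stdio : writeln;
import vibe.core.core : exitEventLoop, runEventLoop, runTask, sleep;

void main()
{
    runTask({
        // Straight-line code: locals live on this fiber's stack across
        // every blocking point, so there's no explicit state to save.
        writeln("read header");
        sleep(10.msecs);   // stand-in for a blocking read; yields this fiber
        writeln("read body");
        sleep(10.msecs);
        writeln("write response");
        exitEventLoop();
    });
    runEventLoop();
}

Each sleep (or real read/write) yields the fiber and lets the event loop run other connections, but from the programmer's point of view it just blocks.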

One thing I'm not entirely sure of is how vibe.d handles outbound connections. Say I get a request from a user. This request comes in over some client connection and so is backed by its own fiber. Now what if I need to issue a request to some other service while processing this transaction? Any I/O on the secondary connection needs to yield and eventually resume on that client connection's stack. I imagine vibe.d manages this automatically, but it's a bit further than I've gotten in my own experimentation. I suppose an alternative would be to spawn the outbound connection in its own fiber and use message passing to mediate between them. Then the client connection would block on receive() until that secondary request completes and sends back its response.
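
Something like this is what I have in mind for the message-passing variant. A rough sketch, assuming vibe.core.concurrency's send/receiveOnly and Task.getThis() work the way I think they do; queryBackend is a made-up stand-in for the actual outbound request (requestHTTP, connectTCP or whatever it ends up being):

import std.stdio : writeln;
import vibe.core.concurrency : receiveOnly, send;
import vibe.core.core : exitEventLoop, runEventLoop, runTask;
import vibe.core.task : Task;

// Made-up stand-in for talking to the secondary service.
string queryBackend(string request)
{
    return "result for " ~ request;
}

void main()
{
    runTask({
        // This task plays the role of the client-connection fiber.
        auto me = Task.getThis();

        // Spawn the outbound request in its own task and have it send
        // its result back as a message.
        runTask({
            send(me, queryBackend("some lookup"));
        });

        // Block (yielding only this fiber) until the secondary request
        // completes and sends its response.
        auto reply = receiveOnly!string();
        writeln("got: ", reply);
        exitEventLoop();
    });
    runEventLoop();
}

The nice part is that receive() only parks the client connection's fiber, so the rest of the event loop keeps servicing other connections while the secondary request is in flight.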