On Sat, 08 Feb 2014 18:19:11 GMT, Kelet wrote:

Now, as I understand it, compared to just throwing around function pointers/delegates, the advantage of using Fibers in this architecture is that it makes it easier to debug.
Can someone clarify my assumptions or recommend why vibe-d might be a better fit than I think, or recommend another approach?

  • Fibers are most useful when you're waiting to send or receive data in any I/O operation. Even if it only takes a millisecond to send your response data to the client, the thread isn't handed back to the OS while the data is in flight; control returns to the application's event loop, which runs other tasks/fibers and sends more bytes on the fiber's next turn. A lot of web servers solve this by spawning a new process per request, but that uses more memory and is more segmented: the processes can't easily benefit from shared memory.

  • Fibers make it possible to read from and write to the same in-process memory, whereas process-per-request designs would have to use some form of IPC for that. (Both points are illustrated in the sketch after this list.)
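
As a concrete illustration of both points, here's a minimal vibe-d sketch. The handler name, the port, and the hitsPerPath map are my own for illustration, not something from your description: every fiber on the thread touches the same in-process associative array directly, and writing the response yields the fiber back to the event loop if the socket isn't ready, so other requests keep getting served.

    import std.format : format;
    import vibe.core.core : runEventLoop;
    import vibe.http.server;

    // Per-thread hit counter (D globals are thread-local by default) shared by
    // every fiber running on this thread -- plain memory access, no IPC, and no
    // locking either, because fibers on one thread never run concurrently.
    int[string] hitsPerPath;

    void handle(HTTPServerRequest req, HTTPServerResponse res)
    {
        // Shared in-process state, read and written directly.
        hitsPerPath[req.path] = hitsPerPath.get(req.path, 0) + 1;

        // writeBody hands the bytes to the event loop; if the client can't take
        // them all at once this fiber yields, and the thread serves other
        // requests until the socket is writable again.
        res.writeBody(format("%s requested %s time(s)\n",
                             req.path, hitsPerPath[req.path]), "text/plain");
    }

    void main()
    {
        auto settings = new HTTPServerSettings;
        settings.port = 8080;
        listenHTTP(settings, &handle);
        runEventLoop();
    }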

Multi-threaded request handling isn't the most interesting part here: even in a single-threaded setup you could still start a thread for a long operation and yield the fiber so another request can be handled in the meantime. I can't imagine request handling alone saturating all the CPUs; the bottleneck would have to be somewhere else.

It sounds like your collision detection algorithm is what would pull the most advantage out of multi-threading: if you write an efficient algorithm that distributes the work evenly across all CPUs, latency would be very low, but you'd still want to yield the fiber after starting that operation. The fiber that handles the web request and is waiting for the answer won't block other requests from coming in and being handled while it's yielded. You could even call a C library that already has optimal code for this, e.g. run the operations in OpenCV from a new task and yield; that would reduce latency to a minimum. A rough sketch of that pattern is below.
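
This is a hedged sketch only, following the runWorkerTask + send/receiveOnly pattern from vibe-d's documentation; detectCollisions, collisionWorker, handleCheck and the dummy request parsing are placeholder names I made up, and an OpenCV call would sit behind a function shaped like detectCollisions.

    import vibe.core.concurrency : receiveOnly, send;
    import vibe.core.core : runWorkerTask;
    import vibe.core.task : Task;
    import vibe.http.server;

    // Placeholder for the heavy CPU-bound work (collision detection, an OpenCV
    // routine behind a C binding, ...); it runs on a worker thread, not in the
    // event loop.
    bool detectCollisions(immutable(float)[] positions)
    {
        // ... number crunching distributed across the worker pool ...
        return false;
    }

    // Runs on a worker thread and sends the result back to the request fiber.
    void collisionWorker(Task caller, immutable(float)[] positions)
    {
        caller.send(detectCollisions(positions));
    }

    void handleCheck(HTTPServerRequest req, HTTPServerResponse res)
    {
        immutable(float)[] positions = []; // parsed from the request in real code

        // Start the computation on the worker thread pool, then wait for the
        // answer. receiveOnly yields this fiber, so the thread keeps accepting
        // and handling other requests while the worker crunches numbers.
        runWorkerTask(&collisionWorker, Task.getThis(), positions);
        immutable collided = receiveOnly!bool();

        res.writeBody(collided ? "collision\n" : "clear\n", "text/plain");
    }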