RejectedSoftware Forums


Request local allocations

I've been thinking for a while about implementing some form of request
local allocations in one of my vibe.d applications. It would use
std.experimental.allocator with a thread/fiber local static array as the
memory and malloc as a fallback allocator. My idea is that it would
work like a bump-the-pointer allocator which would, at the end of a
request, deallocate the memory by resetting the pointer to the beginning
of the buffer. To me this sounds really efficient, at least in theory,
especially since most data is request specific and will not be shared
between requests. I do have a couple of questions:

  1. To begin with, does this sound like a good idea in the first place?

  2. Is it enough if the array is thread local or does it need to be fiber
    local as well?

  3. How would I best integrate this if I use the REST API generator? As
    far as I understand vibe.d doesn't have any support for middleware or
    before/after hooks, which would be an ideal place for the setup and tear
    down of the buffers automatically.
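For concreteness, here is a minimal sketch of what I have in mind, using std.experimental.allocator's building blocks (the 64 KiB buffer size is arbitrary; InSituRegion does the pointer bumping and Mallocator is the fallback):

```d
import std.experimental.allocator.building_blocks.fallback_allocator : FallbackAllocator;
import std.experimental.allocator.building_blocks.region : InSituRegion;
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    // A static local is thread-local by default in D: a bump-the-pointer
    // region with malloc as the fallback for anything that doesn't fit.
    static FallbackAllocator!(InSituRegion!(64 * 1024), Mallocator) alloc;

    auto small = alloc.allocate(512);        // bumps the pointer
    assert(small.length == 512);

    auto big = alloc.allocate(1024 * 1024);  // too big, overflows to malloc
    assert(big.length == 1024 * 1024);
    alloc.deallocate(big);                   // malloc'd memory is freed normally

    // "Deallocation" at the end of the request: reset the bump pointer.
    alloc.primary.deallocateAll();
}
```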

/Jacob Carlborg

Re: Request local allocations

On Sun, 2 Apr 2017 12:41:54 +0200, Jacob Carlborg wrote:

I've been thinking for a while about implementing some form of request
local allocations in one of my vibe.d applications. It would use
std.experimental.allocator with a thread/fiber local static array as the
memory and malloc as a fallback allocator. My idea is that it would
work like a bump-the-pointer allocator which would, at the end of a
request, deallocate the memory by resetting the pointer to the beginning
of the buffer. To me this sounds really efficient, at least in theory,
especially since most data is request specific and will not be shared
between requests. I do have a couple of questions:

  1. To begin with, does this sound like a good idea in the first place?

It is a good idea only if the compiler can prevent pointers to this storage from leaking out of the request scope. How do you want to achieve this?
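For example, a slice that escapes the request would silently observe the next request's data after the reset (a hypothetical sketch using std.experimental.allocator's InSituRegion):

```d
import std.experimental.allocator.building_blocks.region : InSituRegion;

void main()
{
    InSituRegion!1024 region;

    // Pretend this happens while handling request A.
    auto slice = cast(int[]) region.allocate(4 * int.sizeof);
    slice[0] = 42;

    // End of request A: the bump pointer is reset...
    region.deallocateAll();

    // ...but `slice` still points into the buffer. Request B now
    // reuses the same memory, clobbering the "freed" data.
    auto other = cast(int[]) region.allocate(4 * int.sizeof);
    other[0] = 7;
    assert(slice[0] == 7); // the leaked slice sees request B's data
}
```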

  2. Is it enough if the array is thread local or does it need to be fiber
    local as well?

Many requests can be processed by the same thread at the same time, so thread-local is not enough, I think.

Re: Request local allocations

On 2017-04-03 19:01, Alexey Kulentsov wrote:

It is a good idea only if the compiler can prevent pointers to this storage from leaking out of the request scope. How do you want to achieve this?

That would not be possible with the current compiler, as far as I know.

Many requests can be processed by the same thread at the same time, so thread-local is not enough, I think.

Not really at the same time, since a thread can only do one thing at a
time. But multiple requests can be active on the same thread, interleaved.
That would most likely result in a fragmented buffer, and the simple
reset-the-pointer scheme would most likely not work.

/Jacob Carlborg

Re: Request local allocations

On Sun, 2 Apr 2017 12:41:54 +0200, Jacob Carlborg wrote:

I've been thinking for a while about implementing some form of request
local allocations in one of my vibe.d applications. It would use
std.experimental.allocator with a thread/fiber local static array as the
memory and malloc as a fallback allocator. My idea is that it would
work like a bump-the-pointer allocator which would, at the end of a
request, deallocate the memory by resetting the pointer to the beginning
of the buffer. To me this sounds really efficient, at least in theory,
especially since most data is request specific and will not be shared
between requests. I do have a couple of questions:

  1. To begin with, does this sound like a good idea in the first place?

  2. Is it enough if the array is thread local or does it need to be fiber
    local as well?

  3. How would I best integrate this if I use the REST API generator? As
    far as I understand vibe.d doesn't have any support for middleware or
    before/after hooks, which would be an ideal place for the setup and tear
    down of the buffers automatically.

/Jacob Carlborg

If the allocator can be designed to be initialized lazily, using a TaskLocal!Allocator global (TLS) variable should work. The destructor will be run at the end of each task invocation. If creating/destroying the allocator per task is likely to be too inefficient, I'd probably use a thread-local free list of allocators where each task just takes one and puts it back after use. This could also be wrapped within a lazily initialized task-local variable to make it transparent to use.
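A rough sketch of the thread-local free list variant (names and the allocator composition are placeholders; `RequestAllocator` stands in for whatever per-request allocator type is used):

```d
import std.experimental.allocator.building_blocks.fallback_allocator : FallbackAllocator;
import std.experimental.allocator.building_blocks.region : InSituRegion;
import std.experimental.allocator.mallocator : Mallocator;

alias RequestAllocator = FallbackAllocator!(InSituRegion!(64 * 1024), Mallocator);

// A module-level variable is thread-local by default in D, so each
// thread keeps its own stack of spare allocators: no locking needed.
RequestAllocator*[] freeList;

RequestAllocator* acquire()
{
    if (freeList.length == 0)
        return new RequestAllocator;
    auto a = freeList[$ - 1];
    freeList = freeList[0 .. $ - 1];
    return a;
}

void release(RequestAllocator* a)
{
    // Reset the bump pointer before reuse. Note that any overflow
    // allocations that went to malloc must still be freed individually.
    a.primary.deallocateAll();
    freeList ~= a;
}
```

Each task would call `acquire()` at the start of a request and `release()` at the end, e.g. via `scope (exit)`.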

An HTTP-specific solution could also inject a middleware by simply wrapping the request handler:

auto router = new URLRouter;
// ...

void setupAllocator(HTTPServerRequest req, HTTPServerResponse res) {
   allocator.setup();
   scope (exit) allocator.teardown();
   router.handleRequest(req, res);
}

listenHTTP(httpsettings, &setupAllocator);

BTW, the common form of "middleware" for vibe.d works like the vibe.http.auth package, where the middleware injector takes the original request handler callback as a parameter and returns another one that wraps it.
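In that style, the wrapping above could be written as an injector (a sketch assuming vibe.d's HTTPServerRequestDelegate alias; `useRequestAllocator` and the allocator setup/teardown calls are hypothetical):

```d
import vibe.http.server;

// Hypothetical injector in the style of vibe.http.auth: it takes the
// real handler and returns a wrapped one that does setup/teardown.
HTTPServerRequestDelegate useRequestAllocator(HTTPServerRequestDelegate next)
{
    return (HTTPServerRequest req, HTTPServerResponse res) {
        allocator.setup();                 // hypothetical, as in the snippet above
        scope (exit) allocator.teardown();
        next(req, res);
    };
}

// Usage, with the router from the earlier example:
// listenHTTP(httpsettings, useRequestAllocator(&router.handleRequest));
```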

Re: Request local allocations

On 2017-04-14 16:04, Sönke Ludwig wrote:

If the allocator can be designed to be initialized lazily, using a TaskLocal!Allocator global (TLS) variable should work. The destructor will be run at the end of each task invocation. If creating/destroying the allocator per task is likely to be too inefficient, I'd probably use a thread-local free list of allocators where each task just takes one and puts it back after use. This could also be wrapped within a lazily initialized task-local variable to make it transparent to use.

A free list sounds like a good idea. The example below looks like a
simpler solution than a task local variable.

An HTTP-specific solution could also inject a middleware by simply wrapping the request handler:

auto router = new URLRouter;
// ...

void setupAllocator(HTTPServerRequest req, HTTPServerResponse res) {
    allocator.setup();
    scope (exit) allocator.teardown();
    router.handleRequest(req, res);
}

listenHTTP(httpsettings, &setupAllocator);

Aha, I didn't know that, thanks.

I guess I need to do some performance testing to really see which
solution works best.

/Jacob Carlborg