On Sat, 18 Jul 2015 06:45:27 GMT, tchaloupka wrote:

On Sat, 18 Jul 2015 00:45:56 GMT, Taylor Gronka wrote:

This article and example seems like it might answer many of your questions:
http://vibed.org/blog/posts/writing-native-db-drivers

Yeah, I already read that. But I don't think that can be applied to sqlite.
Or can it? How? I can't imagine any other way than to somehow modify the sqlite C code itself, which seems wrong to me.

Yeah, that would require porting sqlite to D first. That post was more targeted at libraries already written in D.

So if sqlite has to stay as it is, is there some preferred pattern for using it effectively?

I thought of something like this:

FIBER THREAD                                  SQLITE WORKING THREAD

received request          ------>             load
                                              some
do other things                               data

write response            <------             send loaded data

There can even be a pool of workers to handle requests.

But I'm pretty new to D and vibe (coming mainly from C#), so I'm not sure whether this can be made more effective than just using sqlite directly to handle requests, even with its blocking.

What I'm after is just simple self hosted RESTful service with sqlite backend (required).
Currently I'm using it in a blocking manner with HTTPServerOption.distribute, and even then a simple benchmark shows it is faster than the node.js or C# implementations.

But I would like some advice if it can be done in a better way.

Thx,
Tom

If you are working with an SSD instead of a spinning disk and want to use HTTPServerOption.distribute anyway, using sqlite in-place shouldn't disturb the event processing too much. If you have multiple sqlite invocations in sequence, you could explicitly call yield() in-between to reduce the latency of other tasks/requests, but since the operations are usually relatively short, they shouldn't be a big issue.
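A minimal sketch of that yield() idea, assuming a hypothetical execQuery() helper that wraps the blocking sqlite3 C API calls (not an existing vibe.d function):

import vibe.core.core : yield;
import vibe.http.server;

void handleReport(HTTPServerRequest req, HTTPServerResponse res)
{
    // execQuery() is assumed to run the statement synchronously via the sqlite3 C API
    auto users = execQuery("SELECT * FROM users");
    yield(); // let other pending tasks on this thread run before the next blocking call
    auto posts = execQuery("SELECT * FROM posts");
    yield();
    auto stats = execQuery("SELECT count(*) FROM visits");

    res.writeBody("..."); // render the three result sets here
}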

That approach will also yield the highest overall request throughput. What a worker approach could improve is basically the average/maximum latency to process a request, but the communication and synchronization overhead would definitely reduce the throughput somewhat.

For the latter approach, I'd fire up a single custom core.thread.Thread that uses runTask to start a task that listens for sqlite requests made via vibe.core.concurrency. Basic sketch of the idea:

import std.conv : to;
import vibe.core.concurrency : receiveOnly, send;
import vibe.core.core : runTask, runEventLoop;
import vibe.core.task : Task;
import vibe.http.server;

__gshared Task g_sqliteTask;

struct SqliteRequest {
    string query;
    Task requester;
}

void sqliteWorker()
{
    g_sqliteTask = runTask({
        while (true) {
            auto req = receiveOnly!SqliteRequest;
            // perform query
            auto results = ...;
            // note: results must be immutable or shared
            req.requester.send(results);
        }
    });
    runEventLoop();
}

void handleRequest(HTTPServerRequest req, HTTPServerResponse res)
{
    string query = ...;
    g_sqliteTask.send(SqliteRequest(query, Task.getThis()));
    auto results = receiveOnly!SqliteResults; // SqliteResults: whatever result type the worker sends back
    res.writeBody("Results: " ~ results.to!string());
}

shared static this()
{
    import core.thread;
    new Thread(&sqliteWorker).start();

    // setup HTTP server etc.:
    // ....
}

I'd not use multiple worker threads, because it would introduce the need to synchronize between them. Usually a single thread should also be sufficient to put the disk to full load. At least for a spinning disk, multiple threads would just cause the head to make more movements and thus would reduce the overall performance. But this depends on the actual usage patterns. There are probably scenarios where the CPU load gets high enough that multiple workers actually improve performance.
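If you do hit such a CPU-bound case, the single-worker sketch above generalizes to a small pool. A rough, untested variant (pool size and pickWorker() are illustrative, not something vibe.d provides), keeping in mind that each worker thread then needs its own sqlite connection:

import core.atomic : atomicOp;
import core.thread : Thread;

__gshared Task[] g_sqlitePool; // filled by each worker on startup
shared size_t g_nextWorker;

shared static this()
{
    foreach (i; 0 .. 2) // e.g. two workers, each with its own sqlite connection
        new Thread(&sqliteWorker).start();
    // sqliteWorker would append its task handle to g_sqlitePool (under a mutex)
    // instead of assigning the single g_sqliteTask
}

// in handleRequest, pick a worker round-robin instead of using g_sqliteTask:
Task pickWorker()
{
    auto i = atomicOp!"+="(g_nextWorker, 1);
    return g_sqlitePool[i % g_sqlitePool.length];
}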