On 21.05.2013 14:20, Matthew Fong wrote:

On Sat, 18 May 2013 08:42:20 GMT, Sönke Ludwig wrote:

On Sat, 18 May 2013 01:23:01 GMT, Puming Zhao wrote:

On Fri, 17 May 2013 18:44:57 GMT, Sönke Ludwig wrote:

On Wed, 08 May 2013 11:12:40 GMT, Matthew Fong wrote:

  1. What use would this have? I can't come up with an example scenario where you would want to make a "bulk" HTTP request.

The most prominent example is probably a web browser, which could, for example, first collect all style sheets or script files needed by a certain HTML page and then make a single pipelined bulk request to minimize latency/protocol overhead. But basically any scenario applies where multiple independent requests need to be made to the same server, especially when latency and overall performance are a concern. Another possible example would be a client for a database server with a REST interface (e.g. CouchDB).

This is actually HTTP Pipelining, a standard practice in HTTP1.1, see http://en.wikipedia.org/wiki/HTTP_pipelining

Thanks for the link, it gives a much better description than mine.

  1. The current API allows one to use a callback to modify the request headers. When passing many requests, though, you would probably want to customize each request individually, which requires a different callback for each. Could this be solved with an associative array, for example?

One possibility would be to use the request URL to identify each request in the callback and modify it accordingly. But thinking about it, maybe a completely different API makes sense, for example one that allows a single HttpClient to be passed to multiple tasks and automatically pipelines multiple concurrent requests. I have to admit that I'm missing a concrete use case to make an informed decision on how best to go about the API. The only thing that should be avoided is that it can be misused (i.e. getting the request order mixed up, resulting in either wrong data or a deadlock).
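
As a rough sketch of the URL-based idea, using a hypothetical requestBulk() function (neither the function nor the exact accessors used here are part of the current API; requestURL and statusCode are assumptions for illustration):

import std.algorithm : endsWith;
import vibe.d;

void fetchResources()
{
    // hypothetical: requestBulk() would pipeline all URLs over one connection
    // and call the requester/responder callbacks once per request, in order
    requestBulk(["http://www.example.org/a.css", "http://www.example.org/b.js"],
        (scope req){
            // identify the request by its URL and customize it individually
            if (req.requestURL.endsWith(".css")) req.headers["Accept"] = "text/css";
            else req.headers["Accept"] = "application/javascript";
        },
        (scope res){ logInfo("got status %d", res.statusCode); });
}

The multi-task variant with a shared client could then look like this: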

void makePipelinedRequests()
{
    auto client = new HTTPClient;
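    // the three tasks share one HTTPClient, so concurrent requests could be
    // pipelined automatically over a single connection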
    runTask({ client.request("http://www.example.org/a", (scope req){}, (scope res){ ... }); });
    runTask({ client.request("http://www.example.org/b", (scope req){}, (scope res){ ... }); });
    runTask({ client.request("http://www.example.org/c", (scope req){}, (scope res){ ... }); });
}

In vert.x, pipelining does not affect the API: it happens in the client's Connection object (which holds a request queue to preserve the order of requests), and users don't have to know about it. So the API is called in the same way whether or not pipelining is used.

As far as I can see, the vert.x client uses an asynchronous callback instead of a synchronous one. Basically, that would be a wrapped-up version of the code example above, but with the disadvantage that a GC allocation is usually necessary for the closures that are passed as callbacks, which can be avoided in the synchronous case when they are scoped. And of course it is asynchronous, which makes synchronizing the different requests slightly more difficult (Task.join can be used for that in the example above; a sketch follows below the wrapper).

void requestAsync(URL url, void delegate(scope HTTPClientRequest) req_handler, void delegate(scope HTTPClientResponse) res_handler)
{
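    // the closure passed to runTask captures url and the two handlers,
    // which usually means a GC allocation for the closure context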
    runTask({ request(url, req_handler, res_handler); });
}
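
To illustrate the Task.join remark, a rough sketch of how the multi-task example could be synchronized (assuming runTask returns a Task handle that provides join()):

void makeSynchronizedRequests()
{
    auto client = new HTTPClient;
    // issue both requests concurrently from separate tasks...
    auto a = runTask({ client.request("http://www.example.org/a", (scope req){}, (scope res){ /* ... */ }); });
    auto b = runTask({ client.request("http://www.example.org/b", (scope req){}, (scope res){ /* ... */ }); });
    // ...and block until both have finished before continuing
    a.join();
    b.join();
}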

But then again, why not just have both: allow requesting from different tasks, but also provide a convenient asynchronous wrapper for the cases where it makes sense...

Still, what would you think about doing it the vert.x way? I quite like the approach, although it requires heap allocation.

Well, since it is just a wrapper function, why not... But the underlying
mechanism should still work without GC allocations, since those can have
very nasty side effects, at least in high-load situations (due to pauses
for GC collection runs).