On Fri, 04 Jan 2013 17:39:54 GMT, Sönke Ludwig wrote:

One way to achieve this would be to use a timer and a signal. The requests are all run in parallel in different tasks/fibers using runTask, and the original task waits until either the timeout is reached or all requests have been answered. The only drawback is that the late requests will continue to run after the timeout; it shouldn't have any serious impact on anything, though, as their connections will time out eventually, or the request simply finishes a bit later.

Thanks :-) The timer and signal mechanism is exactly what we were expecting.

But there is still a question: we have to use keep-alive connections, and even HTTP pipelining, for better performance. In this scenario, the replies that survive past the timeout will affect later requests, because on a keep-alive, pipelined connection a new request won't be sent until the previous response has arrived (otherwise the ordering of HTTP requests/responses gets mixed up).

So if a huge burst of incoming requests fills up all connections in the connection pool, and some of them time out,
then new requests have to wait for those timed-out replies to come back, which is unnecessary and wastes their own precious time, because their timers are already running! Is there a way to abort an HttpClient request without closing the current connection and without affecting the connection pool's ability to immediately handle new incoming requests?

One way would be to make the connection pool more elastic to handle this: it could keep a timer on each connection,
and when a connection times out, it won't receive new requests for the time being; the request goes to an 'available' connection instead.
If there is no 'available' connection, which means the current connections can't handle the traffic, the pool could create new connections
and/or abort the oldest connection (since it has already timed out, the request currently on that connection is useless).
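To make the idea concrete, here is a minimal, hypothetical sketch of the slot-selection policy described above. This is NOT the vibe.d ConnectionPool API; the `Slot` struct and `pickSlot` function are invented for illustration, and a real pool would additionally close/abort the underlying socket where indicated:

```d
import core.time : dur;
import std.datetime : Clock, SysTime;

enum requestTimeout = dur!"seconds"(10);

struct Slot {
    SysTime started; // when the current request was sent
    bool busy;       // a request is in flight on this connection
}

// Returns the index of a usable slot: prefer a free connection,
// otherwise reuse the first timed-out one, otherwise grow the pool.
size_t pickSlot(ref Slot[] slots, SysTime now)
{
    foreach (i, ref s; slots)
        if (!s.busy)
            return i;                   // free connection: reuse it

    // all busy: abort a timed-out one, if any, and reuse its slot
    foreach (i, ref s; slots)
        if (now - s.started > requestTimeout) {
            // (a real pool would close/abort the socket here)
            s.busy = false;
            return i;
        }

    slots ~= Slot.init;                 // pool grows elastically
    return slots.length - 1;
}
```

New requests then never queue behind a reply that is already past its deadline; they either reuse an idle slot, recycle a stale one, or open a fresh connection.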

I know this seems like a very strange requirement, but our first priorities are throughput and quick response; all timed-out requests
should be thrown away.

It's planned for later to have more facilities for controlling tasks (e.g. waiting for their end or terminating them), and also a generic broadcast class that could be made general enough to handle this situation. I think such a broadcast class is quite important, because generally it shouldn't be necessary to work with such low-level details as rawYield. But I cannot really say when I'll be able to get that done...

I don't quite understand emit() and rawYield() yet, I'll try the code later :-)

Btw. using requestHttp() will automatically keep a connection open to each server, using a ConnectionPool internally. So there is no need to explicitly store HttpClient instances.

I have a question: each time requestHttp() is called for the same server, will it create a new ConnectionPool, or does it somehow
reuse the ConnectionPool created the first time requestHttp() was called for that server?

string[] servers;

void handleRequest(HttpServerRequest req, HttpServerResponse res)
{
	// if the body is JSON and JSON parsing is enabled
	// in the HttpServerSettings, this will need to use
	// req.json.toString() instead.
	// (renamed from "body", which is a D keyword)
	auto reqBody = req.bodyReader.readAll();

	// run all broadcast requests as separate tasks
	string[string] replies;
	auto tm = setTimer(dur!"seconds"(10), null);
	auto sig = createSignal();

	// the nested function gives each task its own copy of srv;
	// capturing the foreach variable directly would make all
	// tasks see the value of the last iteration
	void broadcast(string srv)
	{
		runTask({
			auto cres = requestHttp("http://"~srv~"/url", (creq){
				creq.method = req.method;
				creq.bodyWriter.write(reqBody);
			});
			replies[srv] = cres.bodyReader.readAllUtf8();

			// wake up the original fiber
			sig.emit();
		});
	}
	foreach( srv; servers )
		broadcast(srv);

	// yield until either the timeout is reached or all replies are collected.
	// the timer and the signal will both cause rawYield() to continue.
	tm.acquire();
	while( tm.pending && replies.length < servers.length )
		rawYield();
	tm.release();
	
	// save the current replies (other requests might still come in later)
	auto saved_replies = replies;
	replies = null;

	// do something with saved_replies and respond to the original request...
}

Please bear with me, I haven't tested the code, so it may very well contain some mistakes.

Regards