On Sun, 09 Aug 2015 19:10:23 GMT, Etienne Cimon wrote:
Which libasync version did you use for this?
I am using vibe-d-0.7.24-rc.2 and dub.selections.json file reports libasync at 0.7.5.
$ cat dub.selections.json
{
    "fileVersion": 1,
    "versions": {
        "memutils": "0.4.1",
        "vibe-d": "0.7.24-rc.2",
        "libevent": "2.0.1+2.0.16",
        "openssl": "1.1.4+1.0.1g",
        "libev": "5.0.0+4.04",
        "libasync": "0.7.5"
    }
}
$ uname -a
Linux AW-LAPTOP 4.1.0-040100-generic #201506220235 SMP Mon Jun 22 06:36:19 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
I tried both siege and httperf; neither keeps the connection alive.
$ siege -b -c1 'http://localhost:8090/file.txt'
** SIEGE 3.0.8
** Preparing 1 concurrent users for battle.
The server is now under siege...^C
Lifting the server siege... done.
Transactions: 158145 hits
Availability: 100.00 %
Elapsed time: 12.62 secs
Data transferred: 1.96 MB
Response time: 0.00 secs
Transaction rate: 12531.30 trans/sec
Throughput: 0.16 MB/sec
Concurrency: 0.90
Successful transactions: 158145
Failed transactions: 0
Longest transaction: 0.01
Shortest transaction: 0.00
$ httperf --port=8090 --uri=/file.txt --num-conns=5000 --num-calls=1
httperf --client=0/1 --server=localhost --port=8090 --uri=/file.txt --send-buffer=4096 --recv-buffer=16384 --num-conns=5000 --num-calls=1
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
Maximum connect burst length: 1
Total: connections 5000 requests 5000 replies 5000 test-duration 0.243 s
Connection rate: 20589.1 conn/s (0.0 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 0.0 avg 0.0 max 1.1 median 0.5 stddev 0.0
Connection time [ms]: connect 0.0
Connection length [replies/conn]: 1.000
Request rate: 20589.1 req/s (0.0 ms/req)
Request size [B]: 70.0
Reply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (0 samples)
Reply time [ms]: response 0.0 transfer 0.0
Reply size [B]: header 244.0 content 13.0 footer 0.0 (total 257.0)
Reply status: 1xx=0 2xx=5000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 0.04 system 0.20 (user 14.8% system 84.0% total 98.8%)
Net I/O: 6574.8 KB/s (53.9*10^6 bps)
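For reference, both tools can be told to reuse connections, so a keep-alive run is possible if that turns out to be useful. This is how I understand the options (worth double-checking against the man pages; the port and URI below just mirror the runs above):

```shell
# httperf: --num-calls is the number of requests per connection, so any
# value > 1 exercises HTTP keep-alive (here: 100 connections x 50 requests)
httperf --port=8090 --uri=/file.txt --num-conns=100 --num-calls=50

# siege: keep-alive is controlled by the config file rather than a flag;
# set this in ~/.siege/siege.conf (or siegerc, depending on the version):
#   connection = keep-alive
siege -b -c1 'http://localhost:8090/file.txt'
```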
Right now I am unsure whether it is worth spending time on this (versus other things), since I assume the problem is only noticeable when I/O is not the bottleneck, which is a situation that is hard to encounter in practice.
But I am still willing to look further into this if you want me to.