On Fri, 02 Jun 2017 23:55:21 GMT, littlerussian wrote:

I investigated a little more, with a Wireshark capture during the data transfer.

The result is that, when the server sends the data to the client, I see a sequence like the following on the wire:

(...)

Given this, I suppose that the tokenizing happens on the server side... do you know of any implementation detail of your library that might lead to this behaviour?

I tracked this down to a StreamOutputRange that is actually intended to avoid sending the WebSocket frame header individually: https://github.com/rejectedsoftware/vibe.d/blob/321e4d85f5abbbda012c63d376deb32eefcb39d0/http/vibe/http/websockets.d#L821
Unfortunately, it uses an internal 256-byte accumulation buffer, so this strategy breaks down for any payload larger than that.
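
To illustrate the effect, here is a minimal, hypothetical model of such an accumulating writer (the names and the sink delegate are made up for the example; this is not the actual vibe.d code). Copying a large payload through a fixed 256-byte buffer forces a flush every 256 bytes, and each flush becomes a separate stream write, and thus a separate TLS record:

    import std.algorithm.comparison : min;

    struct AccumulatingWriter {
        void delegate(scope const(ubyte)[]) sink; // stands in for the TLS stream's write()
        ubyte[256] m_data;                        // fixed accumulation buffer
        size_t m_fill;

        void put(scope const(ubyte)[] bytes)
        {
            // Copy through the small buffer, flushing whenever it fills up.
            // A 4 kB payload therefore triggers ~16 separate sink() calls.
            while (bytes.length > 0) {
                const chunk = min(bytes.length, m_data.length - m_fill);
                m_data[m_fill .. m_fill + chunk] = bytes[0 .. chunk];
                m_fill += chunk;
                bytes = bytes[chunk .. $];
                if (m_fill == m_data.length) flush();
            }
        }

        void flush()
        {
            if (m_fill) { sink(m_data[0 .. m_fill]); m_fill = 0; }
        }
    }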

A first improvement was to change StreamOutputRange.put to write at most two TLS records in this case, one for the header and one for the full chunk of data: e1df68e
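
The new strategy boils down to something like the following (again a simplified sketch rather than the real code), replacing AccumulatingWriter.put from the sketch above: data that still fits is accumulated as before, while anything larger flushes the buffered header and is then passed through in a single write, so at most two TLS records result per frame:

    void put(scope const(ubyte)[] bytes)
    {
        if (m_fill + bytes.length <= m_data.length) {
            // small writes (e.g. the WebSocket frame header) are still accumulated
            m_data[m_fill .. m_fill + bytes.length] = bytes[];
            m_fill += bytes.length;
        } else {
            flush();     // first write: the buffered header
            sink(bytes); // second write: the full payload in one piece
        }
    }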

Unfortunately, due to limitations of the OpenSSL API, it seems the only way to reach the optimum of one TLS record per WebSocket frame is to write the header and message body into a single buffer up front. That means one additional memory allocation per sent frame, which is quite suboptimal with respect to memory use and copy operations.
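
Sketched out, that approach would look roughly like this (sendFrame, encodeFrameHeader and sink are placeholders for illustration, not vibe.d API; the header encoder covers unmasked server-to-client binary frames only):

    // Encodes a minimal unmasked frame header (FIN + binary opcode) into buf
    // and returns the used slice; 2 to 10 bytes depending on the payload size.
    const(ubyte)[] encodeFrameHeader(ubyte[] buf, size_t payloadLen)
    {
        buf[0] = 0x82; // FIN | binary opcode
        if (payloadLen < 126) {
            buf[1] = cast(ubyte) payloadLen;
            return buf[0 .. 2];
        }
        if (payloadLen <= ushort.max) {
            buf[1] = 126;
            buf[2] = cast(ubyte)(payloadLen >> 8);
            buf[3] = cast(ubyte) payloadLen;
            return buf[0 .. 4];
        }
        buf[1] = 127;
        foreach (i; 0 .. 8)
            buf[2 + i] = cast(ubyte)(payloadLen >> (56 - 8 * i));
        return buf[0 .. 10];
    }

    void sendFrame(scope const(ubyte)[] payload, void delegate(scope const(ubyte)[]) sink)
    {
        ubyte[10] headerBuf; // maximum unmasked header size: 2 + 8 bytes
        const header = encodeFrameHeader(headerBuf[], payload.length);

        // Assemble header and payload contiguously: one allocation and one
        // copy per frame, but the stream (and OpenSSL) sees a single write.
        auto frame = new ubyte[](header.length + payload.length);
        frame[0 .. header.length] = header[];
        frame[header.length .. $] = payload[];

        sink(frame);
    }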

However, the current implementation already uses an Appender!(ubyte[]) to buffer incoming frame data. If it always pre-allocated space for the maximum frame header size, the header could be injected there instead, and the StreamOutputRange could be dropped. I've opened a ticket for this: #1791
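
The idea is roughly the following (a hypothetical sketch only, reusing encodeFrameHeader from the previous example): reserve the maximum header size at the front of the appender, patch the actual header in just before sending, and hand everything to the stream in one write:

    import std.array : Appender;

    enum maxHeaderSize = 10; // 2 + 8 bytes for an unmasked server-to-client frame

    void sendBufferedFrame(ref Appender!(ubyte[]) buf,
                           void delegate(scope const(ubyte)[]) sink)
    {
        // buf is assumed to have been seeded with maxHeaderSize placeholder
        // bytes before any payload data was appended to it.
        auto frame = buf.data;
        const payloadLen = frame.length - maxHeaderSize;

        ubyte[maxHeaderSize] headerBuf;
        const header = encodeFrameHeader(headerBuf[], payloadLen);

        // Right-align the header so it sits directly in front of the payload.
        const start = maxHeaderSize - header.length;
        frame[start .. maxHeaderSize] = header[];

        sink(frame[start .. $]); // one write, no extra allocation or payload copy
    }

The seeding would happen once when the outgoing message is created, and the per-frame allocation from the previous sketch disappears because the header simply overwrites the reserved bytes.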

This will only work until the WebSocket module is rewritten to avoid dynamic memory allocations, which is planned for the mid term. At that point, a different solution will need to be found.