RejectedSoftware Forums



Re: Correct setup of redis?

On Sat, 21 Sep 2013 13:47:05 GMT, simendsjo wrote:

On Sat, 21 Sep 2013 13:38:34 GMT, simendsjo wrote:

On Sat, 21 Sep 2013 12:51:53 GMT, simendsjo wrote:

On Sat, 21 Sep 2013 12:10:42 GMT, simendsjo wrote:

On Sat, 21 Sep 2013 11:52:12 GMT, simendsjo wrote:

On Sat, 21 Sep 2013 12:07:58 +0200, Sönke Ludwig wrote:

Am 20.09.2013 22:25, schrieb simendsjo:

On Fri, 20 Sep 2013 19:36:19 GMT, simendsjo wrote:

On Fri, 20 Sep 2013 19:31:21 GMT, simendsjo wrote:

What is the correct way of using redis?

Oh.. And fetching a value from redis is actually slower than mariadb - why is that?
A fetch from mariadb takes ~40ms, while redis takes 40-100ms. All I do is get/set.
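
For reference, all I'm doing is roughly the following (sketched against the current vibe.d Redis API - connectRedis/getDatabase - so the exact names may differ from my actual code):

import vibe.db.redis.redis;

void main()
{
    // connect to a local Redis instance (6379 is the default port)
    auto redis = connectRedis("127.0.0.1", 6379);
    auto db = redis.getDatabase(0);

    db.set("mykey", "myvalue");     // SET mykey myvalue
    string value = db.get("mykey"); // GET mykey
}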

Building with -profile doesn't work, so I'm unable to find out why it's slow.
Could it be the calls to format() in RedisConnection.request?

Without looking at the source at all, a wild guess would be that there is a flush() missing after sending each command, and even with that, enabling tcpNoDelay may be necessary.
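
Applied to the TCPConnection the client holds, both suggestions would look roughly like this (sendCommand is just an illustrative helper, not the actual client code):

import vibe.core.net : TCPConnection;

void sendCommand(TCPConnection conn, const(ubyte)[] request)
{
    conn.tcpNoDelay = true; // disable Nagle's algorithm for small request packets
    conn.write(request);    // write the serialized Redis command
    conn.flush();           // make sure the bytes actually leave the buffer
}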

Just tried both, but there is no difference.

I found the slow line:

auto ln = cast(string)m_conn.readLine();

in RedisReply.this.

Seems this is calling the UFCS readLine, which in turn calls readUntil(InputStream..
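
So the call effectively boils down to something like this (readReplyLine is just an illustrative stand-in for the line in RedisReply.this):

import vibe.core.net : TCPConnection;
import vibe.stream.operations : readLine;

string readReplyLine(TCPConnection conn)
{
    // TCPConnection has no readLine member, so conn.readLine() is a UFCS
    // call to vibe.stream.operations.readLine(conn), which scans for the
    // "\r\n" terminator via readUntil.
    return cast(string)conn.readLine();
}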

Some more details. It's all in readUntil's `while(!stream.empty)` loop.
stream.empty is what's causing all the problems.

Ok, some further investigation reveals libevent2tcp.d -> leastSize -> m_ctx.core.yieldForEvent()

Well.. Seems all the work is happening in Fiber.yield() - that's druntime-specific, right?
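
A minimal druntime sketch of that: yield() does no work itself, it just suspends the fiber until the next call():

import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    auto f = new Fiber({
        writeln("waiting for data...");
        Fiber.yield();  // suspend; control returns to whoever called call()
        writeln("resumed");
    });
    f.call(); // runs the fiber up to the yield
    // any amount of time may pass here - in vibe.d, until the socket
    // signals a "read" event - and it is all billed to Fiber.yield()
    f.call(); // resumes the fiber after the yield
}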

So far, assuming that the timing for Fiber.yield() is measured around the call site, this just implies that the time is spent waiting for a "read" event on the socket (it's highly unlikely that druntime itself is taking the time, albeit not impossible). The question now is whether the DB client or the server is at fault. Maybe recording the exchanged packets with Wireshark and looking at the timestamps will give a clue. My guess would still be that, after the request is written by the DB client, libevent or the TCP subsystem somehow delays sending the actual packet.
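
In other words, a stopwatch around the call site mostly measures waiting, roughly like this (timeOneReply is an illustrative helper, using today's std.datetime.stopwatch):

import core.time : Duration;
import std.datetime.stopwatch : AutoStart, StopWatch;
import vibe.core.net : TCPConnection;
import vibe.stream.operations : readLine;

Duration timeOneReply(TCPConnection conn)
{
    auto sw = StopWatch(AutoStart.yes);
    auto ln = cast(string)conn.readLine(); // the fiber sleeps here until data arrives
    sw.stop();
    return sw.peek; // dominated by server/network latency, not by druntime
}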

Once the current regressions are resolved, if this is still an issue, I'll join in and take a closer look.

Re: Correct setup of redis?

On Sun, 22 Sep 2013 12:36:43 GMT, Sönke Ludwig wrote:

My guess would still be that, after the request is written by the DB client, libevent or the TCP subsystem somehow delays sending the actual packet.

40ms when fetching from the database isn't too strange. It's 40ms from redis that baffles me.
