On Thu, 04 Jul 2013 10:19:07 GMT, Dicebot wrote:

On Wed, 03 Jul 2013 14:15:34 GMT, Sönke Ludwig wrote:

I see. However, at least with current hardware, compared with all the other things that happen during a request, this will still be a small fraction of the total CPU time (actual contention for the MemorySessionStore lock is highly unlikely to ever occur, so a single lock/unlock should stay well below 1000 CPU cycles).

You are 100% right if you are speaking about raw throughput on low-to-normal concurrency targets. But I am speaking about c10k and up from there, not interested in anything more simple :P The time for a single locking operation grows linearly with the number of locking operations, in other words, with the number of simultaneous requests (as each request implies at least one locking operation here). I truly believe that vibe.d is the closest of all competitors to reaching this target on carefully crafted non-synthetic applications, and I don't want to spoil the opportunity.

But it grows only once contention happens, and as long as the lock duration is much shorter than the average request duration, that is not likely. I agree, though, that it may become an issue once you are on many cores and have really short request times. I think this is a relatively special scenario, but then again that of course doesn't make it less worthwhile to support...
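For reference, here is roughly what such a critical section looks like in a simplified sketch (not the actual MemorySessionStore code) - the lock only guards an associative-array lookup, so an uncontended lock/unlock pair is dwarfed by the rest of the request handling:

    import core.sync.mutex : Mutex;

    // Simplified stand-in for an in-memory session store; the real
    // implementation differs, but the shape of the critical section is similar.
    class SimpleMemoryStore {
        private Mutex m_mutex;
        private string[string][string] m_sessions; // session id -> (key -> value)

        this() { m_mutex = new Mutex; }

        string get(string id, string key)
        {
            synchronized (m_mutex) { // tiny critical section, cheap when uncontended
                if (auto s = id in m_sessions)
                    if (auto v = key in *s)
                        return *v;
            }
            return null;
        }

        void set(string id, string key, string value)
        {
            synchronized (m_mutex) {
                m_sessions[id][key] = value;
            }
        }
    }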

But apart from this I also suppose that in many large-scale scenarios sessions will be stored in an external database (such as Redis), which is then possibly distributed among multiple servers. Compared to the I/O overhead there, a simple atomic CAS will always be negligible.

Does it really work without some form of local caching? I have no web dev experience, unfortunately, but it sounds like communication with an external database may soon become a weak spot then. Can you provide any more input on this?

I don't really know more details, but local caching would compromise (at least any hard) data consistency when multiple database clients are involved, so that would probably not work in many setups. But if you take into consideration that there are actually real sites running on Ruby, that little network overhead is absolutely negligible as long as the database is fast ;)
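As an aside, to put the "simple atomic CAS" remark above in concrete terms, here is a tiny, self-contained illustration using core.atomic (not vibe.d code) - the whole loop costs on the order of nanoseconds, while a network round trip to an external session database is orders of magnitude more expensive:

    import core.atomic : atomicLoad, cas;

    shared long sessionCounter; // some piece of shared session-related state

    void increment()
    {
        long expected, desired;
        do {
            expected = atomicLoad(sessionCounter);
            desired = expected + 1;
        } while (!cas(&sessionCounter, expected, desired)); // single compare-and-swap
    }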

Speaking with you has inspired one possible architectural proposal - what about splitting SessionStore in two parts, SessionStore and SessionStoreCache, the latter always being thread-local and the former implementing some sort of synchronisation facilities for a background task? SessionStore here can be a shared memory store or an external database - it does not really matter. Does the vibe.d event model provide convenient means to fire such a low-priority sync event for every worker thread?
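To make this more concrete, a rough sketch with hypothetical names could look like the following (whether setTimer, or some dedicated per-thread task facility, is the right vibe.d mechanism for the sync part is exactly what I am asking about):

    import core.time;
    import vibe.core.core : setTimer;

    interface SessionStore { // backing store: shared memory, Redis, ... (hypothetical interface)
        string get(string id, string key);
        void set(string id, string key, string value);
    }

    final class SessionStoreCache { // hypothetical thread-local front-end
        private SessionStore m_backend;
        private string[string][string] m_dirty; // session id -> locally modified values

        this(SessionStore backend) { m_backend = backend; }

        string get(string id, string key)
        {
            if (auto s = id in m_dirty)
                if (auto v = key in *s)
                    return *v;
            return m_backend.get(id, key); // cache miss: fall through to the backing store
        }

        void set(string id, string key, string value)
        {
            m_dirty[id][key] = value; // defer the write until the next flush
        }

        void flush()
        {
            foreach (id, kv; m_dirty)
                foreach (key, value; kv)
                    m_backend.set(id, key, value);
            m_dirty = null;
        }
    }

    // Would be called once per worker thread (assuming some per-thread init hook exists);
    // schedules a low-priority periodic flush of that thread's cache.
    void initThreadLocalSessionCache(SessionStore backend)
    {
        static SessionStoreCache t_cache; // function-level statics are thread-local in D
        t_cache = new SessionStoreCache(backend);
        setTimer(1.seconds, { t_cache.flush(); }, true); // periodic timer in this thread's event loop
    }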

Funnily enough, I was writing almost the same thing before finally reading your second paragraph and then discarding it ;)

The SessionStoreCache could then also offer different consistency models to optimize for speed when the data model allows it. Sounds like a good way to go, no matter what the outcome of the distribution strategy investigation turns out to be.