On Sun, 07 Feb 2016 22:47:46 GMT, Márcio Martins wrote:
On Sun, 07 Feb 2016 14:43:16 GMT, Carl Sturtivant wrote:
I'm curious: did you consider the possibility of forcing a GC periodically to preempt big collections, and if so, what behavior did that lead to?
I did spend a bit of time on it, but with enough traffic there is never a good moment to collect, so you just do it at regular intervals. That didn't help, though, because each collection still took too long (more than 2 seconds), which is simply unacceptable, and my watchdog just kills the process for being unresponsive. What I ended up doing was tweaking the GC to basically only collect when there was no more physical memory, and that let the process live for about 4 hours at a time; it obviously doesn't scale, and that's when I decided to put it all behind nginx. At the moment I don't have vibe's manual memory management active, which would help tremendously, but because of some hard-to-track problems with it, I prefer to keep it off for the time being...
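The gist of that scheme, as a rough sketch only (the 256 MB threshold, the /proc/meminfo parsing, and the helper names are illustrative assumptions, Linux-only, not the production code):

    import core.memory : GC;
    import std.algorithm.searching : startsWith;
    import std.array : split;
    import std.conv : to;
    import std.file : readText;

    // Arbitrary threshold (assumption): force a collection only once
    // available physical memory drops below ~256 MB.
    enum size_t freeMemThresholdKiB = 256 * 1024;

    // Linux-only (assumption): /proc/meminfo has a line like
    // "MemAvailable:   813724 kB".
    size_t availablePhysicalKiB()
    {
        foreach (line; readText("/proc/meminfo").split('\n'))
        {
            if (line.startsWith("MemAvailable:"))
                return line.split[1].to!size_t; // reported in KiB
        }
        return size_t.max; // unknown: never force a collection
    }

    shared static this()
    {
        // Stop collections triggered by allocation; the runtime may
        // still collect as a last resort when it cannot allocate.
        GC.disable();
    }

    void maybeCollect()
    {
        if (availablePhysicalKiB() < freeMemThresholdKiB)
        {
            GC.collect();  // full stop-the-world collection
            GC.minimize(); // hand unused pools back to the OS
        }
    }

    // With vibe.d this could be driven from a periodic timer, e.g.:
    //     import vibe.core.core : setTimer;
    //     import core.time : seconds;
    //     setTimer(10.seconds, { maybeCollect(); }, true);

Note that GC.disable doesn't forbid collection entirely: the runtime is still allowed to collect when an allocation would otherwise fail, which is exactly the "only when out of memory" behavior described above.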
At the moment, the processes live for over 2 days on average when we let them (we deploy frequently, so they almost never "crash" due to the GC). Mind you, we don't do a lot to prevent generating garbage in the frontend code... We try as best we can when it is convenient, and a little harder in library/backend code, but in general we really use D + vibe as an advanced and efficient scripting language :)
Interesting. If the GC were better, that would be a near-perfect approximation! :) I think it's understood that D needs a better GC. Thanks again.