RejectedSoftware Forums


Concurrency helper function - concurrently

I was thinking about a convenience function for concurrency built on runWorkerTask(..) and tuples.

Sometimes you only want to fetch some data from a database or a remote host concurrently because the calls are blocking (think simultaneous downloads vs. sequential ones), but you don't want to write the boilerplate to make the delegates isolated or to receive the results back. A concurrently function would solve this:

void RequestHandler(HTTPServerRequest req, HTTPServerResponse res){
	alias ConcurrTuple = Tuple!(string, "name", bool, "auth", Variant[Variant], "dbrow");
	SomeController ctl;
	auto obj = concurrently!ConcurrTuple(
		{ return redis.get!string("name"); },
		toDelegate(&ctl.isAuth),
		{
			auto conn = pdb.lockConnection();
			auto cmd = new PGCommand(conn, "SELECT * FROM users WHERE userid=" ~ params.uid);
			auto result = cmd.executeQuery;
			return result;
	});
	// unblock here
	auto name = obj.name;
	auto authentified = obj.auth;
	auto userInfo = obj.dbrow;
}

The concurrently function would take care of scanning the delegates for closures and making them isolated, then running the worker tasks and joining them.
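To make that a bit more concrete, here is a minimal sketch of what such a concurrently function could look like. It is only an approximation of the idea: it runs the delegates on std.parallelism's thread pool instead of vibe.d's runWorkerTask and skips the isolation checking entirely.

import std.meta : staticMap;
import std.parallelism : task, taskPool;
import std.typecons : Tuple;

// The pool-task type that task(dg) produces for a given callable type.
private alias PoolTask(D) = typeof(task(D.init));

// Run each delegate on the thread pool and join the results into the
// fields of the named tuple T. No isolation checking is done here.
T concurrently(T, Delegates...)(Delegates dgs)
	if (Delegates.length == T.Types.length)
{
	Tuple!(staticMap!(PoolTask, Delegates)) tasks;

	// Start everything first so the delegates actually overlap.
	foreach (i, dg; dgs)
	{
		tasks[i] = task(dg);
		taskPool.put(tasks[i]);
	}

	// Join: yieldForce blocks (or helps the pool) until each result is
	// ready, then it is copied into the matching tuple field.
	T result;
	foreach (i, _; dgs)
		result[i] = tasks[i].yieldForce;
	return result;
}

With that, the example above works as long as each delegate's return type matches the corresponding tuple field.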

Does this seem doable / practical to anyone else?

Re: Concurrency helper function - concurrently

The formatting was too messed up; I sent it with Thunderbird :/ Here's the clean version:

void RequestHandler(HTTPServerRequest req, HTTPServerResponse res){
	alias ConcurrTuple = Tuple!(string, "name", bool, "auth", Variant[Variant], "dbrow");
	SomeController ctl;
	auto obj = concurrently!ConcurrTuple(
		{ return redis.get!string("name"); },
		toDelegate(&ctl.isAuth),
		{
			auto conn = pdb.lockConnection();
			auto cmd = new PGCommand(conn, "SELECT * FROM users WHERE userid=" ~ params.uid);
			auto result = cmd.executeQuery;
			return result;
	});
	// unblock here
	auto name = obj.name;
	auto authentified = obj.auth;
	auto userInfo = obj.dbrow;
}

Re: Concurrency helper function - concurrently

On Sun, 26 Jan 2014 04:44:31 GMT, Etienne Cimon wrote:

> The formatting was too messed up; I sent it with Thunderbird :/ Here's the clean version:
>
> void RequestHandler(HTTPServerRequest req, HTTPServerResponse res){
> 	alias ConcurrTuple = Tuple!(string, "name", bool, "auth", Variant[Variant], "dbrow");
> 	SomeController ctl;
> 	auto obj = concurrently!ConcurrTuple(
> 		{ return redis.get!string("name"); },
> 		toDelegate(&ctl.isAuth),
> 		{
> 			auto conn = pdb.lockConnection();
> 			auto cmd = new PGCommand(conn, "SELECT * FROM users WHERE userid=" ~ params.uid);
> 			auto result = cmd.executeQuery;
> 			return result;
> 	});
> 	// unblock here
> 	auto name = obj.name;
> 	auto authentified = obj.auth;
> 	auto userInfo = obj.dbrow;
> }

Maybe it was just a suboptimal example, but since I/O should usually use async I/O, simply using runTask instead of runWorkerTask should suffice here, so no Isolated or shared would be necessary.

But a very similar and slightly more general concept is "promises" or "futures". Those, in two flavors - one for normal tasks and one for worker tasks - would be a valuable addition to vibe.core.concurrency for sure. I haven't used them much in practice personally, but they are also a very nice way to add concurrency to an existing code base.
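As a rough illustration of the worker-task flavor, here is what such a future could look like when approximated with std.parallelism instead of vibe.d's worker tasks; the names Future and async are made up for the sketch and are not an existing vibe.core.concurrency API.

import std.parallelism : task, taskPool;

// Hypothetical worker-task future: the computation starts on the thread
// pool immediately and the caller collects the result whenever it needs it.
struct Future(F)
{
	private typeof(task(F.init)) _task;

	// True once the computation has finished.
	@property bool ready() { return _task.done; }

	// Block (or help the pool) until the result is available.
	auto get() { return _task.yieldForce; }
}

Future!F async(F)(F fn)
	if (is(typeof(fn())))
{
	Future!F f;
	f._task = task(fn);
	taskPool.put(f._task);
	return f;
}

Usage would then be along the lines of auto f = async({ return expensiveQuery(); }); ... auto rows = f.get(); (expensiveQuery being a stand-in here), and the normal-task flavor would presumably be the same interface implemented on top of runTask instead of the thread pool.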

Re: Concurrency helper function - concurrently

On Mon, 27 Jan 2014 14:21:13 GMT, Sönke Ludwig wrote:

> Maybe it was just a suboptimal example, but since I/O should usually use async I/O, simply using runTask instead of runWorkerTask should suffice here, so no Isolated or shared would be necessary.
>
> But a very similar and slightly more general concept is "promises" or "futures". Those, in two flavors - one for normal tasks and one for worker tasks - would be a valuable addition to vibe.core.concurrency for sure. I haven't used them much in practice personally, but they are also a very nice way to add concurrency to an existing code base.

Yes, it was a bad example; I like to think it would also be effective for running long computations without having to think too much about it.

I know futures very well from the C++11 spec and Qt; while I may not be the best at working with them, I get very passionate very easily when they're involved:

http://en.cppreference.com/w/cpp/thread/future
https://qt-project.org/doc/qt-5/qtconcurrent-index.html

I prefer Qt's implementation, where you can poll for progress, pause, cancel, synchronize them, etc.

In vibe.d, I think all futures would have to be spawned by a dedicated task that communicates back and forth through a task message queue. Using some CTFE, the main task could synchronize them together, and assuming they mutate shared data, that part could be optimized automatically at compile time, with the kind of atomicity that is causing heavy headaches for a lot of people right now. There are a lot of features in D that could open the door to some very complex atomic operations. It could become one of its most significant strengths, imo.
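Leaving the CTFE/atomicity speculation aside, the "dedicated task that communicates through a message queue" part could look roughly like the following, approximated here with std.concurrency threads instead of vibe.d tasks; all names are invented for the sketch.

import std.concurrency : receive, receiveOnly, send, spawn, thisTid, Tid;

// A job request: what to run and who to report the result back to.
struct Job
{
	int function() work;
	Tid replyTo;
}

// Shutdown request for the dispatcher.
struct Stop {}

// The dedicated task: everything reaches it through its message queue,
// so the work it performs needs no locks or atomics.
void dispatcher()
{
	bool running = true;
	while (running)
	{
		receive(
			(Job j) { j.replyTo.send(j.work()); },
			(Stop _) { running = false; }
		);
	}
}

int fortyTwo() { return 6 * 7; } // stand-in for a real computation

void example()
{
	auto disp = spawn(&dispatcher);
	disp.send(Job(&fortyTwo, thisTid)); // enqueue work and keep going
	// ... do other things concurrently ...
	auto answer = receiveOnly!int();    // effectively the future's "get"
	disp.send(Stop());
}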

Re: Concurrency helper function - concurrently

On 2014-01-28 13:31, Etienne wrote:

> In vibe.d, I think all futures would have to be spawned by a dedicated task that communicates back and forth through a task message queue.

I was reading this:
http://www.fieryrobot.com/blog/2010/06/27/a-simple-job-queue-with-grand-central-dispatch/

and had a few ideas that I wanted to write down here, as a kind of journal of crazy ideas on the subject. It's not a feature request, just some ideas.

The idea is to either create a dedicated task for each mutable variable that handles all operations on it through its task message queue, or to keep an access queue for them in a central task and go through that task's message queue instead. I think the choice mostly depends on the cost of context switching. Tasks would have priority levels; the central task (dispatcher) having a priority level of 100 would mean it gets called every time another task in its thread yields.

Worker tasks would communicate their atomic operations to the central task through the task message queue, and each operation would be adjusted to fit a predicted order of operations.

The goal would be to have an algorithm that automatically optimizes algorithms for concurrency, e.g. give it the Fibonacci sequence algorithm and have it saturate the CPUs with worker tasks.
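As a hand-wavy sketch of the first of the two options above - one owner task per mutable variable, with every operation going through its message queue - here is what that could look like. Again std.concurrency threads stand in for vibe.d tasks, and the message types are invented for the example.

import std.concurrency : OwnerTerminated, receive, receiveOnly, send, spawn, thisTid, Tid;

// Messages the owner task understands.
struct Increment { int by; }
struct Read { Tid replyTo; }

// The variable lives inside this task; because all mutations arrive
// through its message queue, they are serialized without locks.
void counterOwner()
{
	int value = 0; // only this task ever touches it
	bool done;
	while (!done)
	{
		receive(
			(Increment m) { value += m.by; },
			(Read m)      { m.replyTo.send(value); },
			(OwnerTerminated _) { done = true; } // the spawning task is gone
		);
	}
}

void example()
{
	auto counter = spawn(&counterOwner);
	counter.send(Increment(1));       // fire-and-forget mutation
	counter.send(Read(thisTid));      // ask for the current value
	auto current = receiveOnly!int(); // blocks until the owner replies
}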