RejectedSoftware Forums


Re: Expose and consume Thrift services

On Wed, 04 Jun 2014 16:12:18 GMT, David Nadlinger wrote:

Multiplexing multiple services onto one port/connection was only added very recently to Thrift, i.e. after I wrote the D implementation. Nobody has updated the D library to support it since, though it should not be too hard to add (see https://issues.apache.org/jira/browse/THRIFT-1915). How do you envision this to interact with the routing mechanism?

The word 'routing' was a poor choice. I meant just implementing the multiplexing protocol and exposing it cleanly, maybe something like this:

alias Services = VibedThriftMultiplexedService!("foo", Foo, "bar", Bar);
listenThrift!Services(thriftServerSettings);

Re: Expose and consume Thrift services

On Thu, 05 Jun 2014 08:45:47 GMT, Tavi wrote:

interface Foo { void foo(); }
class FooImpl : Foo { override void foo() {} }

shared static this()
{
  auto thriftServerSettings = new VibedThriftServerSettings;
  thriftServerSettings.port = 9090;
  thriftServerSettings.bindAddresses = ["::1", "127.0.0.1"];

  // use the default TBinaryProtocol and TBufferedTransport
  alias FooService = VibedThriftService!(Foo, FooImpl);

  listenThrift!FooService(thriftServerSettings);
}

For each connection, a new FooImpl processor object will be created. Maybe using a free list (or a typed custom allocator) would be better.

I'm not sure if I agree with your rationale for creating one service instance per connection. In the Thrift universe, a service is semantically something stateful which clients can invoke methods on via a stateless connection. In this regard, Thrift is actually very similar to a REST architecture. For example, the instance implementing Foo would typically access some database or your in-process domain model.

Thus, there needs to be some way for whatever processes the client request to access that state. In the vanilla Thrift library (in the C++, Java and, by extension, also D implementation), this is achieved by having one instance of the service interface implementation per server.
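For reference, the one-handler-per-server model described above looks roughly like this with the vanilla Thrift D library (following the Thrift tutorial; `Calculator` and `CalculatorHandler` stand in for a generated service interface and its implementation):

```d
import thrift.codegen.processor;
import thrift.protocol.binary;
import thrift.server.simple;
import thrift.server.transport.socket;
import thrift.transport.buffered;

void main()
{
  // A single handler instance serves all connections, so any state
  // it holds (database handles, domain model, ...) is shared.
  auto processor = new TServiceProcessor!Calculator(new CalculatorHandler);
  auto serverTransport = new TServerSocket(9090);
  auto transportFactory = new TBufferedTransportFactory;
  auto protocolFactory = new TBinaryProtocolFactory!();

  auto server = new TSimpleServer(
    processor, serverTransport, transportFactory, protocolFactory);
  server.serve();
}
```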

Now, of course, in some cases keeping the connections entirely stateless is just not enough. You mentioned one of them, authentication. One way, of course, would be to push all authentication concerns down to the transport layer. This precludes, for example, simply using RPC calls for "logging in", though (and let us ignore for now whether this is a good pattern in the first place). For this reason the vanilla Thrift implementation, while conceptually geared towards a stateless model, offers a way to attach contextual information to a connection: either by writing a custom TProcessorFactory to create a processor instance per connection, or by using a TProcessorEventHandler.
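As an illustration only (the class below is hypothetical, not the actual Thrift D API): the per-connection-processor idea boils down to a factory that the server's accept loop invokes for every new connection, trading the shared handler for one short-lived handler per connection:

```d
import thrift.codegen.processor;
import thrift.protocol.processor;

// Hypothetical sketch: a factory invoked once per accepted connection.
// Foo/FooImpl stand in for a generated interface and its implementation.
class PerConnectionProcessorFactory
{
  // Called for every new connection. The fresh handler can carry
  // connection-local state, e.g. the authenticated user after a
  // "log in" RPC call on that same connection.
  TProcessor createProcessor()
  {
    return new TServiceProcessor!Foo(new FooImpl);
  }
}
```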

I'm almost certain that it is possible to find a more natural, less "Java-ish" design for this in the context of the Vibe.d I/O model. However, it will need to address the question of how to share state somehow.

David

Re: Expose and consume Thrift services

On Thu, 05 Jun 2014 15:09:57 GMT, David Nadlinger wrote:

I'm not sure if I agree with your rationale for creating one service instance per connection. In the Thrift universe, a service is semantically something stateful which clients can invoke methods on via a stateless connection.

...

Now, of course, in some cases, keeping the connections entirely stateless is just not enough. You mentioned one of them, authentication.

...

For this reason the vanilla Thrift implementation, while conceptually geared towards a stateless model, offers a way to attach contextual information to a connection: either by writing a custom TProcessorFactory to create a processor instance per connection, or by using a TProcessorEventHandler.

I'm almost certain that it is possible to find a more natural, less "Java-ish" design for this in the context of the Vibe.d I/O model. However, it will need to address the question of how to share state somehow.

Well, it is precisely this shared state that I want to avoid. As long as it is transparent to the service clients, we do not have to follow the vanilla Thrift server implementations. In a non-blocking, potentially multi-threaded environment, shared state is a pain.

I am a fan of the actor model, where every entity that has state is encapsulated in an actor instance that executes code (processes messages) only serially, but in parallel with all other actors. I see a (multi)service processor as such an actor: the method invocations are (literally) messages passed to it, and they are processed serially. Each client connection is represented by one processor instance, which may hold some state; being bound to a TCP connection, the messages/method calls arrive and are dispatched serially. The processor may communicate with the outside using async I/O, either with the database (async with vibe.d sockets: ddb, mysql-native), with other Thrift services, or with HTTP (REST) servers.

A service that represents global domain state should not be exposed directly on the public network and has nothing to gain from a vibe.d async environment. Such a (multi)service would live in a different process/thread/machine, hosted in a vanilla Thrift server (although something should change there too: right now a connection takes over the server/processor, and client requests are not processed FIFO-wise).

Re: Expose and consume Thrift services

I thought a bit about a service-based architecture around vibe.d's async model. A few points were already discussed here: the sketched implementation of a service where a different handler object is created for each client connection, and the service skeleton from David where a single handler object is used.

Now I would like to sketch a service where the handler is a singleton (not quite finished and not integrated into the Thrift server model). This would use a proxy/stub model in order to guarantee that no client request is handled during the processing of another request.

The vision would be to have different proxies/stubs that allow simple implementation and invocation of services, like plain method calls, whether they run on a separate fiber, a separate thread, or a different process/machine. A service method call would yield the current fiber and, if the method is not a one-way call, wait for the response. This waiting must be done with a priorityReceive that checks only the priority message queue; thus the client requests are serialized, but responses from subsequent service calls can still be received. The tricky part would be avoiding deadlocks (maybe through a global service registry to trace the call graph). Also, the excessive copying of arguments caused by message sending should be avoided somehow.

So, back to the skeleton for the same process, one fiber for the service handler, multiple fibers for the clients.

Having this service definition (generated from the sample thrift idl):

enum Operation {
  Add = 1,
  Subtract = 2,
  Multiply = 3,
  Divide = 4
}

struct Work {
  int x;
  int y;
  Operation op;
}

interface Calculator {
  int calculate(ref const(Work) w);

  enum methodMeta = [
    TMethodMeta(`calculate`, 
      [TParamMeta(`w`, 1)]
    )
  ];
}

would generate a proxy class like the following (for clarity, the template parameter is shown replaced):

class ServiceProxy(Calculator) : Calculator
{
  Tid _tid; // The task id where the actor is running
  this(Tid tid) { _tid = tid; }

  /* Send a request to the stub */
  auto sendRequest(alias Method, Args...)(Args args) 
  {
    send!(Method.RequestMessage, Tid, Method.requestParamTypes)(_tid, Method.RequestMessage(), Task.getThis(), args);
    static if(!is(Method.RetType == void)) 
    {
      // Wait for the response
      // TODO: use priorityReceive() to get only from the priority message queue. 
      //       on the normal queue may come other client requests.
      Method.RetType retValue;
      receive(
          (Method.ResponseMessage, Method.RetType r) { retValue = r; },
          (Method.ResponseMessage, shared(Exception) exc) { throw exc; }
      );
      return retValue;
    }
  }

  /* 
    Generated methods, implementing the interface
    mixin(ServiceProxyStub!(Iface).proxyCode)
  */
  int calculate(ref const(Work) w)
  {
    return sendRequest!(MethodInfo!("calculate", calculate))(w);
  }
  
}

The stub part runs in its own task/fiber; here again the template parameter is replaced with the concrete type, and the generated code is included.

class ServiceStub(Calculator) : WhiteHole!Calculator
{
static:
  /* The stub task/fiber function of the service */
  public void serviceStubFunc(Calculator handler)
  {
    receive(
      /*
        Generated code 
        mixin(ServiceProxyStub!(Iface).stubCode)
      */
      stubDelegateRequestResponse!(MethodInfo!("calculate", calculate), typeof(&handler.calculate))(&handler.calculate) 
    );
  }

  /* Stub delegate to handle a request and send back the result.
    It will be used with concurrency.receive in the stub fiber function. 
    The Handler is the delegate of the service handler object. 
    It may have slightly different arguments than those in Args..., 
    because we removed any type qualifiers for sending/receiving. */
  private auto stubDelegateRequestResponse(alias Method, Handler)(Handler handler)
  {
    return (Method.RequestMessage, Tid proxyTid, Method.requestParamTypes args) 
    { 
      try 
      {
        // Invoke the handler and send back the result to the proxy
        prioritySend(proxyTid, Method.ResponseMessage(), handler(args));
      } 
      catch(shared(Exception) e) 
      {
        // Send the exception back to the proxy
        prioritySend(proxyTid, Method.ResponseMessage(), e);
      }
    };
  }

  /* Stub delegate to handle a one-way request.
    It will be used with concurrency.receive in the stub fiber function. 
    The Handler is the delegate of the service handler object. 
    It may have slightly different arguments than those in Args..., 
    because we removed any type qualifiers for sending/receiving. */
  private auto stubDelegateOneWayRequest(alias Method, Handler)(Handler handler)
  {
    return (Method.RequestMessage, Tid proxyTid, Method.requestParamTypes args) 
    { 
      try 
      {
        handler(args);
      } 
      catch(shared(Exception) e) 
      {
        // TODO: log
      }
    };
  }
}

Client code would look like this (it must run from a task/fiber in order to receive the response):

auto calc = runService!CalculatorHandler();
Work w;
w.x = 40;
w.y = 2;
w.op = Operation.Add;
int result = calc.calculate(w);
writeln(result);

What are your thoughts about such a method call dispatching using vibe's concurrency (priority)send/receive?
