

Currently this plugin is developed and tested against RabbitMQ 3.3.5. Problems may occur depending on the RabbitMQ version you are using with Ariane; if you run into such a case, contact us.
This documentation page describes the RabbitMQ plugin, which extends the Ariane Core Directory and adds RabbitMQ injector intelligence into Ariane (responsible for getting the data from your RabbitMQ infrastructure, transforming it and pushing it into the Mapping DB). You can also use this documentation as inspiration to write other internal(*) injector based plugins.

The RabbitMQ plugin is divided into three submodules, described in more detail in the rest of this document: 

  • the directory module
  • the jsonparser module
  • the injector module

(*) NOTE: we speak about internal injectors here because, contrary to external injectors, the active runtime of these injectors lives in the Ariane Core Server runtime.
We can surely debate the opportunity to distribute this computing - and in fact we do at echinopsii - but currently we use JVM thread performance as much as possible, which on the other hand really simplifies deployment.

Directory entities (directory module)

The Ariane RabbitMQ plugin adds the following entities to the directory:

  • RabbitMQ nodes, where you define the URL and authentication used to access the internal RabbitMQ configuration
  • RabbitMQ clusters (which are just the definition of a cluster name and a set of RabbitMQ nodes)
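
As an illustration, a single-node entry and a cluster entry could look like the sketch below. This is a hedged Python sketch only: the field names are assumptions for illustration, not the actual Ariane directory schema.

```python
# Hypothetical shapes for the two directory entities added by the plugin.
# All field names are illustrative, not the real Ariane directory schema.

rbq_node = {
    "name": "rbqnode-1",
    "url": "http://rbqnode-1.example.com:15672",  # RabbitMQ management API URL
    "user": "ariane",                             # authentication used by the sniffers
    "password": "secret",
}

rbq_cluster = {
    "name": "rbqcluster-prod",  # the cluster is just a name ...
    "nodes": [rbq_node],        # ... plus a set of RabbitMQ nodes
}
```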


Gears and Components

Basically, each RabbitMQ injector gear is an administrable Akka actor you can stop or start from the Ariane web UI. Each RabbitMQ injector gear is defined to interact with one datasource only, to avoid locking multiple datasources in one thread.
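
Conceptually, a gear is a small supervised loop bound to exactly one datasource. The sketch below mimics that behaviour in plain Python; the real gears are Akka actors, and the class and method names here are purely illustrative.

```python
import threading
import time

class Gear:
    """Illustrative stand-in for an administrable injector gear.

    Each instance owns exactly one datasource, so a slow or stuck
    datasource can never block the work of another gear.
    """
    def __init__(self, datasource, work, period=1.0):
        self.datasource = datasource
        self.work = work              # callable invoked on each tick
        self.period = period          # configurable delay between two runs
        self._stop = threading.Event()
        self._thread = None

    def start(self):                  # what the Ariane web UI "start" would trigger
        self._stop.clear()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop(self):                   # what the Ariane web UI "stop" would trigger
        self._stop.set()
        self._thread.join()

    def _loop(self):
        while not self._stop.is_set():
            self.work(self.datasource)
            self._stop.wait(self.period)

ticks = []
gear = Gear("rbqnode-1", ticks.append, period=0.01)
gear.start()
time.sleep(0.05)
gear.stop()
```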

So we have the following kinds of gears: 

  • the directory gear, which is responsible for looking at the directory to check if there are new RabbitMQ instances (cluster or single node) to analyse. This gear is responsible for starting the sniffing gears (see below). 
    The period between two directory checks is configurable.
  • the component gears, which are responsible for getting the RabbitMQ instance configuration data and pushing it to a persisted cache (Infinispan - local file) through a Component object. Each Component object has several fields describing the data sniffed from RabbitMQ, but an important point to keep in mind is that there are two fields for the same RabbitMQ configuration data: 
    • one field for the last sniff
    • one field for the new sniff
    That way we greatly simplify the diff algorithm between the two sniffs.
    The period between two sniffs is configurable.
    When a sniff is done, the component gears send a message to the mapping gear (Akka in-memory tell).
  • the mapping gear, which is responsible for getting the data coming from the sniffing, doing the diff between the last sniff and the new sniff, and transforming the RabbitMQ configurations to fit the Mapping DB model. 
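
The two-field trick on the Component object can be sketched like this (illustrative Python only; the field and key names are assumptions, not the actual Component class):

```python
# Illustrative Component object: the same configuration data is kept twice,
# once as sniffed last time and once as sniffed now, so the diff becomes a
# straightforward comparison between the two fields.

class Component:
    def __init__(self):
        self.last_queues = {}  # queues as of the previous sniff
        self.new_queues = {}   # queues as of the current sniff

    def sniff(self, queues):
        self.last_queues = self.new_queues  # promote the previous sniff
        self.new_queues = dict(queues)      # store the fresh data

    def diff(self):
        added = set(self.new_queues) - set(self.last_queues)
        removed = set(self.last_queues) - set(self.new_queues)
        changed = {q for q in set(self.new_queues) & set(self.last_queues)
                   if self.new_queues[q] != self.last_queues[q]}
        return added, removed, changed

c = Component()
c.sniff({"orders": {"durable": True}, "logs": {"durable": False}})
c.sniff({"orders": {"durable": False}, "audit": {"durable": True}})
added, removed, changed = c.diff()
# added == {"audit"}, removed == {"logs"}, changed == {"orders"}
```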

Mapping transformations

So, to fit our mapping needs, we transform the RabbitMQ objects into supported mapping objects.

Here is the basic mapping transformation table: 

  • RabbitMQ Cluster → Cluster
  • RabbitMQ Node → Container
    As an administrable component, a RabbitMQ Node is definitively a container.
  • RabbitMQ VHost → Node (owned by a RabbitMQ Node)
    [container:RabbitMQ Node] -owns-> {node:RabbitMQ VHost}
  • RabbitMQ Queue → Node (owned by a RabbitMQ VHost)
    {node:RabbitMQ VHost} -owns-> {node:RabbitMQ Queue}
  • RabbitMQ Exchange → Node (owned by a RabbitMQ VHost)
    {node:RabbitMQ VHost} -owns-> {node:RabbitMQ Exchange}
  • RabbitMQ Binding → Endpoint (owned by a RabbitMQ Exchange or Queue)
    {node:RabbitMQ Queue} -owns-> (endpoint:RabbitMQ Binding)
    NOTE: a queue and an exchange are then linked through their binding endpoints and a memory link.
  • RabbitMQ Channel + Connection → Endpoint (owned by a RabbitMQ Queue or Exchange, or by a RabbitMQ Client Consumer or Producer)
    Here we can extend what we see from a RabbitMQ node to define the client container / consumer and producer.
    We decided to merge RabbitMQ Channel + Connection into a single endpoint to provide application links between these endpoints, e.g.:

    [container:RabbitMQ Client process] -owns-> {node:RabbitMQ Client Consumer} -owns-> (endpoint:RabbitMQ Channel+Connection)
    (endpoint:RabbitMQ Channel+Connection) <-owns- {node:RabbitMQ Queue} 

    [container:RabbitMQ Client process] -owns-> {node:RabbitMQ Client Publisher} -owns-> (endpoint:RabbitMQ Channel+Connection)
    (endpoint:RabbitMQ Channel+Connection) <-owns- {node:RabbitMQ Exchange}
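
Put together, the transformation of a sniffed vhost topology into mapping objects can be sketched as follows. This is illustrative Python under assumed data shapes; the real Mapping DB API and object model differ.

```python
# Build a tiny mapping tree for one RabbitMQ node, following the
# transformation table above: RabbitMQ node -> container, vhost/queue/
# exchange -> nodes, binding -> endpoint. Names are illustrative only.

def map_rbq_node(rbq_node_name, vhosts):
    container = {"kind": "container", "name": rbq_node_name, "nodes": []}
    for vhost_name, topology in vhosts.items():
        vhost = {"kind": "node", "name": vhost_name, "nodes": []}
        for queue in topology.get("queues", []):
            vhost["nodes"].append({"kind": "node", "name": queue,
                                   "endpoints": []})
        for exchange in topology.get("exchanges", []):
            vhost["nodes"].append({"kind": "node", "name": exchange,
                                   "endpoints": []})
        for src, dst in topology.get("bindings", []):
            # a binding becomes an endpoint owned by the queue and by the
            # exchange it ties together, which then share a link
            for node in vhost["nodes"]:
                if node["name"] in (src, dst):
                    node["endpoints"].append(
                        {"kind": "endpoint", "binding": (src, dst)})
        container["nodes"].append(vhost)
    return container

tree = map_rbq_node("rbqnode-1", {
    "/": {"queues": ["orders"], "exchanges": ["amq.direct"],
          "bindings": [("amq.direct", "orders")]},
})
```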

Finally, to link the cluster nodes to each other, we define a specific cluster node with specific endpoints and links on each RabbitMQ node.
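
The cluster wiring can be pictured with the following sketch (hedged Python; the structure and names are assumptions, the real plugin does this through the Mapping DB API):

```python
# For each RabbitMQ node container, add a dedicated cluster node carrying
# one endpoint per peer; links then pair the endpoints of each member.
import itertools

def link_cluster(cluster_name, containers):
    for c in containers:
        c["cluster_node"] = {"name": cluster_name, "endpoints": []}
    links = []
    for a, b in itertools.combinations(containers, 2):
        ep_a = {"owner": a["name"], "peer": b["name"]}
        ep_b = {"owner": b["name"], "peer": a["name"]}
        a["cluster_node"]["endpoints"].append(ep_a)
        b["cluster_node"]["endpoints"].append(ep_b)
        links.append((ep_a, ep_b))     # one link per pair of members
    return links

members = [{"name": "rbqnode-1"}, {"name": "rbqnode-2"}, {"name": "rbqnode-3"}]
links = link_cluster("rbqcluster-prod", members)
# 3 members -> 3 pairwise links, 2 endpoints on each member's cluster node
```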

Final rendering



This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.


