Portico Architectural Overview

The Portico RTI employs a modular, flexible architecture designed to support the augmentation or replacement of behaviour at a number of points throughout the framework. A simple plug-in framework allows any developer to alter the way Portico behaves, or extend it to provide new behaviour with as little effort as possible. Beyond the actions the RTI takes, the protocol or communication mechanism used by a federate to communicate with the RTI server can also be changed quickly and easily, with plug-ins providing for the possibility of supporting new and innovative communications mechanisms into the future.

Although the basic infrastructure used by Portico is designed to facilitate HLA activities, the fact that it acts as an RTI at all is due to the default set of plug-ins that Portico ships with. Figure 1 below provides a bird's-eye view of the Portico architecture:



In the image above, there are five points at which plug-ins can load their own logic. These are:


 * The lrc-request sink: When a call is made to the RTIambassador, it is turned into the appropriate message type and sent to this sink. The general role of handlers in this sink is to perform any validation that can occur on the LRC side before sending the request to the RTI.
 * The rti-request sink: This sink processes all incoming messages from any relevant remote LRCs. If any asynchronous work needs to occur, a new request object needs to be created and placed on the "Action Queue" for later processing.
 * The rti-action sink: This is where any federation-wide work is conducted. Requests are taken off the "Action Queue" and processed. If that processing results in the need to send callback messages to the federates, a callback message is created and placed in the "Callback Queue".
 * The lrc-callback sink: Callbacks are released from the LRC queue when a federate calls the tick() method on an RTIambassador instance. These messages are passed through the lrc-callback sink, where they should trigger the appropriate FederateAmbassador callback.
 * The LRC and RTI connections: Conceptually, these are two parts of the same component. The communications mechanism used between the LRC and the RTI is hidden behind this facade. These classes are for sending messages to and from the RTI over whatever protocol the implementation happens to use.
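
The sink concept above can be sketched as a simple type-to-handler dispatch table. This is a hypothetical illustration of the plug-in idea only; the class and method names below are invented, not Portico's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch: "Message" and "MessageSink" are invented names.
public class SinkSketch {
    public interface Message {
        String type();
    }

    // A sink maps message types to handlers; a plug-in extends or overrides
    // behaviour simply by registering a handler for a message type.
    public static class MessageSink {
        private final Map<String, Consumer<Message>> handlers = new HashMap<>();

        public void register(String messageType, Consumer<Message> handler) {
            handlers.put(messageType, handler);
        }

        public void process(Message message) {
            Consumer<Message> handler = handlers.get(message.type());
            if (handler == null)
                throw new IllegalStateException("no handler for " + message.type());
            handler.accept(message);
        }
    }
}
```

Registering a new handler for an existing message type is all that is required to change how that message is processed, which is the essence of the plug-in points listed above.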

If you want to alter the way the RTI responds to (for example) time advance requests, you could add an additional handler for that message type into the appropriate sink (or sinks, if changes are needed in multiple locations).

Connections are the odd-one-out in the list above. Together, the connections form part of a "communications binding". It is these bindings that define how an LRC communicates with an RTI (and vice versa). Unlike other RTI implementations, the Portico communications facilities and protocols are not fixed, nor does their implementation reach into the rest of the framework. Rather, all communications-specific code is hidden behind two facades, and the binding used can be changed via a single system property.

When an LRC starts up, it consults a system property to see what connection class it should be using. The way that connection class communicates with the RTI is entirely dependent on its implementation. It could communicate over TCP, through a database connection, via Web Services, over Jabber, through SMTP or via any other protocol/communications mechanism, as long as a communications binding exists, and the RTI server supports it. For more information on bindings, see the Portico Bindings document.
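
A minimal sketch of how a connection class might be selected at startup follows. The property name `portico.connection`, the `Connection` interface, and the default class are all assumptions made for illustration; Portico's real property and class names will differ.

```java
// Sketch of choosing a connection implementation via a system property.
// "portico.connection" and the interface below are illustrative assumptions.
public class BindingSketch {
    public interface Connection {
        void send(byte[] message);
    }

    // A trivial default binding used when no property is set.
    public static class JvmConnection implements Connection {
        public void send(byte[] message) { /* in-memory delivery */ }
    }

    public static Connection loadConnection() {
        String className = System.getProperty("portico.connection",
                                              JvmConnection.class.getName());
        try {
            // Reflectively instantiate whatever binding the property names.
            return (Connection) Class.forName(className)
                                     .getDeclaredConstructor()
                                     .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("cannot load binding: " + className, e);
        }
    }
}
```

Because the binding is resolved by name at runtime, swapping protocols is a matter of setting one property, with no changes to the rest of the framework.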

Summary
The important point to take away from this brief introduction to the Portico architecture is that there are many places into which a developer can insert their own code. The plug-in facilities provide a quick and easy way to implement and deploy new features. Each of the points mentioned above acts as a kind of wall-socket into which you can plug your own special appliances (whether you write them yourself, or obtain them from a 3rd party).

How Messages Flow Through the Framework
To get a better understanding of how each of the components above fits together, this section outlines some example usage scenarios, demonstrating how messages flow through the Portico framework.

Processing User Requests in the LRC
Throughout this section, we will follow the progression that occurs when a federate calls registerFederationSynchronizationPoint() on their RTIambassador. Figure 2 shows how the incoming call is handled within the LRC. As with all the other diagrams in this section, the actions are numbered.



As the request comes into the RTIambassador, it is converted into an instance of the appropriate message object. After this, the message is dropped into the lrc-request sink, which will pass it on to the appropriate handler for the request message type*.

In the case of a synchronization point registration request, this is the synchronization point registration handler. When passed an instance of the request message, the handler, like all lrc-request handlers, should perform any validation it can from the LRC. Sending messages to the RTI is potentially an expensive task, so as much validation as possible is done on the LRC side.

Once any local validation has completed, the message is passed to the LRC connection, which will then transfer it to the RTI. How the message is transferred is entirely dependent on the connection implementation. Once the message has been given to the connection, the LRC assumes that it will make its way to the RTI.

RTIambassador calls require return values. In most cases the return type for these calls is void; however, the LRC must still block until it receives either confirmation from the RTI that the message was received, or an error. Thus, when a message is handed to the LRC connection, processing will block until the connection is able to return a response.
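
That blocking exchange can be sketched as follows. The `Connection` and `Response` types are invented for illustration, and the sketch omits the timeout a real implementation would need.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the blocking request/response exchange between a handler and
// the LRC connection. All names here are illustrative.
public class BlockingSendSketch {
    public static class Response {
        public final boolean success;
        public final String error;
        public Response(boolean success, String error) {
            this.success = success;
            this.error = error;
        }
    }

    public interface Connection {
        CompletableFuture<Response> send(Object request);
    }

    // Hand the request to the connection, then block until a response
    // arrives; raise if the RTI reported an error.
    public static Response sendAndWait(Connection connection, Object request) {
        Response response = connection.send(request).join();
        if (!response.success)
            throw new IllegalStateException("RTI rejected request: " + response.error);
        return response;
    }
}
```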

* All Message Sinks will pass any incoming messages through a chain of global pre-process handlers before handing the message off to the appropriate handler. See the Commons Messaging Framework documentation for more information.

Processing User Requests in the RTI
Once the request message has been sent over the connection from the LRC, processing can begin in the RTI. Figure 3 shows the flow of actions this causes.



When a request is received in the RTI connection, it is passed directly on to the rti-request sink where it can be processed. At this point, the appropriate handler makes any additional checks that could not occur on the LRC before deciding what to do next.

Once the RTI handler has validated the request, it has two options:


 1. Process the request immediately
 2. Queue more work for later processing

The appropriate action depends on the situation. Generally speaking, if a request has the potential to generate any callbacks, new request instances of the appropriate type should be created and placed on the "action queue" where they will be asynchronously processed later. This frees the handler up to generate an appropriate response message that will be passed back to the waiting LRC. Generally, this is just a signal that the request was a success as far as the RTI can tell, but it could include actual values or objects that the LRC could use.

Once the rti-request handler has finished any necessary work and placed any further requests onto the action queue, it returns, and the RTI connection can send the response back to the LRC over whatever communication mechanism it is using. It is at this point that processing branches out.

Firstly, the response can now be processed in the LRC (discussed in the next sub-section). Secondly, a separate, dedicated thread within the RTI can process any requests placed on the action queue (discussed further down).
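
The validate-then-queue pattern described above might look roughly like this, with invented names and a plain string standing in for real message objects.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the rti-request pattern: validate, queue deferred work on the
// action queue, and return an immediate acknowledgement. All names are
// illustrative, not Portico's actual API.
public class RtiRequestSketch {
    public static final BlockingQueue<String> ACTION_QUEUE = new LinkedBlockingQueue<>();

    // Returns the response that is sent straight back to the waiting LRC.
    public static String handleSyncPointRegistration(String label) {
        if (label == null || label.isEmpty())
            return "ERROR: empty label";             // validation failure
        ACTION_QUEUE.offer("announce:" + label);     // deferred, async work
        return "OK";                                 // immediate acknowledgement
    }
}
```

The handler never performs the announcement itself; it only schedules it, which is what lets the response return to the LRC quickly.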

Returning Values from the RTI to the LRC
As shown in Figure 4, when a response is received from the RTI, processing returns to the LRC.



Once the LRC connection has received the response message from the RTI, processing returns to the previously blocked lrc-request handler. At this point, it can inspect the response and take any appropriate actions. If the response was successful, information sometimes needs to be recorded locally. For example, when a federate disables time regulation, a success message will cause the LRC to record this change in its local state object (an object that holds LRC status information). If the response was an error, the handler can likewise take whatever action is appropriate.

After the handler has finished, processing returns to the Message Sink* and then back to the RTIambassador. At this point, the RTIambassador may extract a value from the response and return it to the federate (such as the federate handle, as is the case when a federate joins a federation). However, as the most common return type for RTIambassador calls is void, it will usually just return processing to the federate.

This completes processing as far as the federate is concerned. However, within the RTI, further processing could be occurring asynchronously as a result of the additional requests placed on the action queue. It is that processing that will generate FederateAmbassador callbacks.

* It is at this point that any global post-process handlers in the Message Sink can run.

Generating Callbacks
Having seen how an LRC request is handled, we must turn our attention to the additional request instances that were created in the RTI and placed on the action queue for deferred, asynchronous processing. It is these requests that are mainly responsible for stimulating the callbacks that are sent asynchronously to the LRC and transformed into FederateAmbassador calls.

The first part of this process is shown in Figure 5:



There are two separate activities involved in this process, the first of which is the processing of messages that reside on the action queue. Each active federation within the Portico server contains a thread that is dedicated to removing requests from the action queue when they are available, and passing them to the rti-action sink. This thread is known as the "Action Queue Processor" (AQP).

As soon as messages are available for processing on the action queue, the AQP will extract them and pass them on to the "rti-action" sink. As usual, the requests make their way through the sink to the appropriate handler. It is at this point that the action originally requested by the federate is actually taken. In the case of a synchronization point registration request, the point label is recorded within the RTI and announcement callback messages are created for each joined federate.

Sending callbacks can be an expensive task given the involvement of I/O. As there is only a single AQP thread per federation (in order to reduce synchronization and concurrency problems), the actual sending of the callback messages is completed separately. This frees the AQP up and ensures that it is always available to process more incoming messages, rather than wasting its time with I/O operations.

Thus, when an rti-action sink handler generates a callback (or many callbacks), they are placed on a separate queue. Patrolling this queue are a number of dedicated callback threads known as "Callback Queue Processors" (CQPs). The role of the CQPs is simple: extract messages from the callback queue and send them to the appropriate connection. The exact number of CQPs operating within a federation is unspecified and open to change.
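
The AQP/CQP arrangement can be sketched with two blocking queues and plain threads. Thread counts, queue contents, and all names here are illustrative only.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the AQP/CQP split: one thread serializes federation work, a
// small pool of callback threads handles the I/O-bound delivery.
public class QueueProcessorSketch {
    public static List<String> run(List<String> requests, int cqpCount) {
        BlockingQueue<String> actionQueue = new LinkedBlockingQueue<>(requests);
        BlockingQueue<String> callbackQueue = new LinkedBlockingQueue<>();
        List<String> delivered = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch done = new CountDownLatch(requests.size());

        // Single Action Queue Processor: keeps federation processing serialized.
        Thread aqp = new Thread(() -> {
            try {
                while (true) {
                    String request = actionQueue.take();
                    callbackQueue.put("callback:" + request); // rti-action result
                }
            } catch (InterruptedException stopped) { /* shut down */ }
        });
        aqp.start();

        // Several Callback Queue Processors: delivery runs off the AQP thread.
        List<Thread> cqps = new ArrayList<>();
        for (int i = 0; i < cqpCount; i++) {
            Thread cqp = new Thread(() -> {
                try {
                    while (true) {
                        delivered.add(callbackQueue.take());
                        done.countDown();
                    }
                } catch (InterruptedException stopped) { /* shut down */ }
            });
            cqp.start();
            cqps.add(cqp);
        }

        try {
            done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException ignored) { }
        aqp.interrupt();
        cqps.forEach(Thread::interrupt);
        return delivered;
    }
}
```

Keeping the AQP single-threaded avoids synchronization problems in federation state, while the CQP pool keeps slow I/O from stalling it.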

Consuming Callbacks
The process that allows a FederateAmbassador callback to be triggered on the LRC side is shown in Figure 6:



There are actually two processes at work here. The first handles incoming callback messages from the RTI. The second distributes these messages to the FederateAmbassador as required.

As callback messages are received by an LRC, they are automatically added to the LRC's callback queue. This isn't a typical queue; rather, it is designed to meet the message-releasing requirements of the HLA specification. We won't go into details here (see the javadoc), but broadly speaking, it adds behaviour such as ensuring that timestamped messages are not released to a constrained federate until the appropriate logical time has been reached.
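
A much-simplified sketch of such a queue follows, gating release only on a single logical time and ignoring the specification's finer rules. The `Callback` type and method names are invented.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a callback queue that holds timestamped messages back until the
// federate's logical time reaches them. A deliberate simplification.
public class CallbackQueueSketch {
    public static class Callback {
        public final String name;
        public final Double timestamp; // null = receive-order, release immediately
        public Callback(String name, Double timestamp) {
            this.name = name;
            this.timestamp = timestamp;
        }
    }

    private final Deque<Callback> queue = new ArrayDeque<>();
    private double logicalTime = 0.0;

    public void add(Callback cb) { queue.addLast(cb); }
    public void advanceTo(double time) { logicalTime = time; }

    // Release the head message only if it is untimestamped or its time has come.
    public Callback poll() {
        Callback head = queue.peekFirst();
        if (head == null) return null;
        if (head.timestamp != null && head.timestamp > logicalTime) return null;
        return queue.pollFirst();
    }
}
```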

Once the message has been placed on the queue, a success response is sent back to the RTI and the CQP can move on to the next callback.

The process that converts the message object into an actual FederateAmbassador callback should be looking pretty familiar by now. In this case, only the stimulus is different. When the federate is ready to receive callbacks, it notifies the LRC via the tick() method. Much like the AQP in the RTI, this method will take the first available message from the callback queue and pass it to a Message Sink for processing (the "lrc-callback" sink in this case). The message will make its way to the appropriate handler, which will gain a reference to the FederateAmbassador for the federate and make the appropriate call.
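
The tick-driven delivery can be sketched as follows. Apart from the FederateAmbassador-style callback, the names are invented, and the real lrc-callback sink dispatch is collapsed into a direct call.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of tick(): pull one queued callback message and invoke the
// matching federate callback. Illustrative names only.
public class TickSketch {
    public interface FederateAmbassador {
        void announceSynchronizationPoint(String label);
    }

    private final Queue<String> callbackQueue = new ArrayDeque<>();
    private final FederateAmbassador fedamb;

    public TickSketch(FederateAmbassador fedamb) { this.fedamb = fedamb; }

    public void queueAnnouncement(String label) { callbackQueue.add(label); }

    // Deliver at most one pending callback; returns whether one was delivered.
    public boolean tick() {
        String label = callbackQueue.poll();
        if (label == null) return false;
        fedamb.announceSynchronizationPoint(label);
        return true;
    }
}
```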

The callback handler may also record any necessary details internally. For example, on a synchronization point announcement callback, it will store the label of the point locally so that when the federate attempts to notify the RTI that it has achieved the point, checks can be carried out to ensure that the point is valid and active.

Summary
This completes the description of how messages flow through the Portico framework. Although it is a topic of considerable length, the same types of constructs and processes are used over and over again, just in different locations.

Where to Next?
This completes the overview of the Portico framework. If you are a developer and want to learn more about how to write code for Portico, we strongly recommend that you look at the Writing Code for Portico article.