The most basic SIMPL process types are described below.
The item being tested here is the sender. The key to understanding simulators is that, provided the simulator conforms to the SIMPL naming convention expected by the sender and to all the message formats the sender can exchange, the sender will not be able to detect that it is talking to a test stub. As such, the sender process can be tested vigorously in a very realistic "sandbox" without having to alter the deployed executable in any fashion. There is no need for the conditional compiles, test flags, etc. that are typical of unit test scenarios in non-SIMPL designs. Once tested, the sender executable can be deployed as is in the final application.
The exact composition of the simulator code is highly dependent on the application. The diagram above illustrates a typical scenario in which one wants the ability to interact with the simulator directly via keyboard commands. In addition, the canned responses are being fed in from a data file.
One can imagine more sophisticated simulators where the whole test sequence is metered in from the data file in a highly controllable manner.
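The idea can be sketched in a few lines. This is plain Python standing in for the SIMPL message exchange, not the SIMPL C API; the request tokens and canned replies are invented for illustration.

```python
# Sketch of a simulator test stub. In a real SIMPL setup the stub would
# register under the receiver's SIMPL name; here a plain function stands
# in for the message exchange.

canned = {            # canned responses, as if metered in from a data file
    "STATUS?": "OK",
    "TEMP?": "21.5",
}

def simulator(request):
    # Conform to the message formats the sender under test expects.
    return canned.get(request, "UNKNOWN")

# The unmodified sender code exercises the stub exactly as it would the
# real receiver:
print(simulator("STATUS?"))   # OK
```

Because the stub honours the same names and formats, the sender executable under test needs no changes at all.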
The item being tested here is the receiver. As was the case with the simulators above, the key is that, provided the stimulator conforms to all messaging and naming conventions, the receiver process has no way of knowing whether a message came from a stimulator or from the real sender in the final application.
As was the case with the sender process in the simulator example, the receiver under test here can be the final deployable executable in all respects. Once again, no conditional compilation or other executable-altering techniques are required in the SIMPL paradigm.
As with the simulator, the typical stimulator contains a keyboard interface for the tester to interact with. More sophisticated stimulators may feed the test input from a data file.
The importance of being able to test deployable executables in a SIMPL application cannot be overemphasized. In our experience this is one of the most compelling reasons for considering the SIMPL paradigm when designing software applications.
The basic relay operation is quite easy to grasp. The sender believes that the relay process is the intended receiver for its messages; it performs all the normal name locate and send operations as if it were part of a simple sender/receiver pair. The relay process, on the other hand, does nothing at all with the message. It simply copies it through to the registered receiver process. When the receiver gets the relayed message it retrieves it in the normal manner. Once the message is processed, the receiver places the reply in the relay's reply area and replies, passing the sender ID back to the relay process. The relay then simply copies the reply back to the sender in the normal manner.
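The copy-through flow can be sketched as follows. This is an illustrative Python model in which function calls stand in for the blocking SIMPL Send()/Receive()/Reply() exchange; the names are invented.

```python
# Sketch of the relay message flow (not the SIMPL C API).

def receiver(msg):
    # The registered receiver retrieves and processes the relayed
    # message, then replies back through the relay.
    return {"echo": msg["data"]}

def relay(msg):
    # The relay does nothing with the content: it copies the message
    # through to the receiver and copies the reply back to the sender.
    return receiver(msg)

# The sender name-locates the relay and sends as if the relay were the
# intended receiver; it blocks until the copied-back reply arrives.
reply = relay({"data": "hello"})
print(reply["echo"])   # hello
```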
The advantages of this construct over a basic sender/receiver pairing lie in the name hiding that occurs for the receiver. It is also possible to dynamically start and stop receivers in this scheme without having to recommunicate naming information to the various senders in the system. When that occurs, the start-up message exchange (called registration in the diagram above) takes care of notifying the relay task of the new receiver's name information.
The ability to dynamically start and stop processes without cycling the whole application can be a significant advantage, particularly if the receiver logic is undergoing frequent upgrades or enhancements. These can be dynamically rolled in, and if problems occur the original copy can quickly be rolled back into play. In fact, with the registration scheme both receiver processes could be running, and a quick message exchange has the effect of "routing" messages to the new receiver (or back again). With the registration scheme the receiver in question can override an existing registration. While some may view this as a potential security "hole", it is only open to someone with privileges to run a new process on the system. It is relatively easy to build a certificate-style check on top of the registration process to close this hole considerably if that is an issue.
Obviously the relay will incur a performance penalty over the straight message exchange, but in many circumstances the advantages of the construct outweigh the downside. The relay is a powerful SIMPL construct.
The stack IC makes use of the Receive - Relay construct in its upper layers to hold the sender blocked throughout the transaction. More efficient stack algorithms make use of the special form of the Receive call whereby the message is left in the shared memory buffer and manipulated there directly. The last process in the stack does the Reply to unblock the message originator. The sender sees the entire stack as if it were a straight SIMPL receiver. Most often the entire stack resides on a single network node, but this is not a requirement imposed by SIMPL. Since each layer in the stack is a SIMPL process, there is no technical reason that they couldn't be distributed across several network nodes.
To utilize the stack, the sender code needs no special knowledge. It simply composes and sends a message as if it were communicating with a straight sender-receiver pairing. The sender believes that the lead process in the stack is the receiver processing the messages. As such that is the only SIMPL name the sender needs to know.
The code in each layer of the stack must, by definition, be aware of its position in the stack. Each layer is responsible for a particular aspect of the composition of the overall transaction message. That message must flow through the various layers in the correct order. If the message is modified in situ in the shared memory, the code is only marginally different from that associated with a straight SIMPL receiver.
Each of the stack layers is a separate SIMPL process, and because context switching must occur as the message flows through each layer, there will be a performance penalty over combining all the logic into a single receiver. The advantages of modular design usually outweigh such considerations in SIMPL systems where the stack IC would be employed.
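The in-situ manipulation can be sketched as follows. A Python list stands in for the shared memory buffer, and nested closures stand in for the separate layer processes; the layer names are invented.

```python
# Sketch of three stack layers manipulating one message buffer in place.

def make_layer(name, next_layer):
    def layer(buf):
        buf.append(name + "-hdr")   # this layer's contribution, in situ
        if next_layer:
            next_layer(buf)         # hand the same buffer down the stack
        else:
            buf.append("REPLY")     # last layer replies, unblocking sender
    return layer

# Build a three-layer stack; the sender only knows the lead layer's name.
stack = make_layer("session", make_layer("transport", make_layer("link", None)))

buf = ["payload"]                   # the sender's message in shared memory
stack(buf)
print(buf)   # ['payload', 'session-hdr', 'transport-hdr', 'link-hdr', 'REPLY']
```

Because every layer works on the same buffer, no message copying occurs between layers; only the final Reply unblocks the originator.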
To understand the agency one needs to understand the concept of reply blocking. In a normal SIMPL message exchange the receiver is receive blocked. Once the sender sends a message, it is said to be reply blocked. The key to the agency construct is that the receiver does not need to reply right away to that particular sender. It can simply remember the ID and go about its business. In fact, the receiver can "hold" the sender waiting and go back to being receive blocked for a new message. When new information arrives via a message from a second sender, the receiver can choose to reply to the original sender with that information using its previously remembered ID. Another way to look at a reply blocked sender is as a "receiver" who doesn't block his "sender".
To avoid some confusion of semantics, we have adopted the naming convention for the agency processes as per the diagram above. The requestor is simply another name for the normal sender. As far as this process is concerned, the intended receiver for the message is the agency itself. The agency process, however, is completely neutral to the actual message content. It simply acts as a "store and forward" for the requestors' messages. It is important to note that with the basic SIMPL package all "sender" type processes place their message in a block of shared memory which they own and control. The actual message does not need to be copied out of the sender's buffer by the receiver but can be read directly by linking to the shared memory area. The agency construct takes advantage of this fact. When the requestor sends a message to the agency, the agency does not copy the message anywhere. It simply notes the ID of the requestor and does one of two things:
To the requestor it all went exactly as if it had been sending any SIMPL message to a basic receiver. In fact there is no difference in the requestor code for dealing with agencies. Why then go to all this trouble?
First of all, it is now possible to dynamically start and stop the agent process in this system without affecting the requestor (other than delaying responses to a request that arrived while the agent was being cycled). In systems where the agent might be undergoing significant revisions or upgrades, this might be a distinct advantage.
Secondly, the requestor in this system does not need to know the name of the agent in order to exchange messages with it. The agency construct can be viewed as a message gateway.
To understand the further advantages we need to examine the case where multiple requestors all talk to the same agency and agent. In this scenario the agency actually receives all the requestors' messages and queues their originators' IDs. The agency logic can then control the order in which these messages are dispatched to the agent. In a normal sender/receiver pairing the fifo imposes a first-in, first-out ordering and it is not possible to have a higher priority message jump ahead in the queue. In the agency scheme this is very possible.
In addition, in the normal SIMPL sender/receiver pairing the messaging is synchronous. It is intentionally difficult to kick a sender out of a reply blocked state other than by having the receiver do a reply. This means things like timeouts or "aged data" are difficult to handle. The agency scheme makes these things relatively easy to manage. While messages are pending in the agency queue the agency can be kicked into examining these periodically for timeouts or aging.
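The store-and-forward queue with priority jumping can be sketched as follows. In real SIMPL the requestors stay reply blocked and only their IDs are remembered; here tuples on a Python heap model that. All names, priorities, and the agent's processing are invented for illustration.

```python
import heapq

# Sketch of the agency's store-and-forward dispatch (not the SIMPL C API).

class Agency:
    def __init__(self):
        self.pending = []   # heap of (priority, arrival, requestor_id, msg)
        self.arrival = 0    # tie-breaker preserving fifo order per priority

    def send(self, requestor_id, msg, priority=10):
        # The requestor is now "held" reply blocked; the agency only notes
        # its ID and where its message lives.
        heapq.heappush(self.pending, (priority, self.arrival, requestor_id, msg))
        self.arrival += 1

    def dispatch(self, agent):
        # Forward the highest-priority pending message to the agent, then
        # reply to the remembered requestor with the agent's answer.
        _, _, requestor_id, msg = heapq.heappop(self.pending)
        return requestor_id, agent(msg)

agency = Agency()
agency.send("requestor1", "routine work")
agency.send("requestor2", "urgent work", priority=1)  # jumps the queue
who, reply = agency.dispatch(lambda m: m.upper())
print(who, reply)   # requestor2 URGENT WORK
```

A periodic sweep over `pending` could likewise drop entries whose timestamps have aged out, which is exactly the timeout handling described above.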
The agency construct will suffer a performance penalty when compared
against a basic sender/receiver pair because at least 2 extra messages
need to be exchanged in each transaction. The agency construct, however,
is a powerful one and can be used to great advantage in certain designs.
A typical example would involve a user interface process. Typically user interfaces, be they simple text-based interactions or GUIs, want to be receiver-type processes. It is rare that you would want the user interface to block on a send. Very often in these designs the user interface (UI) requires information from another receiver process. If you coded a blocking send into the UI, you could potentially create a place in its operation where the interface would "freeze" while the request was being serviced. This may not be the desired behavior.
The courier construct takes advantage of the delayed reply concept illustrated in the agency construct above. In our discussion we will assume that the UI process is "receiver1" and the recipient process is "receiver2". When the courier process is started, the first thing it does is locate the UI process it is designated to service. Once located, the courier sends a registration-type message to that process indicating that it is ready for action. The UI process simply notes that the courier is available and does not reply, thereby leaving the courier reply blocked. At the point in the UI where the asynchronous request to the receiver2 process needs to be made, a message is composed and sent (replied) via the courier. The courier is now unblocked and proceeds to locate and forward the message to the receiver2 process using a blocking send. At this point the courier is reply blocked on receiver2 and the UI is completely free to do other things as permitted by its logic. When receiver2 replies to the courier, the courier simply forwards that reply on to the UI process using a blocking send and once again becomes reply blocked on the UI. The UI receives this message in the normal manner, notes that it came via the courier, marks the courier as once again available, and processes the message in accordance with its coded logic.
The simple courier described above is a single-request version. If a second UI request intended for the receiver2 process is generated within the UI before the courier returns its first response, that request will be refused, citing a "busy courier". A simple enhancement to this single-request logic is to give the UI a single-message queuing capability. The "busy courier" response would then only come if a third UI request were attempted before the original response was received. In most UI processes this single-message queue is more than adequate. A larger queue-depth algorithm could readily be constructed, but the need for one is often indicative of a poor UI design elsewhere.
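One courier round trip, including the single-request "busy courier" check, can be sketched as follows. Class and method names are invented, and synchronous calls stand in for the blocking sends between what are really separate SIMPL processes.

```python
# Sketch of the single-request courier (not the SIMPL C API).

class UI:
    def __init__(self):
        self.courier_busy = False
        self.responses = []

    def request_via_courier(self, courier, msg):
        if self.courier_busy:
            return "busy courier"      # single-request version refuses
        self.courier_busy = True
        courier.carry(self, msg)       # the reply that unblocks the courier
        return "sent"

    def deliver(self, reply):
        # Models the courier's blocking send back into the UI receive loop.
        self.courier_busy = False
        self.responses.append(reply)

class Courier:
    def __init__(self, receiver2):
        self.receiver2 = receiver2     # the recipient process

    def carry(self, ui, msg):
        reply = self.receiver2(msg)    # courier is reply blocked here
        ui.deliver(reply)              # then reply blocks on the UI again

ui = UI()
courier = Courier(lambda m: m[::-1])   # receiver2 just reverses the text
print(ui.request_via_courier(courier, "ping"), ui.responses)
```

In the real construct the UI never waits inside `carry`; the courier's blocking happens in its own process, which is what keeps the UI free.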
Another variation on the courier model is to have a parent process fork the couriers on demand. In some cases this capability is more desirable than having the courier prestarted along with the GUI process. The web applet type GUI applications are examples where this courier spawning technique is desirable.
Especially in user interface designs, the courier construct is a very
useful SIMPL building block indeed.
The broadcaster actually consists of two parts: a receiver part and a sender part. We call the sender part the broadcaster. The receiver part is typically a message queue as we shall see shortly. It works in the following manner. The queue looks after message queuing and sequencing. The broadcaster maintains a list of processes to send to.
A typical sequence may start as follows. A receiver (say receiver1) decides that it wishes to receive broadcast messages. As part of that sequence it sends a registration-type message to the broadcaster's queue process. The queue then places a REGISTRATION type message onto its internal queue. Meanwhile, the broadcaster returns from one of its broadcast sequences by sending a message down to its queue process asking whether there are any new messages queued. In this example the REGISTRATION message for receiver1 is delivered as a reply to the broadcaster. When the broadcaster process detects that the message is a new REGISTRATION, it does a nameLocate on the recipient (receiver1 in this example) and stores the ID in its internal broadcast list. It sends a confirmation message back to the queue process, which then proceeds to reply and unblock the original receiver (receiver1 - who was temporarily a sender). If there were no more messages on the internal queue, the broadcaster would simply be left reply blocked at this stage. At this point the sender may send a message to the broadcaster's queue process that is intended for broadcast. Typically the queue would queue the message and reply immediately to the sender, but one could do a blocking send scheme similar to that of the registration process. If the queue detects that the broadcaster is reply blocked, it immediately forwards the message via a reply to the broadcaster. Once the broadcaster gets the message it notes that this is not a registration and therefore is a message to be sent to all the registered recipients in its broadcast list. Once this series of sends is complete, the broadcaster sends back to the queue for the next message and the process repeats.
When a recipient wishes to cancel its registration with the broadcaster, it simply repeats the registration process with a DEREGISTER message to the queue. It is typical that the queue would simply queue and acknowledge this request.
If a recipient "forgets" to deregister and simply vanishes, the next broadcast attempt will detect that condition and the broadcaster will proceed to remove that ID from its internal broadcast list.
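The broadcaster half can be sketched as follows. The queue process is omitted, Python callbacks stand in for SIMPL sends, and a raised exception models a vanished recipient; the REGISTRATION/DEREGISTER tokens follow the text but the rest is invented.

```python
# Sketch of the broadcaster: a registration list plus a fan-out loop.

class Broadcaster:
    def __init__(self):
        self.recipients = {}             # name -> deliver callback

    def handle(self, msg):
        kind, payload = msg
        if kind == "REGISTRATION":
            name, deliver = payload
            self.recipients[name] = deliver
        elif kind == "DEREGISTER":
            self.recipients.pop(payload, None)
        else:                            # ordinary message: broadcast it
            vanished = []
            for name, deliver in self.recipients.items():
                try:
                    deliver(payload)
                except Exception:        # recipient "forgot" to deregister
                    vanished.append(name)
            for name in vanished:        # prune dead IDs from the list
                del self.recipients[name]

inbox1, inbox2 = [], []
b = Broadcaster()
b.handle(("REGISTRATION", ("receiver1", inbox1.append)))
b.handle(("REGISTRATION", ("receiver2", inbox2.append)))
b.handle(("DATA", "sync me"))
b.handle(("DEREGISTER", "receiver2"))
b.handle(("DATA", "again"))
print(inbox1, inbox2)   # ['sync me', 'again'] ['sync me']
```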
The broadcaster construct is a very powerful SIMPL tool. A typical example
of its use would be to synchronize multiple instances of a GUI applet with
the same information.
In the figure above we are showing the emitter connected to a queuing receiver process. While this is not strictly necessary, you will want to take care that all Send()s from the polling emitter are blocked for the minimum amount of time.
A typical sequence may start as follows. The queue and the emitter are started up. The emitter name_locate()s the queue and Send()s an initialization message indicating to the queue where the shared memory block is. The queue process will Reply() back to unblock the emitter and establish a connection to the emitter's shared memory.
This shared memory could contain any structured data which the queue and the emitter agree upon, e.g. a table of serial devices which are to be polled. We are going to restrict this example to the simplest of configurations, where the shared memory contains a single "call home" flag. The queue process will be the changer of this flag and the emitter process will only read it.
Inside the emitter there is a loop which endlessly cycles around all the hardware it is supposed to be polling. In the sample code we are going to open a file and then loop around at a 1 sec interval checking the status of that file. If a change has been made, we will reread the file and send its contents on to the queue. On each pass through that loop the emitter checks the flag located in the shared memory area. If that flag is set, we interrupt the polling and Send() a CALL_HOME tokenized message to the queue process. This gives the queue process an opportunity to stuff something into the Reply() and then clear the flag.
In this manner we can demonstrate the polling emitter looping around, checking on the simulated hardware (a file) and emitting any changes it observes in that simulated hardware. We can interrupt the polling loop at any time and have the emitter call home to receive some data back from the queue. Presumably this would be a message destined for the hardware that the emitter is polling.
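The loop above can be sketched as follows. Canned readings stand in for the polled file, a dict stands in for the shared memory flag, and a function stands in for the Send() to the queue; CALL_HOME follows the text, but the DATA token and everything else is invented.

```python
# Sketch of the polling emitter's inner loop (not the SIMPL C API).

readings = ["a", "a", "b", "b"]     # simulated hardware states, per tick
shared = {"call_home": False}       # the single shared memory flag
events = []                         # what the queue process receives
i = 0

def read_hardware():
    global i
    value = readings[i]
    i += 1
    if i == 3:                      # simulate the queue raising the flag
        shared["call_home"] = True  # mid-way through the polling run
    return value

def queue_send(msg):
    events.append(msg)
    return "downlink data"          # data the queue stuffs into its Reply()

def emitter_loop(ticks):
    last = None
    for _ in range(ticks):
        if shared["call_home"]:     # interrupt the poll and call home
            queue_send(("CALL_HOME", None))
            shared["call_home"] = False
        value = read_hardware()
        if value != last:           # emit only observed changes
            queue_send(("DATA", value))
            last = value

emitter_loop(4)
print(events)   # [('DATA', 'a'), ('DATA', 'b'), ('CALL_HOME', None)]
```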
You could accomplish this quite readily by adopting a derivative of
the agency software IC described above.
In the figure above we are showing the scheduler as an agency-type receiver. We are showing the simplest configuration, where the scheduler agency is connected to a single agent process. The agent is Reply() driven from the scheduler queue.
The requestor is a test stub which deposits the message + scheduling info into the queue.
The viewer is a console viewer of the messages currently queued in the scheduler. Although this is illustrated with a console process it could very well be a SIMPL enabled GUI process as well.
The main looping sequence in the scheduler is kicked off by the onboard timer. Each tick of that timer spawns the following activities:
The message to be delivered to the agent is treated by the scheduler as a package of bytes, to be queued until the time comes to forward it on to the agent. The agent in this scheme only sees this package of bytes, with all the scheduling info stripped away before it is forwarded.
Obviously, in the interest of simplicity this scheduler lacks some desired features that could readily be added:
The proxy IC is designed to do just that.
The proxy IC makes use of intimate knowledge of the workings of SIMPL fifos to achieve a transparent relaying of messages from a sender to one of the prestarted receivers.
In the figure above we are showing the proxy as a special SIMPL receiver. In addition to the regular SIMPL receive fifo the proxy has a second special fifo to which the prestarted receiver processes connect.
The sender name_locate()s the proxy, composes a message, and Send()s it in the normal manner. The sender has no knowledge of the prestarted receivers behind the proxy.
The proxy uses a select() call to multiplex on both fifos. In the event of traffic on the SIMPL fifo the proxy doesn't do a SIMPL Receive() but instead reads the SIMPL fifo directly. The 32 bit word on that SIMPL fifo will be the shared memory ID for the patch of memory containing the actual message. The relay is done by simply copying this shared memory ID over to the SIMPL receive fifo of one of the prestarted receivers. As such, that prestarted receiver sees it as a normal message originating directly from the sender and processes it accordingly, i.e. the Reply() goes directly back, and the sender remains blocked until that Reply() message is received.
To achieve a simple queuing of prestarted receivers, a special fifo owned by the proxy is used. The prestarted receivers place an ID on this fifo to indicate that they are available to process transactions. In the example code the receiverID is derived from the SIMPL name. eg. RECV_01 has the ID=1. In this manner the proxy can easily recompute the SIMPL name required for the name_locate() call to open the file descriptor to the prestarted receivers' SIMPL fifos.
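The dispatch step can be sketched as follows. Python deques model the three kinds of fifos, and plain ints model the 32-bit shared memory IDs; receiver IDs 1 and 2 correspond to names like RECV_01 and RECV_02 as described above, and the specific ID values are invented.

```python
from collections import deque

# Sketch of the proxy's dispatch step (not the real fifo plumbing).

sender_fifo = deque()              # shm IDs written by blocked senders
avail_fifo = deque()               # IDs of receivers announcing "free"
receiver_fifos = {1: deque(), 2: deque()}   # each receiver's own fifo

def proxy_dispatch():
    # Pair each waiting message with the next available prestarted
    # receiver by copying the shm ID onto that receiver's own fifo.
    while sender_fifo and avail_fifo:
        shm_id = sender_fifo.popleft()
        rid = avail_fifo.popleft()
        receiver_fifos[rid].append(shm_id)

avail_fifo.extend([1, 2])          # both receivers report themselves free
sender_fifo.extend([0x1000, 0x1001, 0x1002])
proxy_dispatch()
print(dict(receiver_fifos), list(sender_fifo))
```

The third message stays queued on the proxy's fifo until a receiver replies to its current sender and pushes its ID back onto the availability fifo.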
State machines are by definition very customized to the problem they
are trying to represent. As such it is very
difficult to build a general purpose state machine that works for many classes of problems.
Instead, this software IC takes the approach of a source code framework.
The SM_common directory contains the basic state machine infrastructure and a definition of an API to that infrastructure.
The SM_door subdirectory contains the specific state machine logic and implementations of the state machine API for a very simple 4 state door.
The idea is that for another type of system the SM_common stays and the SM_whatever is created which results in a new type of executable. In this manner the SM_common code can be shared across several different executables.
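The split between shared infrastructure and system-specific logic can be sketched as follows. This is an illustrative Python analogue of the SM_common/SM_door division, not the actual C framework; the door's state and event names are invented, loosely following the 4-state door example.

```python
# Sketch of a generic table-driven engine (the SM_common role) plus
# door-specific transitions (the SM_door role).

class StateMachine:
    """Generic, reusable part: knows nothing about doors."""
    def __init__(self, table, start):
        self.table = table
        self.state = start

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.table.get((self.state, event), self.state)
        return self.state

# Application-specific part: a simple 4-state door.
door_table = {
    ("CLOSED",  "open"):   "OPENING",
    ("OPENING", "opened"): "OPEN",
    ("OPEN",    "close"):  "CLOSING",
    ("CLOSING", "closed"): "CLOSED",
}

door = StateMachine(door_table, "CLOSED")
for event in ("open", "opened", "close", "closed"):
    door.handle(event)
print(door.state)   # CLOSED
```

Swapping `door_table` for another system's table reuses the engine unchanged, which is exactly the SM_common/SM_whatever arrangement described above.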
The code in this demo is represented by two executables illustrated below:
The main statemachine is primarily a SIMPL receiver, but occasionally a SIMPL sender.
On the Receive() port this statemachine can accommodate two classes of messages:
The eventStim is also configured to Receive() any ALARM messages that the statemachine issues, which in the case of our simple door is associated with the door being held open beyond a specified time window.
The statemachine logic can accommodate many different doors in its datastore.
The datastore itself is fed from a simple tag/value paired text file. To make this datastore more embedded-friendly we have employed the concept of a single block of dynamically allocated memory which is then subdivided into memory pools.
This project is being coordinated by FC software Inc.