Why would you want to use the SIMPL paradigm?


SIMPL-enabled processes are analogous to integrated circuits (ICs) in hardware designs. As with their hardware counterparts, software engineers needn't be concerned with the details of the internal workings of a SIMPL process in order to use it; they need only be concerned with its messaging interface. Continuing with the hardware analogy, the SIMPL library would then be the copper traces on the printed circuit board connecting the ICs together. In other words, the SIMPL library enables SIMPL processes to be connected to each other. From these basic building blocks one can begin to construct a set of SIMPL software ICs that can be used in application design.
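
To make the analogy concrete, here is a minimal sketch of two SIMPL processes wired together by the library. It assumes SIMPL's QNX-style name_attach()/name_locate()/Send()/Receive()/Reply() calls; the exact prototypes vary by release, so treat the signatures (and the invented names "EXAM_ENGINE" and "NEXT_QUESTION") as illustrative and check the SIMPL headers in your installation.

    /* receiver.c -- a minimal SIMPL "software IC" (sketch; SIMPL prototypes assumed) */
    #include <stdio.h>
    #include "simpl.h"

    int main(void)
    {
        char buf[1024];
        char *sender;                                  /* opaque sender id filled in by Receive() */

        if (name_attach("EXAM_ENGINE", NULL) == -1)    /* register this process by name */
            return 1;

        for (;;)
        {
            int n = Receive(&sender, buf, sizeof(buf));  /* block until a message arrives */
            if (n < 0)
                break;
            /* ... act on the message, then unblock the sender ... */
            Reply(sender, buf, n);
        }
        return 0;
    }

    /* sender.c -- any process that knows the name "EXAM_ENGINE" can use the IC above */
    #include "simpl.h"

    int main(void)
    {
        char out[64] = "NEXT_QUESTION", in[1024];
        int id = name_locate("EXAM_ENGINE");             /* find the receiver by name */

        if (id != -1)
            Send(id, out, in, sizeof(out), sizeof(in));  /* blocks until the receiver Reply()s */
        return 0;
    }

The receiver neither knows nor cares what sent the message; any process that conforms to the messaging interface can take the sender's place.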

Many of the same reasons for using ICs in hardware designs apply equally well to using the SIMPL paradigm for software design. These include:


Let's examine this by way of a simple example. Suppose the software problem at hand is an application to administer a multiple choice exam like the ones many of us remember as students. Let's imagine that the specification calls for the exam questions to be formatted in a simple text data file and that the exam interface is a simple text interface such as:
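
For instance, the on-screen exchange might look something like the following (a purely hypothetical sample; the question wording and prompt format are invented for illustration):

    Question 3 of 20
    Which of the following is a synchronous message pass?
      a) signal
      b) pipe
      c) Send/Receive/Reply
      d) shared memory
    Your answer [a-d]: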

If you were not using the SIMPL paradigm to design the solution, it might be expedient to come up with the monolithic design illustrated below:

If you were following good design practice, this program might be nicely decomposed into separate C functions, each with its own simple purpose. The distinguishing features of this design would be:

The corresponding SIMPL design might look like:

On the surface, the SIMPL design might look like "overkill" for the application at hand. It may appear that we have introduced complexity by taking a simple single-process design and converting it into 5 cooperating SIMPL processes. Let's examine this more closely.

Firstly, the two designs probably do not differ significantly in the amount of code written. If we had done our job correctly in the functional decomposition of the problem, much of the code would exist verbatim in both designs.

Secondly, we would have grouped functions of like purpose into one of these 5 containers and encapsulated them.

Thirdly, we would have defined a messaging API (probably using a tokenized message scheme) to describe the interactions among these 5 processes. This messaging API is far more flexible and extendable than an API based on function parameters can ever be.
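
As a rough sketch of what such a tokenized message scheme might look like, consider the hypothetical header below, shared by all 5 processes. The token names and payload layout are invented for illustration; they are not part of SIMPL itself.

    /* exam_msg.h -- hypothetical tokenized message layout (illustration only) */
    #ifndef EXAM_MSG_H
    #define EXAM_MSG_H

    typedef enum
    {
        EXAM_GET_QUESTION = 1,   /* UI -> engine: send me question n       */
        EXAM_QUESTION,           /* engine -> UI: here is question n       */
        EXAM_SUBMIT_ANSWER,      /* UI -> engine: candidate chose answer c */
        EXAM_SCORE               /* engine -> UI: running score            */
    } ExamToken;

    typedef struct
    {
        ExamToken token;         /* every message starts with a token ...  */
        union                    /* ... followed by a token-specific body  */
        {
            struct { int number; } getQuestion;
            struct { int number; char text[256]; char choices[4][80]; } question;
            struct { int number; char choice; } answer;
            struct { int right; int wrong; } score;
        } body;
    } ExamMsg;

    #endif

Extending the application later is then a matter of adding a new token and payload; existing tokens, and the processes that speak them, are left untouched.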

When this SIMPL code was being written, we would also have written a corresponding set of stimulators and simulators for each of the 5 processes. Since our messaging API is the only way these processes can interact with each other, we can take full advantage of the fact that the individual processes have no way of discovering the true identity of the process at the other end of the exchange ... any more than a given IC on a circuit board can know what generated its input signal. A stimulator would be a test stub "sender" process designed to unit test a given "receiver". A simulator would be a test stub "receiver" process designed to allow real "sender" processes to be unit tested. One might argue that this is extra code that needs to be written under the SIMPL paradigm with no counterpart in the monolithic design. Not true ... a good monolithic designer would have a suite of function stubs, special files, special conditional compiles and so on to achieve the same unit test objectives.
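
A simulator for the exam engine could be as small as the sketch below: a throwaway receiver that impersonates the real engine so that the user interface process can be exercised in isolation. It reuses the hypothetical names and SIMPL calls from the earlier sketches, with the same caveats about exact prototypes.

    /* sim_engine.c -- test stub "receiver" standing in for the real exam engine */
    #include <string.h>
    #include "simpl.h"
    #include "exam_msg.h"

    int main(void)
    {
        ExamMsg msg;
        char *sender;

        if (name_attach("EXAM_ENGINE", NULL) == -1)   /* impersonate the real engine */
            return 1;

        for (;;)
        {
            if (Receive(&sender, &msg, sizeof(msg)) < 0)
                break;

            if (msg.token == EXAM_GET_QUESTION)       /* always reply with one canned question */
            {
                msg.token = EXAM_QUESTION;
                msg.body.question.number = 1;
                strcpy(msg.body.question.text, "Canned test question?");
            }
            Reply(sender, &msg, sizeof(msg));
        }
        return 0;
    }

A matching stimulator would simply be a small sender, like the one sketched earlier, driven from a scripted list of messages.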

The real importance of the SIMPL paradigm comes to the forefront when, as inevitably happens with any healthy software project, the requirements are enhanced. Imagine in this case that the original requirement for a standalone machine is extended to call for a Web browser applet interface running over a LAN. Here the SIMPL design would win hands down in the "ease of extendability" department. As far as distribution across the network is concerned, the deployment verges on "trivial": it would merely be a matter of redeploying exactly the same binary executables on multiple boxes. In fact, the 5 processes themselves could be separated onto separate boxes without having to rewrite or recompile any of the code. As far as adding the applet functionality goes, it would simply be a matter of creating a new user interface module, as a web applet, that conformed to the messaging API already used by the current text interface. Furthermore, all the unit test stimulators for testing the new module would already exist, and one could rigorously qualify the new interface module without touching or worrying about any of the other 4 modules in the application. When it came time to deploy the new interface, you could be very confident that as long as it responded in the same manner to the existing message API it would "plug and play" with the rest of your application.

Contrast this with what would have happened with your monolithic design. You would be faced with a complete rewrite of the user interface portion of the code, plus the addition of an entirely new server interface to handle the remote applet calls. Even if you could accomplish that and still preserve the previous function API, you would be faced with a substantially more difficult regression test process to certify your new architecture. And we don't even want to broach the possibility of running both the old interface and the new interface simultaneously.

It should be obvious from this very simple application example that the SIMPL paradigm is a very elegant way to design extendable software. It is one of those best-kept secrets that QNX developers worldwide have known for years. Now the secret is out.


