On the surface, the Process Call step allows you to execute or pass data to another Process as part of your integration workflow. However, upon closer look, this step enables you to incorporate some sophisticated design strategies into your integrations, including:
- Creating repeatable process logic
- Sequencing process executions
- Grouping documents from multiple sources
- Unit testing
Keep reading to learn how the Process Call step can help streamline your integration development.
Note: The Process Call step is only available in Professional and Enterprise Editions as part of Advanced Workflow.
Before diving right into the use cases, a quick refresher on three important concepts to understand when working with Process Call steps and sub-Processes:
- “Wait for process to complete?” - This option on the Process Call step determines whether the sub-Process is executed “synchronously” or “asynchronously”. In other words, should the calling Process wait for the sub-Process to complete before continuing to its next step, or simply kick off the sub-Process and immediately continue?
- The sub-Process’ Start Step - If the sub-Process’ Start Step is configured as “Data Passthrough”, the set of Documents from the calling Process will be passed directly into the sub-Process to continue executing. In this case, the sub-Process essentially functions as a continuation of the calling Process. However, if the Start Step is configured as “Connector” or one of the other options, the sub-Process will retrieve a new set of Documents. Also, if not configured as “Data Passthrough”, the sub-Process will execute once for each Document in the calling Process that reaches the Process Call step. This means if ten Documents reach the Process Call step, the sub-Process will execute ten times.
- Return Documents step - This step typically goes hand-in-hand with the Process Call step. It passes Documents from the sub-Process back to the calling Process to continue executing. One very important nuance to its behavior is that Documents are returned to the calling Process only after the sub-Process has completed. This means that Documents that reach a Return Documents step early in the sub-Process will wait there until the rest of the sub-Process completes, then they will be passed back to the calling Process.
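These three behaviors can be modeled in a short conceptual sketch. This is plain Python, not Boomi's APIs; the function and its parameters are purely illustrative of the semantics described above:

```python
# Conceptual model (NOT actual Boomi APIs) of Process Call semantics:
# passthrough vs. per-Document execution, and Return Documents gating.

def run_subprocess(documents, start_step="passthrough", wait=True):
    """Model how a Process Call hands Documents to a sub-Process."""
    if start_step == "passthrough":
        # One execution; the calling Process's Documents flow straight in.
        executions = [documents]
    else:
        # Non-passthrough: the sub-Process runs once PER inbound Document,
        # each run retrieving its own new set of Documents.
        executions = [["fetched-for-" + d] for d in documents]

    returned = []
    for docs in executions:
        # Return Documents semantics: Documents are handed back only
        # after the whole sub-Process execution completes.
        returned.extend(docs)
    # Asynchronous call ("wait" unchecked): nothing comes back to wait on.
    return returned if wait else []

print(run_subprocess(["a", "b"], start_step="passthrough"))  # ['a', 'b']
print(run_subprocess(["a", "b"], start_step="connector"))    # one run per Document
```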
Now armed with that understanding, let’s look at some of the clever things you can do with the Process Call step.
1. Create Repeatable Process Logic
The AtomSphere development environment encourages reuse through its modular, component-based architecture (think Connections, Profiles, Map Functions, etc.). The same can apply to Process steps as well. If you find yourself adding the same series of steps to Processes over and over, consider putting those common steps in a sub-Process and then use the Process Call step to reference those steps wherever needed. Make sure the sub-Process’ Start Step is configured as “Data Passthrough” to pass Documents from the calling Process into the sub-Process.
Looking for opportunities to create reusable sub-Processes, even if only a few steps, can simplify maintenance, improve readability and organization, and enable more fine-grained unit testing. Some examples of situations that often lend themselves to common sub-Processes include:
- Connector response handling - Check for success/fail response codes and route accordingly.
- Custom archiving, logging, or notifications - Standardize file names, log messages, and alert formats.
- Working with a standard or “canonical” format - Individual Processes transform disparate data to a common format, then pass it to a common sub-Process to continue execution.
Here’s an example of a simple Connector response handling sub-Process that interrogates the application response to determine success or failure, generates a consistent alert notification, and returns the Document to the calling Process accordingly.
Tip: When possible make your sub-Processes even more reusable by configuring the steps to inspect User Defined Document Properties instead of Profile Elements. This allows the sub-Process to be called from Processes with different object types or even data formats.
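To make the tip concrete, here is a hypothetical sketch (plain Python, not Boomi steps) of a response handler that keys off a user-defined document property named `status_code` rather than a profile element, so the same logic works regardless of object type or data format:

```python
# Hypothetical sketch of a reusable response-handling sub-Process.
# The property names ("status_code", "result", "alert") are illustrative.

def handle_response(document):
    """Route a Document on its 'status_code' property and build an alert."""
    status = document["properties"].get("status_code", "")
    if status.startswith("2"):
        document["properties"]["result"] = "SUCCESS"
    else:
        document["properties"]["result"] = "FAILURE"
        # Standardized alert text, independent of the payload's format.
        document["properties"]["alert"] = (
            f"Call failed with status {status or 'unknown'}"
        )
    return document  # handed back via Return Documents

# Works for XML and JSON payloads alike, because only properties are read:
ok = handle_response({"payload": "<xml/>", "properties": {"status_code": "200"}})
bad = handle_response({"payload": "{}", "properties": {"status_code": "500"}})
```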
2. Sequence Process Executions
If you have a series of Processes that must run in a specific order, you can use a “master” Process to execute each one sequentially. In the Process Call step configuration, check “Wait for process to complete” so each Process waits until the previous one completes before starting. Use the “Abort if process fails” option to control whether the next Process should still execute if the previous one failed.
Then you only need to set one schedule for the “master” Process and each sub-Process will execute as soon as it can. This is often easier than trying to stagger schedules, especially for Processes whose execution times may vary.
Deployment Note: Technically you only need to deploy the “master” Process; however, if you want to be able to retry Documents you must deploy each sub-Process as well.
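The “master” Process pattern can be sketched conceptually (again, plain Python rather than Boomi configuration; the process names are made up):

```python
# Conceptual sketch of a "master" Process: run sub-Processes in order,
# waiting for each one, optionally aborting the chain on failure.

def run_master(processes, abort_if_fails=True):
    """Each item is a (name, callable) pair; the callable returns True on success."""
    results = []
    for name, proc in processes:
        ok = proc()  # "Wait for process to complete": run synchronously
        results.append((name, ok))
        if not ok and abort_if_fails:
            break  # "Abort if process fails": skip the remaining Processes
    return results

# Hypothetical nightly sequence where the middle step fails:
steps = [("extract", lambda: True), ("transform", lambda: False), ("load", lambda: True)]
print(run_master(steps))                        # stops after 'transform'
print(run_master(steps, abort_if_fails=False))  # runs all three regardless
```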
3. Group Documents from Multiple Sources
Sub-Processes can be used to group Documents from multiple sources together so you can operate on them as a single group later in the Process, for example to combine them, split them, or take advantage of Connector batching. This use case is probably best illustrated with an example.
Let’s say you need to extract both direct and indirect customer records from some web application and create a single CSV file. You need to query for each type of customer; however, the application’s API does not allow you to use a compound filter such as “WHERE customerType=‘DIRECT’ OR customerType=‘INDIRECT’”. This means you will need to extract each type with a separate query call. The Process might look something like this:
The problem here is that two different files will be created because each Branch path will execute to completion, meaning the “DIRECT” customers retrieved on Branch 1 will be mapped, combined (Data Process step), and sent to the FTP server before the “INDIRECT” customers are retrieved on Branch 2. So how can you get both groups of Documents together so they can be combined into a single file?
Return Documents step to the rescue! By moving the Connector calls into a sub-Process, you can rely on the fact that the Return Documents step will wait until the Process is complete (remember the three concepts above?) to return a single group of Documents to the calling Process. The single group of Documents will then continue down the calling Process together and can be mapped and combined into one CSV file. The calling Process will now look like this:
...and the new sub-Process will look like this:
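The grouping behavior can be sketched conceptually as well. This is illustrative Python, not Boomi configuration, and the query results are made-up sample data:

```python
# Sketch of why the sub-Process groups Documents: each Branch path's query
# runs to completion, but Return Documents releases everything only after
# the whole sub-Process ends, so the caller receives one combined batch.

def query(customer_type):
    # Stand-in for a Connector query call (hypothetical sample data).
    return [f"{customer_type}-cust-{i}" for i in range(2)]

def fetch_customers_subprocess():
    returned = []
    for branch_filter in ("DIRECT", "INDIRECT"):  # the two Branch paths
        returned.extend(query(branch_filter))     # each reaches Return Documents
    return returned  # released together once the sub-Process completes

batch = fetch_customers_subprocess()
csv_file = "\n".join(batch)  # one combined CSV instead of two separate files
```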
4. Unit Testing
Process Call steps can also help with unit testing by separating the source of inbound data from the rest of the Process workflow. For example, you can have one Process that performs the “real” Connector call to retrieve data from the intended source and another Process that retrieves test data staged somewhere such as a local disk folder (or maybe even contained directly in the Process using a Message step). Then each of these simple Processes can pass the retrieved Documents to the same “main” Process containing the bulk of the workflow steps.
This is especially useful for testing integrations that use event-based Connectors that cannot be run in Test Mode such as Web Services Server and AS2.
Below is an example of two Processes, one that actually listens for incoming data via the Web Services Server and one that retrieves staged test data from a local Disk location. Both then pass the Documents to the common “Create Order” sub-Process that performs the rest of the integration. The “test” Process can be run in Test Mode to assist during development before deploying the actual listening Process for production use.
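The separation of data acquisition from the main workflow can be sketched like so (conceptual Python only; the process and document names are invented for illustration):

```python
# Sketch of swappable data sources feeding one shared workflow: a "real"
# listener Process and a "test" Process both call the same sub-Process.

def create_order(documents):
    """Stand-in for the shared 'Create Order' sub-Process workflow."""
    return [f"order created from {d}" for d in documents]

def real_listener_process(incoming):
    # In production: the Web Services Server hands us the request payload.
    return create_order(incoming)

def test_process():
    # In Test Mode: read staged sample data instead (hypothetical sample).
    staged = ["sample-order-001"]
    return create_order(staged)

print(test_process())  # exercises the shared workflow without a live listener
```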