Testing Strategies

Document created by Adam Arrowsmith on Nov 19, 2015. Last modified Jul 12, 2016.



When planning test cases, it is important to keep in mind the objective of the particular test, which may be to verify the configuration and logic within the process, connectivity to endpoints, expected interaction with an external application, expected output, system load, etc. Different test cases are needed to test different aspects of the integrations. Also keep in mind that the objective should be to test your specific integration requirements and configuration, not the underlying AtomSphere platform functionality, because the latter is verified by Dell Boomi.


Designing Testable Processes

To facilitate repeatable testing, use a modular, subprocess-oriented design that encapsulates integration process logic into isolated units of work that can be tested and verified independently. This allows you to test the core orchestration logic without concerning yourself with variables such as connectivity and external data/application state, which is especially useful for web service or other listener processes. By encapsulating the core integration logic in a subprocess, it can be invoked directly without needing to go through the web server or other listener framework.


A common conceptual breakdown of the logical pieces of a process might be:



Below is a conceptual example of a web service listener process that invokes a subprocess to perform the core processing logic. By encapsulating the core logic in a subprocess, it can be invoked directly without having to submit an actual web service request through the Atom web server.




Creating Test Harness Processes

A “test harness” can be created by developing a simple integration process that invokes the subprocesses or other components to be tested. The test harness process performs the following tasks:

  1. Obtain or generate known test data and set state accordingly. This may involve retrieving static test data files via a connector or embedding them within Message steps in the test harness process itself. Any process or document properties that the subprocess expects should be initialized in the test harness process.

  2. Invoke the unit of work to be tested, i.e., call the subprocess with the test data.

  3. Confirm the results against a known set of values. This typically involves comparing the literal results (i.e., Return Documents) of the subprocess, along with any process or document properties that may have been set or altered by it, against the expected values. The comparison or “assertion” can be performed by a Decision or Business Rules step. “Negative” tests may result in no data being returned or an exception being thrown, so these outcomes should be incorporated into the test harness design.
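The three harness tasks above can be sketched as a plain script outside AtomSphere. Here is a minimal Python analogy, where `core_logic` stands in for the subprocess under test and all names and fields are hypothetical:

```python
def core_logic(document, properties):
    """Stand-in for the subprocess under test: uppercases a name field
    and records an audit property. Purely illustrative."""
    if "name" not in document:
        raise ValueError("missing required field: name")
    properties["audit.lastRun"] = "harness"
    return {"name": document["name"].upper()}

def run_harness():
    # 1. Obtain/generate known test data and initialize expected properties.
    test_doc = {"name": "acme"}
    properties = {}

    # 2. Invoke the unit of work to be tested.
    result = core_logic(test_doc, properties)

    # 3. Assert the returned document and any properties set by the call.
    assert result == {"name": "ACME"}, f"unexpected result: {result}"
    assert properties.get("audit.lastRun") == "harness"

    # Negative test: a missing field should raise, not return data.
    try:
        core_logic({}, {})
    except ValueError:
        pass
    else:
        raise AssertionError("negative test did not raise")

    return "all checks passed"
```

In AtomSphere the same roles are played by Message steps (known data), a Process Call step (invocation), and Decision or Business Rules steps (assertion).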

Keeping with the web service example introduced in the previous section, here is an example of a simple test harness that invokes the same subprocess containing the core processing logic.




The results returned from that process can then be inspected against a known set of values. Here is a simple example:




The steps used to verify the result will depend on the objective of the test case and the specifics of the integration process. A more complex verification might use a Business Rules step to perform multiple inspections of the result document’s values, or even query external applications to confirm that records or files were created as expected.
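A Business Rules step applies several independent checks to one document and reports every rule that fails. The same idea in a short Python sketch (the rule names and document fields are hypothetical):

```python
def verify_result(doc):
    """Apply multiple independent rules to a result document and
    collect every violation rather than stopping at the first."""
    rules = [
        ("status is SUCCESS",  lambda d: d.get("status") == "SUCCESS"),
        ("id is present",      lambda d: bool(d.get("id"))),
        ("amount is positive", lambda d: d.get("amount", 0) > 0),
    ]
    # Return the names of all failed rules; an empty list means the
    # document passed every inspection.
    return [name for name, check in rules if not check(doc)]
```

Collecting all violations at once, rather than failing on the first, makes a failed test run much easier to diagnose from the process log.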


Advanced Test Harness Process Ideas

If the test is intended to interact with an external application or relies on external data being available, it is often necessary to perform initialization logic that generates test data before running the test cases. One recommendation is to perform any cleanup from previous tests as part of this initialization logic instead of at the end of the test process. This preserves the records and application state in the external applications for analysis after the test run.


Another technique is to incorporate multiple test cases into a single process so that several use cases are verified in a single execution. With this approach, each individual verification can write to the process log and set a process property flag if a failure is detected. Then, after executing the various test cases, the test harness process can inspect the process property and throw an exception for visibility.
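That flag-then-throw pattern can be sketched in Python as follows (the test case names and log format are hypothetical; the boolean stands in for the process property flag):

```python
def run_suite(test_cases):
    """Run every test case, logging failures and setting a flag,
    then throw a single exception at the end for visibility."""
    process_log = []
    failure_flag = False              # analogous to a process property flag
    for name, case in test_cases:
        try:
            case()
            process_log.append(f"{name}: PASS")
        except Exception as exc:
            failure_flag = True       # record the failure but keep going
            process_log.append(f"{name}: FAIL ({exc})")
    if failure_flag:                  # inspect the flag after all cases ran
        raise RuntimeError("test failures:\n" + "\n".join(process_log))
    return process_log
```

Because the exception is thrown only after every case has run, a single execution surfaces all failures at once while still ending in an error state that monitoring will notice.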


Below is an example of a test harness process that performs initialization logic, executes multiple test cases, and checks for any failures.




Creating Mock Web Service Endpoints

When your destination is a web service endpoint, consider developing AtomSphere web service processes to simulate the functional interaction of the real endpoint. Mock endpoint connections are typically used during initial development and volume testing to validate the behavior of your primary process without sending data to the real endpoint. Not actually sending data can be especially desirable for high volume testing.


Ideas and Considerations

  • Design the mock endpoint processes to simulate the behavior of the real endpoint with respect to request and response payloads and HTTP response codes.

  • Create different mock endpoint processes or allow parameters in your design to achieve different outcomes, such as:

    • Technical success (e.g. HTTP 200) and functional success (e.g. no SOAP fault/application warning in response payload)

    • Technical success (e.g. HTTP 200) but functional failure (e.g. SOAP fault/application warning present in response payload)

    • Technical failure (e.g. non-HTTP 200)

  • If used during performance testing, keep in mind that the response time of the mock endpoints will likely differ from that of the real endpoint. The focus of the test should be the execution duration of the primary process itself, independent of the connector call.

  • Replace the mock endpoint connections with the real connections before deploying to your main change control environment. Alternatively, you could design the process with both the real and mock connections and use a Decision step (perhaps evaluating an extended Process Property) to determine which connector step to execute.


Below is an example mock endpoint web service that expects a custom HTTP header parameter to determine what type of response to return: success, warning, or error.
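The branching logic of such a mock can be sketched outside AtomSphere as a single function keyed on the header value. The header name `X-Test-Outcome` and the payloads below are hypothetical:

```python
def mock_response(outcome):
    """Return (HTTP status, response body) for a requested outcome,
    as read from a custom request header such as X-Test-Outcome."""
    if outcome == "success":      # technical and functional success
        return 200, "<response><status>OK</status></response>"
    if outcome == "warning":      # technical success, functional failure
        return 200, "<response><fault>duplicate record</fault></response>"
    # anything else: technical failure (non-200 status)
    return 500, "<response><error>internal error</error></response>"
```

In a real mock endpoint this function would back the HTTP handler, reading the header from each incoming request so a single deployed mock can exercise all three outcomes.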




Automating Testing

To automate testing, the test harness processes can be organized into a “test suite” and executed sequentially by creating a simple “master” process that invokes each of the test harness processes in succession. Test suites can be organized by interface or project to provide granularity for testing and deployment.

After a round of development changes, the master test suite processes can be deployed to the test environment and executed manually. Alternatively, they could be scheduled to execute automatically, but remember the test processes will still need to be redeployed to pick up new configuration changes.


Another approach for automating testing, especially when publishing web services, is to use an external testing tool such as SoapUI. Test cases and suites can be configured to invoke AtomSphere web service processes with various payloads and execute parallel calls for load testing. With some creativity you can use test harness and utility processes to orchestrate and verify non-web-service integrations as well.


For example, suppose you have a process that extracts known test records from a database and sends them to a web service application. To automate this:

  1. Execute the process through the AtomSphere Execute Process API, or invoke a web service "wrapper" test harness process that in turn invokes the process to be tested.

  2. Invoke a second utility web service process that queries the destination application and returns the results once available.

  3. Perform the comparison with SoapUI against a known result set.
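The execute-then-query-then-compare flow above can also be scripted directly. Below is a hedged Python sketch: the `ExecutionRequest` URL and field names should be confirmed against the current AtomSphere API reference for your account, and the utility process that queries the destination is assumed to exist separately:

```python
import json

# AtomSphere API base URL; verify against the API reference for your account.
BASE = "https://api.boomi.com/api/rest/v1"

def execution_request(account_id, atom_id, process_id):
    """Build the URL and JSON body for an Execute Process API call.
    Field names follow the ExecutionRequest object; confirm them against
    the current AtomSphere API documentation before relying on them."""
    url = f"{BASE}/{account_id}/ExecutionRequest"
    body = {"@type": "ExecutionRequest",
            "atomId": atom_id,
            "processId": process_id}
    return url, json.dumps(body)

def compare(actual_records, expected_records):
    """Step 3: compare records queried from the destination application
    with the known expected set, ignoring order."""
    return sorted(actual_records, key=str) == sorted(expected_records, key=str)
```

The request would be POSTed with your API credentials; step 2 (polling the destination via a utility web service process) sits between building this request and calling `compare`.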


Using Boomi Assure for Platform Regression Testing

Automating the test cases greatly reduces the effort required for regression testing of both process configuration changes and AtomSphere platform updates. AtomSphere updates are released monthly, so reducing the effort of regression testing is important, and greater regression coverage increases confidence in each release.

The Boomi Assure framework is another means to increase regression coverage.


Your use of Boomi Assure provides an additional level of test case coverage for Dell Boomi to include in its new-release regression testing. It is intended to identify regression issues between AtomSphere platform updates; it does not identify regression issues between different versions of your own process development. It should be considered an additional piece of the regression testing toolkit, not a complete substitute for regression testing of your integration processes.


Boomi Assure is included within the AtomSphere platform and allows customers to record and submit process test cases to help Dell Boomi ensure consistency across platform releases. With Boomi Assure, a Test Mode execution is recorded and then can be replayed by Dell Boomi as part of its monthly testing to ensure the same results are obtained with the new AtomSphere code.


A few key concepts about Boomi Assure:

  • It is a voluntary, opt-in feature.

  • The recording captures the document data that is processed, so take care that no sensitive customer data is used for the Boomi Assure recording.

  • Customers cannot run the Boomi Assure tests and are not notified of results. If a regression is encountered, Dell Boomi will address it internally ahead of the release or communicate a mitigation plan in rare cases.

  • Boomi Assure does not currently cover 100% of the steps or functionality within processes, most notably connectors. Because Dell Boomi does not have access to your endpoints, it records the inputs/outputs of any connector calls made during the test mode execution, and uses those recordings for the regression tests.


See Boomi Assure for more information.
