Unit Testing in Boomi

Discussion created by mplatt168095 on Dec 6, 2017

I only recently started using Boomi, and I figured that, given how it works, it should be possible to do some reasonable unit testing, which would make the whole tool much more enjoyable to use.

 

So I thought I'd take one of the examples from the Developer 1 training and try to make it testable.  It's a very simple process that reads from Salesforce and decides whether each record already exists in the database: if it doesn't, it inserts it; if it does, it raises an exception.

 

Prospect Tracking Original Process

 

 

So to make this process unit testable, we need to be able to supply it with all the data it needs, and have it return all the data we need to save or send.

 

So taking Adam's idea (referenced below), I knew I needed a master process to perform the tasks mentioned above.  That came out as the following:

 

Prospect Tracking - Testable Master Process (Refactored)

 

and the sub-process containing the logic (SP_Prospect Tracking_Logic) then becomes:

 

Prospect Tracking - Testable Logic Sub-Process

 

The sub-process now uses the cache in the New Account? Decision step to perform the logic.  This creates a nice by-product (as often seems to happen when restructuring for unit testing): the record lookup data is now in a cache, which will perform better in a real-world scenario with many records.

The sub-process also returns the prepared data back to the calling process, labelled "Insert Prospect", rather than performing the insert directly.
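
If it helps to picture the refactor in conventional code terms, here is a minimal sketch in plain Python of the same idea (the names are hypothetical, not actual Boomi artefacts): the logic is given the cache and the incoming record, and hands back what should be inserted rather than doing the insert itself.

```python
# Sketch of the refactored logic as a pure function (hypothetical names).
# The cache and the incoming record are supplied by the caller, and the
# function returns the data to insert instead of touching the database.

class ProspectExistsError(Exception):
    """Raised when the prospect already exists in the cache."""

def prospect_tracking_logic(record, account_cache):
    """Decide what to do with one Salesforce record.

    record        -- dict with at least an "AccountId" key
    account_cache -- dict of existing accounts keyed by account id
    """
    if record["AccountId"] in account_cache:
        # Existing account: the original process raises an exception here.
        raise ProspectExistsError(record["AccountId"])
    # New account: hand the prepared "Insert Prospect" data back to the caller.
    return {"action": "Insert Prospect", "data": record}
```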

 

So everything is good so far and still works as originally intended.  Yes, it's a lot more complicated than the original, but now we want to test all the paths in the SP_Prospect Tracking_Logic process: when a record is new (it does not exist in the cache), when a record already exists (and an exception gets thrown), and potentially when something unexpected happens (we may want to develop the process later to include better checking, etc.).

 

Prospect Tracking - Unit Tests

 

The first of the above unit tests checks the case where there is nothing in the cache, so you would expect the real process to insert a record.  The second test is for a record that already exists (so the exception is thrown).  In the third test I inserted completely invalid data into the cache, in an attempt to get it to fail at that point.  It didn't; the data flowed through to the logic, which subsequently threw an exception.
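
In conventional unit-testing terms, those three tests correspond to something like the following (a pytest-style sketch against the hypothetical function from the earlier snippet, not the Boomi test processes themselves):

```python
import pytest
# Assumes prospect_tracking_logic / ProspectExistsError from the earlier sketch.

def test_new_record_is_returned_for_insert():
    # Empty cache: the record should come back labelled for insertion.
    result = prospect_tracking_logic({"AccountId": "001"}, account_cache={})
    assert result["action"] == "Insert Prospect"

def test_existing_record_raises_expected_exception():
    # Record already in the cache: the designed exception is thrown.
    cache = {"001": {"AccountId": "001"}}
    with pytest.raises(ProspectExistsError):
        prospect_tracking_logic({"AccountId": "001"}, cache)

def test_invalid_data_fails_inside_the_logic():
    # Completely invalid input is not caught up front; it flows through and
    # only fails once the logic tries to use it (mirroring the third test).
    with pytest.raises(Exception):
        prospect_tracking_logic({"garbage": True}, account_cache={})
```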

 

This exception shows that it is harder to test if you throw exceptions in sub-processes, as there is no context: did the failure occur because the process failed on some invalid test data, or because the exception was part of the expected flow?
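
One way round the lack of context, sketched here in Python terms (the Boomi equivalent might be a distinct error message or a property set before the Exception shape), is to give the "expected" business exceptions their own type so a test can tell them apart from genuine failures:

```python
class ExpectedBusinessError(Exception):
    """Marks exceptions that are part of the designed flow (e.g. duplicate prospect)."""

class ProspectExistsError(ExpectedBusinessError):
    """The earlier sketch's exception, now tagged as an expected outcome."""

def classify_outcome(case):
    """Run a zero-argument test case and label how it finished."""
    try:
        case()
        return "completed"
    except ExpectedBusinessError:
        return "expected-exception"   # part of the designed flow
    except Exception:
        return "unexpected-failure"   # invalid test data or a real bug
```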

 

The other observation is that even for such a simple bit of logic, the unit test process is quite large.  It would be great to reduce this to a single component that lets you set properties for the unit test setup, teardown, process under test, and validation.
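
Reduced to code, that single reusable component might look roughly like this (a hypothetical sketch, not an existing Boomi feature): one runner whose "properties" are the setup, teardown, process under test, and validation.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class UnitTest:
    name: str
    setup: Callable[[], Any]                 # e.g. load the document cache
    process_under_test: Callable[[Any], Any] # the logic sub-process analogue
    validate: Callable[[Any], bool]          # check the returned documents
    teardown: Callable[[], None] = lambda: None

    def run(self) -> bool:
        fixture = self.setup()
        try:
            try:
                outcome = self.process_under_test(fixture)
            except Exception as exc:          # exceptions are outcomes too
                outcome = exc
            return self.validate(outcome)
        finally:
            self.teardown()

def run_suite(tests: List[UnitTest]) -> List[str]:
    """Run every test and return the names of failures (the e-mail-on-error step)."""
    return [t.name for t in tests if not t.run()]
```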

 

However, this is still useful, in that the whole unit test process will e-mail when there is an error.  It could lend itself to a scheduled task to ensure the tests are run frequently to stop regressions, or, better, use the AtomSphere API to query deployments and kick the tests off when one changes.  I want continuous integration if possible.
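
As a sketch of the continuous-integration idea in Python (the account id, Atom id, process id and credentials below are placeholders, and the Deployment and ExecutionRequest object names should be checked against the AtomSphere API reference), a small script could poll deployments and execute the test process whenever something changes:

```python
import time
import requests

BASE = "https://api.boomi.com/api/rest/v1/ACCOUNT_ID"   # placeholder account id
AUTH = ("user@example.com", "api-token")                 # placeholder credentials

def query_deployments():
    # Query the Deployment object; verify the object name and query body
    # against the AtomSphere API reference for your account.
    resp = requests.post(f"{BASE}/Deployment/query", json={}, auth=AUTH)
    resp.raise_for_status()
    return {d["id"]: d for d in resp.json().get("result", [])}

def run_unit_tests(atom_id, test_process_id):
    # Ask the platform to execute the unit-test process on a given Atom.
    body = {"@type": "ExecutionRequest", "atomId": atom_id,
            "processId": test_process_id}
    requests.post(f"{BASE}/ExecutionRequest", json=body, auth=AUTH).raise_for_status()

def watch(atom_id, test_process_id, interval=300):
    seen = query_deployments()
    while True:
        time.sleep(interval)
        current = query_deployments()
        if current.keys() != seen.keys():     # a deployment appeared or disappeared
            run_unit_tests(atom_id, test_process_id)
            seen = current
```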

 

I'm really interested to hear if anyone has ideas to take this further.  A process that let us put together data-driven tests might be worth a look, although generally I prefer the simple-to-understand flows above (with complicated data it could be easy to get the test cases wrong).
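
The data-driven idea could be as simple as a table of cases fed through one runner (again a Python sketch of the shape, reusing the hypothetical function and exception from earlier, not a Boomi component):

```python
# Each row: test name, records pre-loaded into the cache, input record,
# and the outcome we expect from the logic under test.
CASES = [
    ("new prospect is inserted", {}, {"AccountId": "001"}, "Insert Prospect"),
    ("existing prospect raises", {"001": {"AccountId": "001"}},
     {"AccountId": "001"}, "expected-exception"),
    ("invalid record fails in logic", {}, {"garbage": True}, "unexpected-failure"),
]

def run_data_driven(cases):
    failures = []
    for name, cache, record, expected in cases:
        try:
            outcome = prospect_tracking_logic(record, cache)["action"]
        except ProspectExistsError:
            outcome = "expected-exception"
        except Exception:
            outcome = "unexpected-failure"
        if outcome != expected:
            failures.append(name)
    return failures
```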

 

I'd also like to look into how difficult it might be to create mocks or stubs for connectors, and into tracking paths through the processes, so we can easily analyse the coverage achieved by the unit test cases.
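
For the mock/stub idea, one plain-code analogue (hypothetical, since Boomi connectors are not Python objects) is a stub that records every call made to it and returns canned data, which also gives a crude form of path tracking:

```python
class StubConnector:
    """Stands in for a real connector; records calls and returns canned data."""

    def __init__(self, canned_responses):
        self.canned_responses = list(canned_responses)
        self.calls = []                      # a crude record of the path taken

    def query(self, criteria):
        self.calls.append(("query", criteria))
        return self.canned_responses

    def create(self, document):
        self.calls.append(("create", document))
        return {"status": "ok"}

# Usage idea: hand the stub to the logic instead of the real Salesforce or
# database connector, then assert on stub.calls to see which steps ran.
```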

 

I'd love to get to a point where you have the option of doing test-driven development with integrations - that really would be exciting!

 

Respect where it's due - wonderful work, guys.  The following are the documents I referenced to produce this simple solution:

 

Testing Strategies - Adam Arrowsmith

Document Cache - Best Practices and Common Scenarios - Sneha Mani
