Hi, I'm troubleshooting a very slow interface that reads data from disk on the Boomi server and writes it to a third-party SQL database. The Boomi server and the third-party SQL server are in different geographic regions: Boomi runs in North America and the SQL Server is in South America. The payroll vendor's database and my organization are connected over a VPN.
I think the issue is a network bottleneck, as I'm only sending 6,000 records into a single table. Each record has approximately 80 columns. Total time for the process to complete is approximately 18 minutes. The CSV document on disk is approximately 3.2 MB.
Here is my flow:
Data is read from a CSV file on the C: drive of the Boomi Atom. Each line is a unique employee record.
I split the ~6,000 records into batches of 25.
I cleanse the data prior to mapping it to a database profile.
The map is very straightforward: no custom functions, no transformations at all. It is strictly a left-to-right mapping.
Once mapped, I combine the map output (SQL profile) documents into one document.
The database profile is based on a dynamic insert.
The database write operation commits by profile with a batch size of 0.
Connection pooling is not enabled on the database connector. One thing to note: the SQL Server I'm connecting to is SQL Server 2000.
Based on some of the information on Boomi's site, I decided to split the incoming document into batches, combine the map output into one document, and commit by profile with a batch size of 0 so that a single commit happens at the end of the process. I believe that's what I'm doing here. Looking at the Process State, 95% of the time is spent on the database connector; the cleanse shape takes ~10 seconds and the map shape takes less than 10 seconds.
I'm wondering how the process works under the hood. Because I combine the documents prior to the database shape and commit by profile with a batch size of 0, does that mean Boomi issues 6,000 insert statements and then a single commit? Is there a pause between insert statements? Does Boomi wait for any acknowledgements or anything else that would explain how long it takes to write 6,000 records?
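For comparison, here is roughly the pattern I suspect is happening, sketched in plain database-API terms. This is my assumption about the connector's behavior, not Boomi's actual code; it uses Python's sqlite3 as a stand-in for SQL Server, and the table and column names are made up:

```python
import sqlite3

def insert_all(conn, rows):
    """One INSERT per row, then a single COMMIT at the end --
    my understanding of 'commit by profile with batch size 0'."""
    cur = conn.cursor()
    for emp_id, last_name in rows:
        # Over a real network connection, each execute() would be a
        # separate round trip to the server: ~6,000 of them for my data.
        cur.execute(
            "INSERT INTO employee (emp_id, last_name) VALUES (?, ?)",
            (emp_id, last_name),
        )
    conn.commit()  # one commit for the whole set

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER, last_name TEXT)")
insert_all(conn, [(1, "Smith"), (2, "Jones")])
print(conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0])  # → 2
```

If that per-row round trip is what's happening, the single commit at the end would save very little, because it's the 6,000 individual statement acknowledgements that dominate.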
Again, I suspect a network issue, but I'd like to know if there is any back-and-forth communication between Boomi and the database that I'm not aware of that could be eating up the time.
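As a sanity check on the network theory, here's the back-of-envelope math. The 150 ms round-trip latency is an assumed figure for a North America to South America VPN link (I'd measure the real value with ping), not a measurement:

```python
records = 6_000
rtt_seconds = 0.150  # assumed round-trip latency; measure to confirm

# If every INSERT waits for its own acknowledgement:
per_statement_minutes = records * rtt_seconds / 60
print(f"{per_statement_minutes:.0f} minutes")  # → 15 minutes

# If statements were instead batched, say 50 per round trip:
batched_minutes = (records / 50) * rtt_seconds / 60
print(f"{batched_minutes:.1f} minutes")  # → 0.3 minutes
```

At that assumed latency, per-statement round trips alone would account for roughly 15 of my 18 minutes, which is why I suspect the network rather than the map or cleanse shapes.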